Business Resource Planner Guide
For JBoss Developers
Abstract
This guide provides the steps needed for installing Business Resource Planner and instructions for using it.
Chapter 1. Business Resource Planner Introduction
1.1. What is Business Resource Planner?
Business Resource Planner is a lightweight, embeddable planning engine that optimizes business resource usage. Typical use cases include:
- Employee/Patient Rosters: It helps create timetables for nurses and keeps track of patient bed management.
- Educational Timetables: It helps schedule lessons, courses, exams, and conference presentations.
- Shop Schedules: It tracks car assembly lines, machine queue planning, and workforce task planning.
- Cutting Stock: It minimizes waste by reducing the consumption of resources such as paper and steel.
Figure 1.1. Use Case Overview

1.2. Download Business Resource Planner
The UpgradeFromPreviousVersionRecipe.txt file, which is included in every release, lets you easily upgrade to a newer version and quickly deal with any backwards incompatible changes.
Procedure 1.1. Get the release zip and run the examples
- Navigate to the Red Hat Customer Portal and log in with your user credentials.
- Select → → .
- From the Products drop-down menu, select BPM Suite or BRMS Platform.
- From the Version drop-down menu, select the product version 6.2.
- Select the Standalone or Deployable file and then click download.
- Unzip the files.
- Open the examples directory and run the following script:
Linux or Mac:
$ cd examples
$ ./runExamples.sh
Windows:
$ cd examples
$ runExamples.bat
Figure 1.2. Download Business Resource Planner

1.3. Run the Examples
Procedure 1.2. To run the examples in your favorite IDE:
- Configure your IDE:
- In IntelliJ and NetBeans, just open the file examples/sources/pom.xml as a new project; the Maven integration will take care of the rest.
- In Eclipse, open a new project for the directory examples/sources.
- Add all the jars to the classpath from the directory binaries and the directory examples/binaries, except for the file examples/binaries/optaplanner-examples-*.jar.
- Add the Java source directory src/main/java and the Java resources directory src/main/resources.
- Next, create a run configuration:
- Main class: org.optaplanner.examples.app.OptaPlannerExamplesApp
- VM parameters (optional): -Xmx512M -server
- Working directory: examples (this is the directory that contains the directory data)
- Run that run configuration.
Procedure 1.3. Use Business Resource Planner with Maven, Gradle, Ivy, Buildr or ANT:
- Get the Business Resource Planner jars at the central maven repository (and also at the JBoss maven repository).
- If you use Maven, add a dependency to optaplanner-core in your project's pom.xml:
<dependency>
  <groupId>org.optaplanner</groupId>
  <artifactId>optaplanner-core</artifactId>
  <version>...</version>
</dependency>
To identify the latest version, check the central maven repository. This is similar for Gradle, Ivy, and Buildr.
- If you're still using ANT (without Ivy), copy all the jars from the download zip's binaries directory and manually verify that your classpath doesn't contain duplicate jars.
The binaries directory contains far more jars than optaplanner-core actually uses. It also contains the jars used by other modules, such as optaplanner-benchmark. Check the pom.xml files to determine the minimal dependency set for a specific version of a specific module.
Procedure 1.4. Build Business Resource Planner from source:
- Clone the source code from GitHub:
$ git clone git@github.com:droolsjbpm/optaplanner.git optaplanner
...
- Do a Maven 3 build:
$ cd optaplanner
$ mvn -DskipTests clean install
...
- Run this command to run any example directly from the command line:
$ cd optaplanner-examples
$ mvn exec:exec
...
Chapter 2. Quick Start
2.1. Cloud Balancing Tutorial
2.1.1. Problem Description
Assign each process to a computer, subject to the following constraints:
- Every computer must be able to handle the minimum hardware requirements of the sum of its processes:
- The CPU power of a computer must be at least the sum of the CPU power required by the processes assigned to that computer.
- The RAM memory of a computer must be at least the sum of the RAM memory required by the processes assigned to that computer.
- The network bandwidth of a computer must be at least the sum of the network bandwidth required by the processes assigned to that computer.
- Each computer that has one or more processes assigned incurs a maintenance cost (which is fixed per computer).
- Minimize the total maintenance cost.

2.1.2. Problem Size
Table 2.1. Cloud Balancing Problem Size
| Problem Size | Computers | Processes | Search Space |
|---|---|---|---|
| 2computers-6processes | 2 | 6 | 64 |
| 3computers-9processes | 3 | 9 | 10^4 |
| 4computers-12processes | 4 | 12 | 10^7 |
| 100computers-300processes | 100 | 300 | 10^600 |
| 200computers-600processes | 200 | 600 | 10^1380 |
| 400computers-1200processes | 400 | 1200 | 10^3122 |
| 800computers-2400processes | 800 | 2400 | 10^6967 |
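The search space figures in the table follow directly from the model: each of the processes can independently be assigned to any of the computers, giving computers^processes candidate assignments. A quick sanity check (this helper class is not part of the example code):

```java
import java.math.BigInteger;

public class SearchSpace {
    /** Number of possible assignments: computers ^ processes. */
    static BigInteger size(int computers, int processes) {
        return BigInteger.valueOf(computers).pow(processes);
    }

    /** Order of magnitude: number of decimal digits minus one. */
    static int magnitude(int computers, int processes) {
        return size(computers, processes).toString().length() - 1;
    }

    public static void main(String[] args) {
        System.out.println(size(2, 6));           // 64
        System.out.println(magnitude(100, 300));  // 600
        System.out.println(magnitude(800, 2400)); // 6967
    }
}
```

Even the smallest data set has too many combinations to enumerate naively, which is why an optimization engine is needed.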
2.1.3. Domain Model Design
Computer: represents a computer with certain hardware (CPU power, RAM memory, network bandwidth) and maintenance cost.
Process: represents a process with a demand. Needs to be assigned to a Computer by Planner.
CloudBalance: represents a problem. Contains every Computer and Process for a certain data set.

- Planning entity: the class (or classes) that changes during planning. In this example, it is the class Process.
- Planning variable: the property (or properties) of a planning entity class that changes during planning. In this example, it is the property computer on the class Process.
- Solution: the class that represents a data set and contains all planning entities. In this example, that is the class CloudBalance.
2.1.4. Main Method
The tutorial's main class is org.optaplanner.examples.cloudbalancing.app.CloudBalancingHelloWorld. By default, it is configured to run for 120 seconds. It will execute this code:
Example 2.1. CloudBalancingHelloWorld.java
public class CloudBalancingHelloWorld {
public static void main(String[] args) {
// Build the Solver
SolverFactory solverFactory = SolverFactory.createFromXmlResource(
"org/optaplanner/examples/cloudbalancing/solver/cloudBalancingSolverConfig.xml");
Solver solver = solverFactory.buildSolver();
// Load a problem with 400 computers and 1200 processes
CloudBalance unsolvedCloudBalance = new CloudBalancingGenerator().createCloudBalance(400, 1200);
// Solve the problem
solver.solve(unsolvedCloudBalance);
CloudBalance solvedCloudBalance = (CloudBalance) solver.getBestSolution();
// Display the result
System.out.println("\nSolved cloudBalance with 400 computers and 1200 processes:\n"
+ toDisplayString(solvedCloudBalance));
}
...
}
- Build the Solver based on a solver configuration (in this case an XML file from the classpath).
SolverFactory solverFactory = SolverFactory.createFromXmlResource(
        "org/optaplanner/examples/cloudbalancing/solver/cloudBalancingSolverConfig.xml");
Solver solver = solverFactory.buildSolver();
- Load the problem. CloudBalancingGenerator generates a random problem: you will replace this with a class that loads a real problem, for example from a database.
CloudBalance unsolvedCloudBalance = new CloudBalancingGenerator().createCloudBalance(400, 1200);
- Solve the problem.
solver.solve(unsolvedCloudBalance);
CloudBalance solvedCloudBalance = (CloudBalance) solver.getBestSolution();
- Display the result.
System.out.println("\nSolved cloudBalance with 400 computers and 1200 processes:\n"
        + toDisplayString(solvedCloudBalance));
The only non-trivial part is building the Solver, as detailed in the next section.
2.1.5. Solver Configuration
Example 2.2. cloudBalancingSolverConfig.xml
<?xml version="1.0" encoding="UTF-8"?>
<solver>
<!-- Domain model configuration -->
<scanAnnotatedClasses/>
<!-- Score configuration -->
<scoreDirectorFactory>
<scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
<easyScoreCalculatorClass>org.optaplanner.examples.cloudbalancing.solver.score.CloudBalancingEasyScoreCalculator</easyScoreCalculatorClass>
<!--<scoreDrl>org/optaplanner/examples/cloudbalancing/solver/cloudBalancingScoreRules.drl</scoreDrl>-->
</scoreDirectorFactory>
<!-- Optimization algorithms configuration -->
<termination>
<secondsSpentLimit>30</secondsSpentLimit>
</termination>
</solver>
- Domain model configuration: What can Planner change? We need to make Planner aware of our domain classes. With this configuration, it will automatically scan all classes in your classpath (for an @PlanningEntity or @PlanningSolution annotation):
<scanAnnotatedClasses/>
- Score configuration: How should Planner optimize the planning variables? What is our goal? Since we have hard and soft constraints, we use a HardSoftScore. But we also need to tell Planner how to calculate the score, depending on our business requirements. Further down, we will look into two alternatives to calculate the score: using a simple Java implementation, or using Drools DRL.
<scoreDirectorFactory>
  <scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
  <easyScoreCalculatorClass>org.optaplanner.examples.cloudbalancing.solver.score.CloudBalancingEasyScoreCalculator</easyScoreCalculatorClass>
  <!--<scoreDrl>org/optaplanner/examples/cloudbalancing/solver/cloudBalancingScoreRules.drl</scoreDrl>-->
</scoreDirectorFactory>
- Optimization algorithms configuration: How should Planner optimize it? In this case, we use the default optimization algorithms (because no explicit optimization algorithms are configured) for 30 seconds:
<termination>
  <secondsSpentLimit>30</secondsSpentLimit>
</termination>
Planner should get a good result in seconds (and even in less than 15 milliseconds with real-time planning), but the more time it has, the better the result will be. Advanced use cases will likely use different termination criteria than a hard time limit. The default algorithms will already easily surpass human planners and most in-house implementations. Use the Benchmarker to power tweak and get even better results.
2.1.6. Domain Model Implementation
2.1.6.1. The Computer Class
The Computer class is a POJO (Plain Old Java Object). Usually, you will have several classes like this.
Example 2.3. CloudComputer.java
public class CloudComputer ... {
private int cpuPower;
private int memory;
private int networkBandwidth;
private int cost;
... // getters
}
2.1.6.2. The Process Class
The Process class is particularly important. We need to tell Planner that it can change the field computer, so we annotate the class with @PlanningEntity and the getter getComputer() with @PlanningVariable:
Example 2.4. CloudProcess.java
@PlanningEntity(...)
public class CloudProcess ... {
private int requiredCpuPower;
private int requiredMemory;
private int requiredNetworkBandwidth;
private CloudComputer computer;
... // getters
@PlanningVariable(valueRangeProviderRefs = {"computerRange"})
public CloudComputer getComputer() {
return computer;
}
public void setComputer(CloudComputer computer) {
this.computer = computer;
}
// ************************************************************************
// Complex methods
// ************************************************************************
...
}
The possible values for the field computer are retrieved from a method on the Solution implementation: CloudBalance.getComputerList(), which returns a list of all computers in the current data set. The valueRangeProviderRefs property is used to pass this information to Planner.
2.1.6.3. The CloudBalance Class
The CloudBalance class implements the Solution interface. It holds a list of all computers and processes. We need to tell Planner how to retrieve the collection of processes that it can change, so we annotate the getter getProcessList with @PlanningEntityCollectionProperty.
The CloudBalance class also has a property score, which is the Score of that Solution instance in its current state:
Example 2.5. CloudBalance.java
@PlanningSolution
public class CloudBalance ... implements Solution<HardSoftScore> {
private List<CloudComputer> computerList;
private List<CloudProcess> processList;
private HardSoftScore score;
@ValueRangeProvider(id = "computerRange")
public List<CloudComputer> getComputerList() {
return computerList;
}
@PlanningEntityCollectionProperty
public List<CloudProcess> getProcessList() {
return processList;
}
...
public HardSoftScore getScore() {
return score;
}
public void setScore(HardSoftScore score) {
this.score = score;
}
// ************************************************************************
// Complex methods
// ************************************************************************
public Collection<? extends Object> getProblemFacts() {
List<Object> facts = new ArrayList<Object>();
facts.addAll(computerList);
// Do not add the planning entity's (processList) because that will be done automatically
return facts;
}
...
}
The getProblemFacts() method is only needed for score calculation with Drools. It is not needed for the other score calculation types.
2.1.7. Score Configuration
Planner will look for the Solution with the highest Score. This example uses a HardSoftScore, which means Planner will look for the solution with no hard constraints broken (fulfill hardware requirements) and as few soft constraints broken as possible (minimize maintenance cost).
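The ordering a HardSoftScore imposes can be sketched in a few lines of plain Java. This is a simplified stand-in for Planner's real HardSoftScore class, shown only to make the lexicographic rule concrete: the hard score is compared first, and the soft score only breaks ties.

```java
public class HardSoft implements Comparable<HardSoft> {
    final int hard; // negative when hard constraints are broken
    final int soft; // negative maintenance cost

    HardSoft(int hard, int soft) { this.hard = hard; this.soft = soft; }

    @Override
    public int compareTo(HardSoft other) {
        // Hard score dominates: a feasible solution always beats an infeasible one.
        if (hard != other.hard) {
            return Integer.compare(hard, other.hard);
        }
        return Integer.compare(soft, other.soft);
    }

    public static void main(String[] args) {
        HardSoft infeasible = new HardSoft(-1, 0);  // breaks one hard constraint
        HardSoft expensive = new HardSoft(0, -900); // feasible but costly
        HardSoft cheap = new HardSoft(0, -720);     // feasible and cheaper
        System.out.println(expensive.compareTo(infeasible) > 0); // true
        System.out.println(cheap.compareTo(expensive) > 0);      // true
    }
}
```

Because the hard score dominates, a feasible but expensive assignment always outranks an infeasible cheap one.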

The score can be calculated in several ways:
- Easy Java
- Incremental Java
- Drools
2.1.7.1. Easy Java Score Configuration
One way to calculate the score is to implement the interface EasyScoreCalculator in plain Java:
<scoreDirectorFactory>
<scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
<easyScoreCalculatorClass>org.optaplanner.examples.cloudbalancing.solver.score.CloudBalancingEasyScoreCalculator</easyScoreCalculatorClass>
</scoreDirectorFactory>
Just implement the calculateScore(Solution) method to return a HardSoftScore instance.
Example 2.6. CloudBalancingEasyScoreCalculator.java
public class CloudBalancingEasyScoreCalculator implements EasyScoreCalculator<CloudBalance> {
/**
* A very simple implementation. The double loop can easily be removed by using Maps as shown in
* {@link CloudBalancingMapBasedEasyScoreCalculator#calculateScore(CloudBalance)}.
*/
public HardSoftScore calculateScore(CloudBalance cloudBalance) {
int hardScore = 0;
int softScore = 0;
for (CloudComputer computer : cloudBalance.getComputerList()) {
int cpuPowerUsage = 0;
int memoryUsage = 0;
int networkBandwidthUsage = 0;
boolean used = false;
// Calculate usage
for (CloudProcess process : cloudBalance.getProcessList()) {
if (computer.equals(process.getComputer())) {
cpuPowerUsage += process.getRequiredCpuPower();
memoryUsage += process.getRequiredMemory();
networkBandwidthUsage += process.getRequiredNetworkBandwidth();
used = true;
}
}
// Hard constraints
int cpuPowerAvailable = computer.getCpuPower() - cpuPowerUsage;
if (cpuPowerAvailable < 0) {
hardScore += cpuPowerAvailable;
}
int memoryAvailable = computer.getMemory() - memoryUsage;
if (memoryAvailable < 0) {
hardScore += memoryAvailable;
}
int networkBandwidthAvailable = computer.getNetworkBandwidth() - networkBandwidthUsage;
if (networkBandwidthAvailable < 0) {
hardScore += networkBandwidthAvailable;
}
// Soft constraints
if (used) {
softScore -= computer.getCost();
}
}
return HardSoftScore.valueOf(hardScore, softScore);
}
}
Even if we optimize this by using Maps to iterate through the processList only once, it is still slow because it does not do incremental score calculation. To fix that, either use an incremental Java score function or a Drools score function. Let's take a look at the latter.
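The single-pass Map idea referenced above (CloudBalancingMapBasedEasyScoreCalculator) can be sketched independently of Planner. The types here are deliberately simplified stand-ins, each process reduced to a (computer name, required CPU power) pair:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SinglePassUsage {
    /**
     * Accumulates required CPU power per computer in a single pass over the
     * processes, instead of the nested computers-by-processes loop above.
     */
    static Map<String, Integer> cpuUsagePerComputer(List<Map.Entry<String, Integer>> processes) {
        Map<String, Integer> usage = new HashMap<>();
        for (Map.Entry<String, Integer> process : processes) {
            // Sum the required CPU power of all processes assigned to the same computer.
            usage.merge(process.getKey(), process.getValue(), Integer::sum);
        }
        return usage;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> processes = List.of(
                Map.entry("computer1", 3),
                Map.entry("computer1", 2),
                Map.entry("computer2", 4));
        Map<String, Integer> usage = cpuUsagePerComputer(processes);
        System.out.println(usage.get("computer1")); // 5
        System.out.println(usage.get("computer2")); // 4
    }
}
```

This removes the double loop, but the whole map is still rebuilt on every score calculation, which is why the text recommends incremental calculation instead.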
2.1.7.2. Drools Score Configuration
To use the Drools rule engine as a score function, add a scoreDrl resource in the classpath:
<scoreDirectorFactory>
<scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
<scoreDrl>org/optaplanner/examples/cloudbalancing/solver/cloudBalancingScoreRules.drl</scoreDrl>
</scoreDirectorFactory>
Example 2.7. cloudBalancingScoreRules.drl - Hard Constraints
...
import org.optaplanner.examples.cloudbalancing.domain.CloudBalance;
import org.optaplanner.examples.cloudbalancing.domain.CloudComputer;
import org.optaplanner.examples.cloudbalancing.domain.CloudProcess;
global HardSoftScoreHolder scoreHolder;
// ############################################################################
// Hard constraints
// ############################################################################
rule "requiredCpuPowerTotal"
when
$computer : CloudComputer($cpuPower : cpuPower)
$requiredCpuPowerTotal : Number(intValue > $cpuPower) from accumulate(
CloudProcess(
computer == $computer,
$requiredCpuPower : requiredCpuPower),
sum($requiredCpuPower)
)
then
scoreHolder.addHardConstraintMatch(kcontext, $cpuPower - $requiredCpuPowerTotal.intValue());
end
rule "requiredMemoryTotal"
...
end
rule "requiredNetworkBandwidthTotal"
...
end
Example 2.8. cloudBalancingScoreRules.drl - Soft Constraints
// ############################################################################
// Soft constraints
// ############################################################################
rule "computerCost"
when
$computer : CloudComputer($cost : cost)
exists CloudProcess(computer == $computer)
then
scoreHolder.addSoftConstraintMatch(kcontext, - $cost);
end
2.1.8. Beyond this Tutorial
- Each Process belongs to a Service. A computer might crash, so processes running the same service should be assigned to different computers.
- Each Computer is located in a Building. A building might burn down, so processes of the same services should be assigned to computers in different buildings.
Chapter 3. Use Cases and Examples
3.1. Examples Overview
All the examples can be found in the directory examples/sources and also in git under optaplanner/optaplanner-examples.
Table 3.1. Examples Overview
The distribution ships the following examples:
- N queens
- Cloud balancing
- Traveling salesman
- Dinner party
- Tennis club scheduling
- Course timetabling
- Machine reassignment
- Vehicle routing
- Vehicle routing with time windows
- Project job scheduling
- Hospital bed planning
- Exam timetabling
- Employee rostering
- Traveling tournament
- Cheap time scheduling
- Investment
A realistic competition is an official, independent competition:
- that clearly defines a real-world use case
- with real-world constraints
- with multiple, real-world datasets
- that expects reproducible results within a specific time limit on specific hardware
- that has had serious participation from the academic and/or enterprise Operations Research community
3.2. Basic Examples
3.2.1. N Queens
3.2.1.1. Problem Description

- Use a chessboard of n columns and n rows.
- Place n queens on the chessboard.
- No 2 queens can attack each other. A queen can attack any other queen on the same horizontal, vertical or diagonal line.
Figure 3.1. A Wrong Solution for the 4 Queens Puzzle

Queens A1 and B0 can attack each other (so can queens B0 and D0). Removing queen B0 would respect the "no 2 queens can attack each other" constraint, but would break the "place n queens" constraint.
Figure 3.2. A Correct Solution for the 4 Queens Puzzle

3.2.1.2. Problem Size
4queens has 4 queens with a search space of 256. 8queens has 8 queens with a search space of 10^7. 16queens has 16 queens with a search space of 10^19. 32queens has 32 queens with a search space of 10^48. 64queens has 64 queens with a search space of 10^115. 256queens has 256 queens with a search space of 10^616.
3.2.1.3. Domain Model
public class Column {
    private int index;
    // ... getters and setters
}

public class Row {
    private int index;
    // ... getters and setters
}

public class Queen {
    private Column column;
    private Row row;
    public int getAscendingDiagonalIndex() {...}
    public int getDescendingDiagonalIndex() {...}
    // ... getters and setters
}

A Queen instance has a Column (for example: 0 is column A, 1 is column B, ...) and a Row (its row, for example: 0 is row 0, 1 is row 1, ...). Based on the column and the row, the ascending diagonal line as well as the descending diagonal line can be calculated. The column and row indexes start from the upper left corner of the chessboard.
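With those diagonal indexes, attack detection needs no board scan: two queens (in different columns) attack each other exactly when they share a row, an ascending diagonal, or a descending diagonal. A self-contained sketch; the standalone class and method names below are illustrative, not Planner's:

```java
public class QueenDemo {
    // Indexes counted from the upper left corner, as in the domain model above.
    static int ascendingDiagonalIndex(int column, int row) {
        return column + row; // constant along an ascending diagonal
    }

    static int descendingDiagonalIndex(int column, int row) {
        return column - row; // constant along a descending diagonal
    }

    /** True when queens at (c1, r1) and (c2, r2) can attack each other. */
    static boolean attack(int c1, int r1, int c2, int r2) {
        return r1 == r2
                || ascendingDiagonalIndex(c1, r1) == ascendingDiagonalIndex(c2, r2)
                || descendingDiagonalIndex(c1, r1) == descendingDiagonalIndex(c2, r2);
    }

    public static void main(String[] args) {
        // A1 (column 0, row 1) and B0 (column 1, row 0) share ascending diagonal 1.
        System.out.println(attack(0, 1, 1, 0)); // true
        // B0 and D0 share row 0.
        System.out.println(attack(1, 0, 3, 0)); // true
        // A1 and C2 share no row or diagonal.
        System.out.println(attack(0, 1, 2, 2)); // false
    }
}
```

For the wrong solution of Figure 3.1, this flags the A1/B0 pair (shared ascending diagonal) and the B0/D0 pair (shared row).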
public class NQueens implements Solution<SimpleScore> {
    private int n;
    private List<Column> columnList;
    private List<Row> rowList;
    private List<Queen> queenList;
    private SimpleScore score;
    // ... getters and setters
}

An NQueens instance contains a list of all Queen instances. It is the Solution implementation which will be supplied to, solved by, and retrieved from the Solver. Notice that in the 4 queens example, NQueens's getN() method will always return 4.
Table 3.2. A Solution for 4 Queens Shown in the Domain Model
| A solution | Queen | columnIndex | rowIndex | ascendingDiagonalIndex (columnIndex + rowIndex) | descendingDiagonalIndex (columnIndex - rowIndex) |
|---|---|---|---|---|---|
| | A1 | 0 | 1 | 1 (**) | -1 |
| | B0 | 1 | 0 (*) | 1 (**) | 1 |
| | C2 | 2 | 2 | 4 | 0 |
| | D0 | 3 | 0 (*) | 3 | 3 |
(*) These queens share the same row. (**) These queens share the same ascending diagonal.
3.2.2. Cloud Balancing
3.2.3. Traveling Salesman (TSP - Traveling Salesman Problem)
3.2.3.1. Problem Description
3.2.3.2. Problem Size
dj38 has 38 cities with a search space of 10^58. europe40 has 40 cities with a search space of 10^62. st70 has 70 cities with a search space of 10^126. pcb442 has 442 cities with a search space of 10^1166. lu980 has 980 cities with a search space of 10^2927.
3.2.3.3. Problem Difficulty

3.2.4. Dinner Party
3.2.4.1. Problem Description
Miss Manners is throwing another dinner party.
- This time she invited 144 guests and prepared 12 round tables with 12 seats each.
- Every guest should sit next to someone (left and right) of the opposite gender.
- And that neighbour should have at least one hobby in common with the guest.
- At every table, there should be 2 politicians, 2 doctors, 2 socialites, 2 coaches, 2 teachers and 2 programmers.
- And the 2 politicians, 2 doctors, 2 coaches and 2 programmers shouldn't be the same kind at a table.
3.2.4.2. Problem Size
wedding01 has 18 jobs, 144 guests, 288 hobby practicians, 12 tables and 144 seats with a search space of 10^310.
3.2.5. Tennis Club Scheduling
3.2.5.1. Problem Description
- Conflict: A team can only play once per day.
- Unavailability: Some teams are unavailable on some dates.
- Fair assignment: All teams should play an (almost) equal number of times.
- Even confrontation: Each team should play against every other team an equal number of times.
3.2.5.2. Problem Size
munich-7teams has 7 teams, 18 days, 12 unavailabilityPenalties and 72 teamAssignments with a search space of 10^60.
3.2.5.3. Domain Model

3.3. Real Examples
3.3.1. Course Timetabling (ITC 2007 Track 3 - Curriculum Course Scheduling)
3.3.1.1. Problem Description
- Teacher conflict: A teacher must not have 2 lectures in the same period.
- Curriculum conflict: A curriculum must not have 2 lectures in the same period.
- Room occupancy: 2 lectures must not be in the same room in the same period.
- Unavailable period (specified per dataset): A specific lecture must not be assigned to a specific period.
- Room capacity: A room's capacity should not be less than the number of students in its lecture.
- Minimum working days: Lectures of the same course should be spread out into a minimum number of days.
- Curriculum compactness: Lectures belonging to the same curriculum should be adjacent to each other (so in consecutive periods).
- Room stability: Lectures of the same course should be assigned the same room.
3.3.1.2. Problem Size
comp01 has 24 teachers, 14 curricula, 30 courses, 160 lectures, 30 periods, 6 rooms and 53 unavailable period constraints with a search space of 10^360. comp02 has 71 teachers, 70 curricula, 82 courses, 283 lectures, 25 periods, 16 rooms and 513 unavailable period constraints with a search space of 10^736. comp03 has 61 teachers, 68 curricula, 72 courses, 251 lectures, 25 periods, 16 rooms and 382 unavailable period constraints with a search space of 10^653. comp04 has 70 teachers, 57 curricula, 79 courses, 286 lectures, 25 periods, 18 rooms and 396 unavailable period constraints with a search space of 10^758. comp05 has 47 teachers, 139 curricula, 54 courses, 152 lectures, 36 periods, 9 rooms and 771 unavailable period constraints with a search space of 10^381. comp06 has 87 teachers, 70 curricula, 108 courses, 361 lectures, 25 periods, 18 rooms and 632 unavailable period constraints with a search space of 10^957. comp07 has 99 teachers, 77 curricula, 131 courses, 434 lectures, 25 periods, 20 rooms and 667 unavailable period constraints with a search space of 10^1171. comp08 has 76 teachers, 61 curricula, 86 courses, 324 lectures, 25 periods, 18 rooms and 478 unavailable period constraints with a search space of 10^859. comp09 has 68 teachers, 75 curricula, 76 courses, 279 lectures, 25 periods, 18 rooms and 405 unavailable period constraints with a search space of 10^740. comp10 has 88 teachers, 67 curricula, 115 courses, 370 lectures, 25 periods, 18 rooms and 694 unavailable period constraints with a search space of 10^981. comp11 has 24 teachers, 13 curricula, 30 courses, 162 lectures, 45 periods, 5 rooms and 94 unavailable period constraints with a search space of 10^381. comp12 has 74 teachers, 150 curricula, 88 courses, 218 lectures, 36 periods, 11 rooms and 1368 unavailable period constraints with a search space of 10^566. 
comp13 has 77 teachers, 66 curricula, 82 courses, 308 lectures, 25 periods, 19 rooms and 468 unavailable period constraints with a search space of 10^824. comp14 has 68 teachers, 60 curricula, 85 courses, 275 lectures, 25 periods, 17 rooms and 486 unavailable period constraints with a search space of 10^722.
3.3.1.3. Domain Model

3.3.2. Machine Reassignment (Google ROADEF 2012)
3.3.2.1. Problem Description
- Maximum capacity: The maximum capacity for each resource for each machine must not be exceeded.
- Conflict: Processes of the same service must run on distinct machines.
- Spread: Processes of the same service must be spread out across locations.
- Dependency: The processes of a service depending on another service must run in the neighborhood of a process of the other service.
- Transient usage: Some resources are transient and count towards the maximum capacity of both the original machine and the newly assigned machine.
- Load: The safety capacity for each resource for each machine should not be exceeded.
- Balance: Leave room for future assignments by balancing the available resources on each machine.
- Process move cost: A process has a move cost.
- Service move cost: A service has a move cost.
- Machine move cost: Moving a process from machine A to machine B has another A-B specific move cost.
3.3.2.2. Problem Size
model_a1_1 has 2 resources, 1 neighborhoods, 4 locations, 4 machines, 79 services, 100 processes and 1 balancePenalties with a search space of 10^60. model_a1_2 has 4 resources, 2 neighborhoods, 4 locations, 100 machines, 980 services, 1000 processes and 0 balancePenalties with a search space of 10^2000. model_a1_3 has 3 resources, 5 neighborhoods, 25 locations, 100 machines, 216 services, 1000 processes and 0 balancePenalties with a search space of 10^2000. model_a1_4 has 3 resources, 50 neighborhoods, 50 locations, 50 machines, 142 services, 1000 processes and 1 balancePenalties with a search space of 10^1698. model_a1_5 has 4 resources, 2 neighborhoods, 4 locations, 12 machines, 981 services, 1000 processes and 1 balancePenalties with a search space of 10^1079. model_a2_1 has 3 resources, 1 neighborhoods, 1 locations, 100 machines, 1000 services, 1000 processes and 0 balancePenalties with a search space of 10^2000. model_a2_2 has 12 resources, 5 neighborhoods, 25 locations, 100 machines, 170 services, 1000 processes and 0 balancePenalties with a search space of 10^2000. model_a2_3 has 12 resources, 5 neighborhoods, 25 locations, 100 machines, 129 services, 1000 processes and 0 balancePenalties with a search space of 10^2000. model_a2_4 has 12 resources, 5 neighborhoods, 25 locations, 50 machines, 180 services, 1000 processes and 1 balancePenalties with a search space of 10^1698. model_a2_5 has 12 resources, 5 neighborhoods, 25 locations, 50 machines, 153 services, 1000 processes and 0 balancePenalties with a search space of 10^1698. model_b_1 has 12 resources, 5 neighborhoods, 10 locations, 100 machines, 2512 services, 5000 processes and 0 balancePenalties with a search space of 10^10000. model_b_2 has 12 resources, 5 neighborhoods, 10 locations, 100 machines, 2462 services, 5000 processes and 1 balancePenalties with a search space of 10^10000. 
model_b_3 has 6 resources, 5 neighborhoods, 10 locations, 100 machines, 15025 services, 20000 processes and 0 balancePenalties with a search space of 10^40000. model_b_4 has 6 resources, 5 neighborhoods, 50 locations, 500 machines, 1732 services, 20000 processes and 1 balancePenalties with a search space of 10^53979. model_b_5 has 6 resources, 5 neighborhoods, 10 locations, 100 machines, 35082 services, 40000 processes and 0 balancePenalties with a search space of 10^80000. model_b_6 has 6 resources, 5 neighborhoods, 50 locations, 200 machines, 14680 services, 40000 processes and 1 balancePenalties with a search space of 10^92041. model_b_7 has 6 resources, 5 neighborhoods, 50 locations, 4000 machines, 15050 services, 40000 processes and 1 balancePenalties with a search space of 10^144082. model_b_8 has 3 resources, 5 neighborhoods, 10 locations, 100 machines, 45030 services, 50000 processes and 0 balancePenalties with a search space of 10^100000. model_b_9 has 3 resources, 5 neighborhoods, 100 locations, 1000 machines, 4609 services, 50000 processes and 1 balancePenalties with a search space of 10^150000. model_b_10 has 3 resources, 5 neighborhoods, 100 locations, 5000 machines, 4896 services, 50000 processes and 1 balancePenalties with a search space of 10^184948.
3.3.2.3. Domain Model

3.3.3. Vehicle Routing
3.3.3.1. Problem Description

- Vehicle capacity: a vehicle cannot carry more items than its capacity.
- Time windows (only in CVRPTW):
- Travel time: Traveling from one location to another takes time.
- Customer service duration: a vehicle must stay at the customer for the length of the service duration.
- Customer ready time: a vehicle may arrive before the customer's ready time, but it must wait until the ready time before servicing.
- Customer due time: a vehicle must arrive on time, before the customer's due time.
- Total distance: minimize the total distance driven (fuel consumption) of all vehicles.
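The three time-window rules above combine into one recurrence per visit: a vehicle that arrives before the customer's ready time waits, then stays for the service duration. A minimal sketch (the class and method names are illustrative, not from the example code; all times are in the same assumed unit):

```java
public class TimeWindowDemo {
    /** Departure time from a customer, given when the vehicle arrives at it. */
    static long departureTime(long arrivalTime, long readyTime, long serviceDuration) {
        // Wait until the customer's ready time if the vehicle arrives early.
        long serviceStart = Math.max(arrivalTime, readyTime);
        return serviceStart + serviceDuration;
    }

    public static void main(String[] args) {
        // Arrive at 8, customer ready at 10, service takes 3: wait, then depart at 13.
        System.out.println(departureTime(8, 10, 3));  // 13
        // Arrive at 12, after the ready time of 10: depart at 15.
        System.out.println(departureTime(12, 10, 3)); // 15
    }
}
```

The due-time hard constraint then simply checks that each arrival time does not exceed that customer's due time.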
3.3.3.2. Problem Size
A-n32-k5 has 1 depots, 5 vehicles and 31 customers with a search space of 10^46. A-n33-k5 has 1 depots, 5 vehicles and 32 customers with a search space of 10^48. A-n33-k6 has 1 depots, 6 vehicles and 32 customers with a search space of 10^48. A-n34-k5 has 1 depots, 5 vehicles and 33 customers with a search space of 10^50. A-n36-k5 has 1 depots, 5 vehicles and 35 customers with a search space of 10^54. A-n37-k5 has 1 depots, 5 vehicles and 36 customers with a search space of 10^56. A-n37-k6 has 1 depots, 6 vehicles and 36 customers with a search space of 10^56. A-n38-k5 has 1 depots, 5 vehicles and 37 customers with a search space of 10^58. A-n39-k5 has 1 depots, 5 vehicles and 38 customers with a search space of 10^60. A-n39-k6 has 1 depots, 6 vehicles and 38 customers with a search space of 10^60. A-n44-k7 has 1 depots, 7 vehicles and 43 customers with a search space of 10^70. A-n45-k6 has 1 depots, 6 vehicles and 44 customers with a search space of 10^72. A-n45-k7 has 1 depots, 7 vehicles and 44 customers with a search space of 10^72. A-n46-k7 has 1 depots, 7 vehicles and 45 customers with a search space of 10^74. A-n48-k7 has 1 depots, 7 vehicles and 47 customers with a search space of 10^78. A-n53-k7 has 1 depots, 7 vehicles and 52 customers with a search space of 10^89. A-n54-k7 has 1 depots, 7 vehicles and 53 customers with a search space of 10^91. A-n55-k9 has 1 depots, 9 vehicles and 54 customers with a search space of 10^93. A-n60-k9 has 1 depots, 9 vehicles and 59 customers with a search space of 10^104. A-n61-k9 has 1 depots, 9 vehicles and 60 customers with a search space of 10^106. A-n62-k8 has 1 depots, 8 vehicles and 61 customers with a search space of 10^108. A-n63-k10 has 1 depots, 10 vehicles and 62 customers with a search space of 10^111. A-n63-k9 has 1 depots, 9 vehicles and 62 customers with a search space of 10^111. A-n64-k9 has 1 depots, 9 vehicles and 63 customers with a search space of 10^113. 
A-n65-k9 has 1 depots, 9 vehicles and 64 customers with a search space of 10^115. A-n69-k9 has 1 depots, 9 vehicles and 68 customers with a search space of 10^124. A-n80-k10 has 1 depots, 10 vehicles and 79 customers with a search space of 10^149. F-n135-k7 has 1 depots, 7 vehicles and 134 customers with a search space of 10^285. F-n45-k4 has 1 depots, 4 vehicles and 44 customers with a search space of 10^72. F-n72-k4 has 1 depots, 4 vehicles and 71 customers with a search space of 10^131.
Solomon_025_C101 has 1 depots, 25 vehicles and 25 customers with a search space of 10^34. Solomon_025_C201 has 1 depots, 25 vehicles and 25 customers with a search space of 10^34. Solomon_025_R101 has 1 depots, 25 vehicles and 25 customers with a search space of 10^34. Solomon_025_R201 has 1 depots, 25 vehicles and 25 customers with a search space of 10^34. Solomon_025_RC101 has 1 depots, 25 vehicles and 25 customers with a search space of 10^34. Solomon_025_RC201 has 1 depots, 25 vehicles and 25 customers with a search space of 10^34. Solomon_100_C101 has 1 depots, 25 vehicles and 100 customers with a search space of 10^200. Solomon_100_C201 has 1 depots, 25 vehicles and 100 customers with a search space of 10^200. Solomon_100_R101 has 1 depots, 25 vehicles and 100 customers with a search space of 10^200. Solomon_100_R201 has 1 depots, 25 vehicles and 100 customers with a search space of 10^200. Solomon_100_RC101 has 1 depots, 25 vehicles and 100 customers with a search space of 10^200. Solomon_100_RC201 has 1 depots, 25 vehicles and 100 customers with a search space of 10^200. Homberger_0200_C1_2_1 has 1 depots, 50 vehicles and 200 customers with a search space of 10^460. Homberger_0200_C2_2_1 has 1 depots, 50 vehicles and 200 customers with a search space of 10^460. Homberger_0200_R1_2_1 has 1 depots, 50 vehicles and 200 customers with a search space of 10^460. Homberger_0200_R2_2_1 has 1 depots, 50 vehicles and 200 customers with a search space of 10^460. Homberger_0200_RC1_2_1 has 1 depots, 50 vehicles and 200 customers with a search space of 10^460. Homberger_0200_RC2_2_1 has 1 depots, 50 vehicles and 200 customers with a search space of 10^460. Homberger_0400_C1_4_1 has 1 depots, 100 vehicles and 400 customers with a search space of 10^1040. Homberger_0400_C2_4_1 has 1 depots, 100 vehicles and 400 customers with a search space of 10^1040. Homberger_0400_R1_4_1 has 1 depots, 100 vehicles and 400 customers with a search space of 10^1040. 
Homberger_0400_R2_4_1 has 1 depots, 100 vehicles and 400 customers with a search space of 10^1040. Homberger_0400_RC1_4_1 has 1 depots, 100 vehicles and 400 customers with a search space of 10^1040. Homberger_0400_RC2_4_1 has 1 depots, 100 vehicles and 400 customers with a search space of 10^1040. Homberger_0600_C1_6_1 has 1 depots, 150 vehicles and 600 customers with a search space of 10^1666. Homberger_0600_C2_6_1 has 1 depots, 150 vehicles and 600 customers with a search space of 10^1666. Homberger_0600_R1_6_1 has 1 depots, 150 vehicles and 600 customers with a search space of 10^1666. Homberger_0600_R2_6_1 has 1 depots, 150 vehicles and 600 customers with a search space of 10^1666. Homberger_0600_RC1_6_1 has 1 depots, 150 vehicles and 600 customers with a search space of 10^1666. Homberger_0600_RC2_6_1 has 1 depots, 150 vehicles and 600 customers with a search space of 10^1666. Homberger_0800_C1_8_1 has 1 depots, 200 vehicles and 800 customers with a search space of 10^2322. Homberger_0800_C2_8_1 has 1 depots, 200 vehicles and 800 customers with a search space of 10^2322. Homberger_0800_R1_8_1 has 1 depots, 200 vehicles and 800 customers with a search space of 10^2322. Homberger_0800_R2_8_1 has 1 depots, 200 vehicles and 800 customers with a search space of 10^2322. Homberger_0800_RC1_8_1 has 1 depots, 200 vehicles and 800 customers with a search space of 10^2322. Homberger_0800_RC2_8_1 has 1 depots, 200 vehicles and 800 customers with a search space of 10^2322. Homberger_1000_C110_1 has 1 depots, 250 vehicles and 1000 customers with a search space of 10^3000. Homberger_1000_C210_1 has 1 depots, 250 vehicles and 1000 customers with a search space of 10^3000. Homberger_1000_R110_1 has 1 depots, 250 vehicles and 1000 customers with a search space of 10^3000. Homberger_1000_R210_1 has 1 depots, 250 vehicles and 1000 customers with a search space of 10^3000. Homberger_1000_RC110_1 has 1 depots, 250 vehicles and 1000 customers with a search space of 10^3000. 
Homberger_1000_RC210_1 has 1 depots, 250 vehicles and 1000 customers with a search space of 10^3000.
3.3.3.3. Domain Model

The attributes arrivalTime and departureTime are directly available on the domain model.
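The relationship between those two attributes can be sketched as follows. This is not the actual OptaPlanner domain class; it is a minimal illustration, assuming a hypothetical TimeWindowedCustomer with a readyTime (earliest service start) and a serviceDuration:

```java
// Hypothetical sketch (not the real OptaPlanner class): how departureTime
// can be derived from arrivalTime on a time-windowed customer.
public class TimeWindowedCustomer {
    private final long readyTime;       // earliest allowed service start
    private final long serviceDuration; // time spent servicing the customer
    private Long arrivalTime;           // updated by the solver as routes change

    public TimeWindowedCustomer(long readyTime, long serviceDuration) {
        this.readyTime = readyTime;
        this.serviceDuration = serviceDuration;
    }

    public void setArrivalTime(Long arrivalTime) {
        this.arrivalTime = arrivalTime;
    }

    public Long getArrivalTime() {
        return arrivalTime;
    }

    // A vehicle that arrives before the time window opens waits until readyTime.
    public Long getDepartureTime() {
        if (arrivalTime == null) {
            return null; // customer not yet assigned to a route
        }
        return Math.max(arrivalTime, readyTime) + serviceDuration;
    }

    public static void main(String[] args) {
        TimeWindowedCustomer customer = new TimeWindowedCustomer(100, 30);
        customer.setArrivalTime(80L); // arrives early: waits until 100, departs at 130
        System.out.println(customer.getDepartureTime());
    }
}
```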
3.3.3.4. Road Distances Instead of Air Distances


The optaplanner-webexamples-*.war file contains an example that demonstrates such rendering.
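The difference between the two distance types can be sketched as follows: air distance is plain straight-line geometry, while road distance is typically pre-computed (for example, by a routing engine) into a matrix that is looked up in constant time during solving. This is an illustrative sketch, not OptaPlanner API:

```java
// Hypothetical sketch: air distance versus a pre-computed road distance matrix.
public class DistanceSketch {

    // Air distance: Euclidean distance between coordinates, scaled to a long.
    public static long airDistance(double x1, double y1, double x2, double y2) {
        double dx = x2 - x1;
        double dy = y2 - y1;
        return Math.round(Math.sqrt(dx * dx + dy * dy) * 1000.0);
    }

    // Road distance: a symmetric matrix computed once up front,
    // then looked up in O(1) while solving.
    private final long[][] roadDistanceMatrix;

    public DistanceSketch(long[][] roadDistanceMatrix) {
        this.roadDistanceMatrix = roadDistanceMatrix;
    }

    public long roadDistance(int fromIndex, int toIndex) {
        return roadDistanceMatrix[fromIndex][toIndex];
    }

    public static void main(String[] args) {
        long air = airDistance(0.0, 0.0, 3.0, 4.0); // straight line: 5000
        DistanceSketch roads = new DistanceSketch(new long[][] {
                {0, 7000},
                {7000, 0}
        });
        // The road between the same two points can be much longer than the air distance.
        System.out.println(air + " vs " + roads.roadDistance(0, 1));
    }
}
```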



3.3.4. Project Job Scheduling
3.3.4.1. Problem Description

- Job precedence: a job can only start when all its predecessor jobs are finished.
- Resource capacity: do not use more resources than available.
- Resources are local (shared between jobs of the same project) or global (shared between all jobs).
- Resources are renewable (capacity available per day) or nonrenewable (capacity available for all days).
- Total project delay: minimize the duration (makespan) of each project.
- Total makespan: minimize the duration of the whole multi-project schedule.
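The two minimization targets above can be sketched as simple functions of the job end times. This is an illustrative sketch with assumed inputs (arrays of end times and a critical-path duration), not the example's actual score calculation:

```java
// Hypothetical sketch: the two soft objectives of project job scheduling,
// computed from job end times.
public class MakespanSketch {

    // Total makespan: the end of the last job across the whole multi-project schedule.
    public static int totalMakespan(int[] jobEndTimes) {
        int max = 0;
        for (int end : jobEndTimes) {
            max = Math.max(max, end);
        }
        return max;
    }

    // Project delay: how far a single project's last job ends past the
    // duration of its critical path (its theoretical minimum duration).
    public static int projectDelay(int[] projectJobEndTimes, int criticalPathDuration) {
        return totalMakespan(projectJobEndTimes) - criticalPathDuration;
    }

    public static void main(String[] args) {
        // Three jobs ending at 8, 12 and 10: the schedule's makespan is 12.
        System.out.println(totalMakespan(new int[] {8, 12, 10}));
        // A project ending at 12 with a critical path of 9 is delayed by 3.
        System.out.println(projectDelay(new int[] {8, 12}, 9));
    }
}
```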
3.3.4.2. Problem Size
Schedule A-1 has 2 projects, 24 jobs, 64 execution modes, 7 resources and 150 resource requirements. Schedule A-2 has 2 projects, 44 jobs, 124 execution modes, 7 resources and 420 resource requirements. Schedule A-3 has 2 projects, 64 jobs, 184 execution modes, 7 resources and 630 resource requirements. Schedule A-4 has 5 projects, 60 jobs, 160 execution modes, 16 resources and 390 resource requirements. Schedule A-5 has 5 projects, 110 jobs, 310 execution modes, 16 resources and 900 resource requirements. Schedule A-6 has 5 projects, 160 jobs, 460 execution modes, 16 resources and 1440 resource requirements. Schedule A-7 has 10 projects, 120 jobs, 320 execution modes, 22 resources and 900 resource requirements. Schedule A-8 has 10 projects, 220 jobs, 620 execution modes, 22 resources and 1860 resource requirements. Schedule A-9 has 10 projects, 320 jobs, 920 execution modes, 31 resources and 2880 resource requirements. Schedule A-10 has 10 projects, 320 jobs, 920 execution modes, 31 resources and 2970 resource requirements. Schedule B-1 has 10 projects, 120 jobs, 320 execution modes, 31 resources and 900 resource requirements. Schedule B-2 has 10 projects, 220 jobs, 620 execution modes, 22 resources and 1740 resource requirements. Schedule B-3 has 10 projects, 320 jobs, 920 execution modes, 31 resources and 3060 resource requirements. Schedule B-4 has 15 projects, 180 jobs, 480 execution modes, 46 resources and 1530 resource requirements. Schedule B-5 has 15 projects, 330 jobs, 930 execution modes, 46 resources and 2760 resource requirements. Schedule B-6 has 15 projects, 480 jobs, 1380 execution modes, 46 resources and 4500 resource requirements. Schedule B-7 has 20 projects, 240 jobs, 640 execution modes, 61 resources and 1710 resource requirements. Schedule B-8 has 20 projects, 440 jobs, 1240 execution modes, 42 resources and 3180 resource requirements. Schedule B-9 has 20 projects, 640 jobs, 1840 execution modes, 61 resources and 5940 resource requirements. 
Schedule B-10 has 20 projects, 460 jobs, 1300 execution modes, 42 resources and 4260 resource requirements.
3.3.5. Hospital Bed Planning (PAS - Patient Admission Scheduling)
3.3.5.1. Problem Description

- 2 patients must not be assigned to the same bed in the same night. Weight: -1000hard * conflictNightCount.
- A room can have a gender limitation: only females, only males, the same gender in the same night, or no gender limitation at all. Weight: -50hard * nightCount.
- A department can have a minimum or maximum age. Weight: -100hard * nightCount.
- A patient can require a room with specific equipment(s). Weight: -50hard * nightCount.
- Assign every patient to a bed, unless the dataset is overconstrained. Weight: -1medium * nightCount.
- A patient can prefer a maximum room size, for example if he/she wants a single room. Weight: -8soft * nightCount.
- A patient is best assigned to a department that specializes in his/her problem. Weight: -10soft * nightCount.
- A patient is best assigned to a room that specializes in his/her problem. Weight: -20soft * nightCount.
  - That room speciality should be priority 1. Weight: -10soft * (priority - 1) * nightCount.
- A patient can prefer a room with specific equipment(s). Weight: -20soft * nightCount.
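The weight expressions above are plain multiplications of a constraint weight by a match count. They can be sketched as follows; this is an illustrative sketch of the arithmetic only, not the example's actual score rules:

```java
// Hypothetical sketch: the weighted penalty arithmetic used by the
// patient admission scheduling constraints.
public class PasScoreSketch {

    // Hard: -1000hard * conflictNightCount
    public static long bedConflictPenalty(int conflictNightCount) {
        return -1000L * conflictNightCount;
    }

    // Hard: -50hard * nightCount
    public static long genderLimitationPenalty(int nightCount) {
        return -50L * nightCount;
    }

    // Medium: -1medium * nightCount
    public static long unassignedPatientPenalty(int nightCount) {
        return -1L * nightCount;
    }

    // Soft: -10soft * (priority - 1) * nightCount
    // A priority-1 speciality incurs no penalty at all.
    public static long roomSpecialityPriorityPenalty(int priority, int nightCount) {
        return -10L * (priority - 1) * nightCount;
    }

    public static void main(String[] args) {
        // Two patients sharing a bed for 3 nights: -3000 hard.
        System.out.println(bedConflictPenalty(3));
        // A priority-2 room speciality over 5 nights: -50 soft.
        System.out.println(roomSpecialityPriorityPenalty(2, 5));
    }
}
```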
3.3.5.2. Problem Size
testdata01 has 4 specialisms, 2 equipments, 4 departments, 98 rooms, 286 beds, 14 nights, 652 patients and 652 admissions with a search space of 10^1601. testdata02 has 6 specialisms, 2 equipments, 6 departments, 151 rooms, 465 beds, 14 nights, 755 patients and 755 admissions with a search space of 10^2013. testdata03 has 5 specialisms, 2 equipments, 5 departments, 131 rooms, 395 beds, 14 nights, 708 patients and 708 admissions with a search space of 10^1838. testdata04 has 6 specialisms, 2 equipments, 6 departments, 155 rooms, 471 beds, 14 nights, 746 patients and 746 admissions with a search space of 10^1994. testdata05 has 4 specialisms, 2 equipments, 4 departments, 102 rooms, 325 beds, 14 nights, 587 patients and 587 admissions with a search space of 10^1474. testdata06 has 4 specialisms, 2 equipments, 4 departments, 104 rooms, 313 beds, 14 nights, 685 patients and 685 admissions with a search space of 10^1709. testdata07 has 6 specialisms, 4 equipments, 6 departments, 162 rooms, 472 beds, 14 nights, 519 patients and 519 admissions with a search space of 10^1387. testdata08 has 6 specialisms, 4 equipments, 6 departments, 148 rooms, 441 beds, 21 nights, 895 patients and 895 admissions with a search space of 10^2366. testdata09 has 4 specialisms, 4 equipments, 4 departments, 105 rooms, 310 beds, 28 nights, 1400 patients and 1400 admissions with a search space of 10^3487. testdata10 has 4 specialisms, 4 equipments, 4 departments, 104 rooms, 308 beds, 56 nights, 1575 patients and 1575 admissions with a search space of 10^3919. testdata11 has 4 specialisms, 4 equipments, 4 departments, 107 rooms, 318 beds, 91 nights, 2514 patients and 2514 admissions with a search space of 10^6291. testdata12 has 4 specialisms, 4 equipments, 4 departments, 105 rooms, 310 beds, 84 nights, 2750 patients and 2750 admissions with a search space of 10^6851. 
testdata13 has 5 specialisms, 4 equipments, 5 departments, 125 rooms, 368 beds, 28 nights, 907 patients and 1109 admissions with a search space of 10^2845.
3.3.5.3. Domain Model

3.4. Difficult Examples
3.4.1. Exam Timetabling (ITC 2007 track 1 - Examination)
3.4.1.1. Problem Description

- Exam conflict: 2 exams that share students must not occur in the same period.
- Room capacity: A room's seating capacity must suffice at all times.
- Period duration: A period's duration must suffice for all of its exams.
- Period related hard constraints (specified per dataset):
- Coincidence: 2 specified exams must use the same period (but possibly another room).
- Exclusion: 2 specified exams must not use the same period.
- After: A specified exam must occur in a period after another specified exam's period.
- Room related hard constraints (specified per dataset):
- Exclusive: 1 specified exam should not have to share its room with any other exam.
- The same student should not have 2 exams in a row.
- The same student should not have 2 exams on the same day.
- Period spread: 2 exams that share students should be a number of periods apart.
- Mixed durations: 2 exams that share a room should not have different durations.
- Front load: Large exams should be scheduled earlier in the schedule.
- Period penalty (specified per dataset): Some periods have a penalty when used.
- Room penalty (specified per dataset): Some rooms have a penalty when used.
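The central hard constraint, exam conflict, comes down to checking whether two exams share any enrolled students. A minimal sketch, using assumed int arrays of student IDs rather than the example's real domain classes:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: two exams conflict when their student sets intersect;
// such exams must not be assigned to the same period.
public class ExamConflictSketch {

    public static int sharedStudentCount(int[] examAStudents, int[] examBStudents) {
        Set<Integer> studentsA = new HashSet<>();
        for (int id : examAStudents) {
            studentsA.add(id);
        }
        int shared = 0;
        for (int id : examBStudents) {
            if (studentsA.contains(id)) {
                shared++;
            }
        }
        return shared;
    }

    public static boolean conflictsInSamePeriod(int[] examAStudents, int[] examBStudents,
            int periodA, int periodB) {
        return periodA == periodB && sharedStudentCount(examAStudents, examBStudents) > 0;
    }

    public static void main(String[] args) {
        int[] examA = {1, 2, 3};
        int[] examB = {3, 4};
        // Student 3 sits both exams, so they must not share period 5.
        System.out.println(conflictsInSamePeriod(examA, examB, 5, 5));
    }
}
```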
3.4.1.2. Problem Size
exam_comp_set1 has 7883 students, 607 exams, 54 periods, 7 rooms, 12 period constraints and 0 room constraints with a search space of 10^1564. exam_comp_set2 has 12484 students, 870 exams, 40 periods, 49 rooms, 12 period constraints and 2 room constraints with a search space of 10^2864. exam_comp_set3 has 16365 students, 934 exams, 36 periods, 48 rooms, 168 period constraints and 15 room constraints with a search space of 10^3023. exam_comp_set4 has 4421 students, 273 exams, 21 periods, 1 rooms, 40 period constraints and 0 room constraints with a search space of 10^360. exam_comp_set5 has 8719 students, 1018 exams, 42 periods, 3 rooms, 27 period constraints and 0 room constraints with a search space of 10^2138. exam_comp_set6 has 7909 students, 242 exams, 16 periods, 8 rooms, 22 period constraints and 0 room constraints with a search space of 10^509. exam_comp_set7 has 13795 students, 1096 exams, 80 periods, 15 rooms, 28 period constraints and 0 room constraints with a search space of 10^3374. exam_comp_set8 has 7718 students, 598 exams, 80 periods, 8 rooms, 20 period constraints and 1 room constraints with a search space of 10^1678.
3.4.1.3. Domain Model
Figure 3.3. Examination Domain Class Diagram

The model distinguishes between an Exam class and a Topic class. The Exam instances change during solving (this is the planning entity class), as their period or room property changes. The Topic, Period and Room instances never change during solving (these are problem facts, just like some other classes).
3.4.2. Employee Rostering (INRC 2010 - Nurse Rostering)
3.4.2.1. Problem Description

- No unassigned shifts (built-in): Every shift needs to be assigned to an employee.
- Shift conflict: An employee can have only 1 shift per day.
- Contract obligations. The business frequently violates these, so they are defined as soft constraints instead of hard constraints.
- Minimum and maximum assignments: Each employee needs to work more than x shifts and less than y shifts (depending on their contract).
- Minimum and maximum consecutive working days: Each employee needs to work between x and y days in a row (depending on their contract).
- Minimum and maximum consecutive free days: Each employee needs to be free between x and y days in a row (depending on their contract).
- Minimum and maximum consecutive working weekends: Each employee needs to work between x and y weekends in a row (depending on their contract).
- Complete weekends: Each employee needs to work every day in a weekend or not at all.
- Identical shift types during weekend: Each weekend shift for the same weekend of the same employee must be the same shift type.
- Unwanted patterns: A combination of unwanted shift types in a row. For example: a late shift followed by an early shift followed by a late shift.
- Employee wishes:
- Day on request: An employee wants to work on a specific day.
- Day off request: An employee does not want to work on a specific day.
- Shift on request: An employee wants to be assigned to a specific shift.
- Shift off request: An employee does not want to be assigned to a specific shift.
- Alternative skill: An employee assigned to a shift should have proficiency in every skill required by that shift.
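Several of the contract constraints above reason over runs of consecutive days. A minimal sketch of how such a run check can work, assuming a hypothetical boolean array in which true marks a worked day:

```java
// Hypothetical sketch: checking the maximum consecutive working days
// contract constraint against one employee's day-by-day schedule.
public class ConsecutiveDaysSketch {

    public static int maxConsecutiveWorkingDays(boolean[] workedDays) {
        int max = 0;
        int run = 0;
        for (boolean worked : workedDays) {
            run = worked ? run + 1 : 0; // a free day resets the run
            max = Math.max(max, run);
        }
        return max;
    }

    public static boolean violatesContract(boolean[] workedDays, int maxAllowed) {
        return maxConsecutiveWorkingDays(workedDays) > maxAllowed;
    }

    public static void main(String[] args) {
        // Works Mon-Wed, free Thu, works Fri-Sun: longest run is 3 days.
        boolean[] week = {true, true, true, false, true, true, true};
        System.out.println(maxConsecutiveWorkingDays(week));
        System.out.println(violatesContract(week, 2));
    }
}
```

The minimum-consecutive variants and the weekend variants follow the same pattern, just counting different run types.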


3.4.2.2. Problem Size
- sprint: must be solved in seconds.
- medium: must be solved in minutes.
- long: must be solved in hours.
toy1 has 1 skills, 3 shiftTypes, 2 patterns, 1 contracts, 6 employees, 7 shiftDates, 35 shiftAssignments and 0 requests with a search space of 10^27. toy2 has 1 skills, 3 shiftTypes, 3 patterns, 2 contracts, 20 employees, 28 shiftDates, 180 shiftAssignments and 140 requests with a search space of 10^234. sprint01 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint02 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint03 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint04 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint05 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint06 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint07 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint08 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint09 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint10 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. 
sprint_hint01 has 1 skills, 4 shiftTypes, 8 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint_hint02 has 1 skills, 4 shiftTypes, 0 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint_hint03 has 1 skills, 4 shiftTypes, 8 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint_late01 has 1 skills, 4 shiftTypes, 8 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint_late02 has 1 skills, 3 shiftTypes, 4 patterns, 3 contracts, 10 employees, 28 shiftDates, 144 shiftAssignments and 139 requests with a search space of 10^144. sprint_late03 has 1 skills, 4 shiftTypes, 8 patterns, 3 contracts, 10 employees, 28 shiftDates, 160 shiftAssignments and 150 requests with a search space of 10^160. sprint_late04 has 1 skills, 4 shiftTypes, 8 patterns, 3 contracts, 10 employees, 28 shiftDates, 160 shiftAssignments and 150 requests with a search space of 10^160. sprint_late05 has 1 skills, 4 shiftTypes, 8 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint_late06 has 1 skills, 4 shiftTypes, 0 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint_late07 has 1 skills, 4 shiftTypes, 0 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint_late08 has 1 skills, 4 shiftTypes, 0 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 0 requests with a search space of 10^152. sprint_late09 has 1 skills, 4 shiftTypes, 0 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 0 requests with a search space of 10^152. 
sprint_late10 has 1 skills, 4 shiftTypes, 0 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. medium01 has 1 skills, 4 shiftTypes, 0 patterns, 4 contracts, 31 employees, 28 shiftDates, 608 shiftAssignments and 403 requests with a search space of 10^906. medium02 has 1 skills, 4 shiftTypes, 0 patterns, 4 contracts, 31 employees, 28 shiftDates, 608 shiftAssignments and 403 requests with a search space of 10^906. medium03 has 1 skills, 4 shiftTypes, 0 patterns, 4 contracts, 31 employees, 28 shiftDates, 608 shiftAssignments and 403 requests with a search space of 10^906. medium04 has 1 skills, 4 shiftTypes, 0 patterns, 4 contracts, 31 employees, 28 shiftDates, 608 shiftAssignments and 403 requests with a search space of 10^906. medium05 has 1 skills, 4 shiftTypes, 0 patterns, 4 contracts, 31 employees, 28 shiftDates, 608 shiftAssignments and 403 requests with a search space of 10^906. medium_hint01 has 1 skills, 4 shiftTypes, 7 patterns, 4 contracts, 30 employees, 28 shiftDates, 428 shiftAssignments and 390 requests with a search space of 10^632. medium_hint02 has 1 skills, 4 shiftTypes, 7 patterns, 3 contracts, 30 employees, 28 shiftDates, 428 shiftAssignments and 390 requests with a search space of 10^632. medium_hint03 has 1 skills, 4 shiftTypes, 7 patterns, 4 contracts, 30 employees, 28 shiftDates, 428 shiftAssignments and 390 requests with a search space of 10^632. medium_late01 has 1 skills, 4 shiftTypes, 7 patterns, 4 contracts, 30 employees, 28 shiftDates, 424 shiftAssignments and 390 requests with a search space of 10^626. medium_late02 has 1 skills, 4 shiftTypes, 7 patterns, 3 contracts, 30 employees, 28 shiftDates, 428 shiftAssignments and 390 requests with a search space of 10^632. medium_late03 has 1 skills, 4 shiftTypes, 0 patterns, 4 contracts, 30 employees, 28 shiftDates, 428 shiftAssignments and 390 requests with a search space of 10^632. 
medium_late04 has 1 skills, 4 shiftTypes, 7 patterns, 3 contracts, 30 employees, 28 shiftDates, 416 shiftAssignments and 390 requests with a search space of 10^614. medium_late05 has 2 skills, 5 shiftTypes, 7 patterns, 4 contracts, 30 employees, 28 shiftDates, 452 shiftAssignments and 390 requests with a search space of 10^667. long01 has 2 skills, 5 shiftTypes, 3 patterns, 3 contracts, 49 employees, 28 shiftDates, 740 shiftAssignments and 735 requests with a search space of 10^1250. long02 has 2 skills, 5 shiftTypes, 3 patterns, 3 contracts, 49 employees, 28 shiftDates, 740 shiftAssignments and 735 requests with a search space of 10^1250. long03 has 2 skills, 5 shiftTypes, 3 patterns, 3 contracts, 49 employees, 28 shiftDates, 740 shiftAssignments and 735 requests with a search space of 10^1250. long04 has 2 skills, 5 shiftTypes, 3 patterns, 3 contracts, 49 employees, 28 shiftDates, 740 shiftAssignments and 735 requests with a search space of 10^1250. long05 has 2 skills, 5 shiftTypes, 3 patterns, 3 contracts, 49 employees, 28 shiftDates, 740 shiftAssignments and 735 requests with a search space of 10^1250. long_hint01 has 2 skills, 5 shiftTypes, 9 patterns, 3 contracts, 50 employees, 28 shiftDates, 740 shiftAssignments and 0 requests with a search space of 10^1257. long_hint02 has 2 skills, 5 shiftTypes, 7 patterns, 3 contracts, 50 employees, 28 shiftDates, 740 shiftAssignments and 0 requests with a search space of 10^1257. long_hint03 has 2 skills, 5 shiftTypes, 7 patterns, 3 contracts, 50 employees, 28 shiftDates, 740 shiftAssignments and 0 requests with a search space of 10^1257. long_late01 has 2 skills, 5 shiftTypes, 9 patterns, 3 contracts, 50 employees, 28 shiftDates, 752 shiftAssignments and 0 requests with a search space of 10^1277. long_late02 has 2 skills, 5 shiftTypes, 9 patterns, 4 contracts, 50 employees, 28 shiftDates, 752 shiftAssignments and 0 requests with a search space of 10^1277. 
long_late03 has 2 skills, 5 shiftTypes, 9 patterns, 3 contracts, 50 employees, 28 shiftDates, 752 shiftAssignments and 0 requests with a search space of 10^1277. long_late04 has 2 skills, 5 shiftTypes, 9 patterns, 4 contracts, 50 employees, 28 shiftDates, 752 shiftAssignments and 0 requests with a search space of 10^1277. long_late05 has 2 skills, 5 shiftTypes, 9 patterns, 3 contracts, 50 employees, 28 shiftDates, 740 shiftAssignments and 0 requests with a search space of 10^1257.
3.4.2.3. Domain Model

3.4.3. Traveling Tournament Problem (TTP)
3.4.3.1. Problem Description

- Each team plays twice against every other team: once home and once away.
- Each team has exactly 1 match on each timeslot.
- No team must have more than 3 consecutive home or 3 consecutive away matches.
- No repeaters: no 2 consecutive matches of the same 2 opposing teams.
- Minimize the total distance traveled by all teams.
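The two hard constraints on match sequences can be sketched per team as follows. This is an illustrative sketch with assumed inputs (a home/away flag per timeslot and an opponent index per timeslot), not the example's actual rules:

```java
// Hypothetical sketch: the "max 3 consecutive home/away" and "no repeaters"
// rules of the Traveling Tournament Problem, checked for one team.
public class TtpRuleSketch {

    // homeSequence[t] is true if the team plays at home on timeslot t.
    public static boolean exceedsConsecutiveLimit(boolean[] homeSequence, int limit) {
        int homeRun = 0;
        int awayRun = 0;
        for (boolean home : homeSequence) {
            homeRun = home ? homeRun + 1 : 0;
            awayRun = home ? 0 : awayRun + 1;
            if (homeRun > limit || awayRun > limit) {
                return true;
            }
        }
        return false;
    }

    // opponents[t] is the opponent on timeslot t; a repeater is the same
    // pairing on two consecutive timeslots.
    public static boolean hasRepeater(int[] opponents) {
        for (int t = 1; t < opponents.length; t++) {
            if (opponents[t] == opponents[t - 1]) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Four home matches in a row breaks the limit of 3.
        System.out.println(exceedsConsecutiveLimit(new boolean[] {true, true, true, true}, 3));
        // Playing team 3 twice in a row is a repeater.
        System.out.println(hasRepeater(new int[] {2, 3, 3, 4}));
    }
}
```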
3.4.3.2. Problem Size
1-nl04 has 6 days, 4 teams and 12 matches with a search space of 10^9. 1-nl06 has 10 days, 6 teams and 30 matches with a search space of 10^30. 1-nl08 has 14 days, 8 teams and 56 matches with a search space of 10^64. 1-nl10 has 18 days, 10 teams and 90 matches with a search space of 10^112. 1-nl12 has 22 days, 12 teams and 132 matches with a search space of 10^177. 1-nl14 has 26 days, 14 teams and 182 matches with a search space of 10^257. 1-nl16 has 30 days, 16 teams and 240 matches with a search space of 10^354. 2-bra24 has 46 days, 24 teams and 552 matches with a search space of 10^917. 3-nfl16 has 30 days, 16 teams and 240 matches with a search space of 10^354. 3-nfl18 has 34 days, 18 teams and 306 matches with a search space of 10^468. 3-nfl20 has 38 days, 20 teams and 380 matches with a search space of 10^600. 3-nfl22 has 42 days, 22 teams and 462 matches with a search space of 10^749. 3-nfl24 has 46 days, 24 teams and 552 matches with a search space of 10^917. 3-nfl26 has 50 days, 26 teams and 650 matches with a search space of 10^1104. 3-nfl28 has 54 days, 28 teams and 756 matches with a search space of 10^1309. 3-nfl30 has 58 days, 30 teams and 870 matches with a search space of 10^1534. 3-nfl32 has 62 days, 32 teams and 992 matches with a search space of 10^1778. 4-super04 has 6 days, 4 teams and 12 matches with a search space of 10^9. 4-super06 has 10 days, 6 teams and 30 matches with a search space of 10^30. 4-super08 has 14 days, 8 teams and 56 matches with a search space of 10^64. 4-super10 has 18 days, 10 teams and 90 matches with a search space of 10^112. 4-super12 has 22 days, 12 teams and 132 matches with a search space of 10^177. 4-super14 has 26 days, 14 teams and 182 matches with a search space of 10^257. 5-galaxy04 has 6 days, 4 teams and 12 matches with a search space of 10^9. 5-galaxy06 has 10 days, 6 teams and 30 matches with a search space of 10^30. 5-galaxy08 has 14 days, 8 teams and 56 matches with a search space of 10^64. 
5-galaxy10 has 18 days, 10 teams and 90 matches with a search space of 10^112. 5-galaxy12 has 22 days, 12 teams and 132 matches with a search space of 10^177. 5-galaxy14 has 26 days, 14 teams and 182 matches with a search space of 10^257. 5-galaxy16 has 30 days, 16 teams and 240 matches with a search space of 10^354. 5-galaxy18 has 34 days, 18 teams and 306 matches with a search space of 10^468. 5-galaxy20 has 38 days, 20 teams and 380 matches with a search space of 10^600. 5-galaxy22 has 42 days, 22 teams and 462 matches with a search space of 10^749. 5-galaxy24 has 46 days, 24 teams and 552 matches with a search space of 10^917. 5-galaxy26 has 50 days, 26 teams and 650 matches with a search space of 10^1104. 5-galaxy28 has 54 days, 28 teams and 756 matches with a search space of 10^1309. 5-galaxy30 has 58 days, 30 teams and 870 matches with a search space of 10^1534. 5-galaxy32 has 62 days, 32 teams and 992 matches with a search space of 10^1778. 5-galaxy34 has 66 days, 34 teams and 1122 matches with a search space of 10^2041. 5-galaxy36 has 70 days, 36 teams and 1260 matches with a search space of 10^2324. 5-galaxy38 has 74 days, 38 teams and 1406 matches with a search space of 10^2628. 5-galaxy40 has 78 days, 40 teams and 1560 matches with a search space of 10^2951.
3.4.4. Cheap Time Scheduling
3.4.4.1. Problem Description
- Start time limits: each task must start between its earliest start and latest start limit.
- Maximum capacity: the maximum capacity for each resource for each machine must not be exceeded.
- Startup and shutdown: each machine must be active in the periods during which it has assigned tasks. Between tasks it is allowed to be idle to avoid startup and shutdown costs.
- Power cost: minimize the total power cost of the whole schedule.
- Machine power cost: Each active or idle machine consumes power, which incurs a power cost (depending on the power price during that time).
- Task power cost: Each task consumes power too, which incurs a power cost (depending on the power price during its time).
- Machine startup and shutdown cost: Every time a machine starts up or shuts down, an extra cost is inflicted.
- Start early: prefer starting a task sooner rather than later.
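The power cost arithmetic above can be sketched as follows. This is an illustrative sketch with assumed inputs (a power price per period, an active/idle flag per period, and fixed consumption rates), not the example's actual score calculation:

```java
// Hypothetical sketch: cheap time scheduling power cost, summed over periods.
public class PowerCostSketch {

    // Machine cost: every period costs its power price times the machine's
    // consumption, which depends on whether the machine is active or idle.
    public static long machineCost(long[] periodPowerPrice, boolean[] activeInPeriod,
            long activeConsumption, long idleConsumption) {
        long cost = 0;
        for (int p = 0; p < periodPowerPrice.length; p++) {
            long consumption = activeInPeriod[p] ? activeConsumption : idleConsumption;
            cost += periodPowerPrice[p] * consumption;
        }
        return cost;
    }

    // Task cost: the task consumes power in every period it runs, so moving
    // it to cheaper periods (within its start time limits) lowers the cost.
    public static long taskCost(long[] periodPowerPrice, int startPeriod, int duration,
            long taskConsumption) {
        long cost = 0;
        for (int p = startPeriod; p < startPeriod + duration; p++) {
            cost += periodPowerPrice[p] * taskConsumption;
        }
        return cost;
    }

    public static void main(String[] args) {
        long[] price = {3, 1, 2};
        // Active in periods 0 and 2, idle in period 1: 3*10 + 1*2 + 2*10 = 52.
        System.out.println(machineCost(price, new boolean[] {true, false, true}, 10, 2));
        // Task of duration 2 starting in period 1: 1*5 + 2*5 = 15.
        System.out.println(taskCost(price, 1, 2, 5));
    }
}
```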
3.4.4.2. Problem Size
sample01 has 3 resources, 2 machines, 288 periods and 25 tasks with a search space of 10^53. sample02 has 3 resources, 2 machines, 288 periods and 50 tasks with a search space of 10^114. sample03 has 3 resources, 2 machines, 288 periods and 100 tasks with a search space of 10^226. sample04 has 3 resources, 5 machines, 288 periods and 100 tasks with a search space of 10^266. sample05 has 3 resources, 2 machines, 288 periods and 250 tasks with a search space of 10^584. sample06 has 3 resources, 5 machines, 288 periods and 250 tasks with a search space of 10^673. sample07 has 3 resources, 2 machines, 288 periods and 1000 tasks with a search space of 10^2388. sample08 has 3 resources, 5 machines, 288 periods and 1000 tasks with a search space of 10^2748. sample09 has 4 resources, 20 machines, 288 periods and 2000 tasks with a search space of 10^6668. instance00 has 1 resources, 10 machines, 288 periods and 200 tasks with a search space of 10^595. instance01 has 1 resources, 10 machines, 288 periods and 200 tasks with a search space of 10^599. instance02 has 1 resources, 10 machines, 288 periods and 200 tasks with a search space of 10^599. instance03 has 1 resources, 10 machines, 288 periods and 200 tasks with a search space of 10^591. instance04 has 1 resources, 10 machines, 288 periods and 200 tasks with a search space of 10^590. instance05 has 2 resources, 25 machines, 288 periods and 200 tasks with a search space of 10^667. instance06 has 2 resources, 25 machines, 288 periods and 200 tasks with a search space of 10^660. instance07 has 2 resources, 25 machines, 288 periods and 200 tasks with a search space of 10^662. instance08 has 2 resources, 25 machines, 288 periods and 200 tasks with a search space of 10^651. instance09 has 2 resources, 25 machines, 288 periods and 200 tasks with a search space of 10^659. instance10 has 2 resources, 20 machines, 288 periods and 500 tasks with a search space of 10^1657. 
instance11 has 2 resources, 20 machines, 288 periods and 500 tasks with a search space of 10^1644. instance12 has 2 resources, 20 machines, 288 periods and 500 tasks with a search space of 10^1637. instance13 has 2 resources, 20 machines, 288 periods and 500 tasks with a search space of 10^1659. instance14 has 2 resources, 20 machines, 288 periods and 500 tasks with a search space of 10^1643. instance15 has 3 resources, 40 machines, 288 periods and 500 tasks with a search space of 10^1782. instance16 has 3 resources, 40 machines, 288 periods and 500 tasks with a search space of 10^1778. instance17 has 3 resources, 40 machines, 288 periods and 500 tasks with a search space of 10^1764. instance18 has 3 resources, 40 machines, 288 periods and 500 tasks with a search space of 10^1769. instance19 has 3 resources, 40 machines, 288 periods and 500 tasks with a search space of 10^1778. instance20 has 3 resources, 50 machines, 288 periods and 1000 tasks with a search space of 10^3689. instance21 has 3 resources, 50 machines, 288 periods and 1000 tasks with a search space of 10^3678. instance22 has 3 resources, 50 machines, 288 periods and 1000 tasks with a search space of 10^3706. instance23 has 3 resources, 50 machines, 288 periods and 1000 tasks with a search space of 10^3676. instance24 has 3 resources, 50 machines, 288 periods and 1000 tasks with a search space of 10^3681. instance25 has 3 resources, 60 machines, 288 periods and 1000 tasks with a search space of 10^3774. instance26 has 3 resources, 60 machines, 288 periods and 1000 tasks with a search space of 10^3737. instance27 has 3 resources, 60 machines, 288 periods and 1000 tasks with a search space of 10^3744. instance28 has 3 resources, 60 machines, 288 periods and 1000 tasks with a search space of 10^3731. instance29 has 3 resources, 60 machines, 288 periods and 1000 tasks with a search space of 10^3746. instance30 has 4 resources, 70 machines, 288 periods and 2000 tasks with a search space of 10^7718. 
instance31 has 4 resources, 70 machines, 288 periods and 2000 tasks with a search space of 10^7740. instance32 has 4 resources, 70 machines, 288 periods and 2000 tasks with a search space of 10^7686. instance33 has 4 resources, 70 machines, 288 periods and 2000 tasks with a search space of 10^7672. instance34 has 4 resources, 70 machines, 288 periods and 2000 tasks with a search space of 10^7695. instance35 has 4 resources, 80 machines, 288 periods and 2000 tasks with a search space of 10^7807. instance36 has 4 resources, 80 machines, 288 periods and 2000 tasks with a search space of 10^7814. instance37 has 4 resources, 80 machines, 288 periods and 2000 tasks with a search space of 10^7764. instance38 has 4 resources, 80 machines, 288 periods and 2000 tasks with a search space of 10^7736. instance39 has 4 resources, 80 machines, 288 periods and 2000 tasks with a search space of 10^7783. instance40 has 4 resources, 90 machines, 288 periods and 4000 tasks with a search space of 10^15976. instance41 has 4 resources, 90 machines, 288 periods and 4000 tasks with a search space of 10^15935. instance42 has 4 resources, 90 machines, 288 periods and 4000 tasks with a search space of 10^15887. instance43 has 4 resources, 90 machines, 288 periods and 4000 tasks with a search space of 10^15896. instance44 has 4 resources, 90 machines, 288 periods and 4000 tasks with a search space of 10^15885. instance45 has 4 resources, 100 machines, 288 periods and 5000 tasks with a search space of 10^20173. instance46 has 4 resources, 100 machines, 288 periods and 5000 tasks with a search space of 10^20132. instance47 has 4 resources, 100 machines, 288 periods and 5000 tasks with a search space of 10^20126. instance48 has 4 resources, 100 machines, 288 periods and 5000 tasks with a search space of 10^20110. instance49 has 4 resources, 100 machines, 288 periods and 5000 tasks with a search space of 10^20078.
3.4.5. Investment asset class allocation (portfolio optimization)
3.4.5.1. Problem Description
- Risk maximum: the total standard deviation must not be higher than the standard deviation maximum.
- Total standard deviation calculation takes asset class correlations into account by applying Markowitz Portfolio Theory.
- Region maximum: Each region has a quantity maximum.
- Sector maximum: Each sector has a quantity maximum.
- Maximize expected return.
3.4.5.2. Problem Size
de_smet_1 has 1 region, 3 sectors and 11 asset classes with a search space of 10^4.
irrinki_1 has 2 regions, 3 sectors and 6 asset classes with a search space of 10^3.
Chapter 4. Planner Configuration
4.1. Overview
- Model your planning problem as a class that implements the interface Solution, for example the class NQueens.
- Configure a Solver, for example a First Fit and Tabu Search solver for any NQueens instance.
- Load a problem data set from your data layer, for example a 4 Queens instance. That is the planning problem.
- Solve it with Solver.solve(planningProblem).
- Get the best solution found by the Solver with Solver.getBestSolution().

4.2. Solver Configuration
4.2.1. Solver Configuration by XML
Build a Solver instance with the SolverFactory. Configure it with a solver configuration XML file, provided as a classpath resource (as defined by ClassLoader.getResource()):
SolverFactory solverFactory = SolverFactory.createFromXmlResource(
"org/optaplanner/examples/nqueens/solver/nqueensSolverConfig.xml");
Solver solver = solverFactory.buildSolver();
In a typical project following the Maven directory structure, that classpath resource is the file $PROJECT_DIR/src/main/resources/org/optaplanner/examples/nqueens/solver/nqueensSolverConfig.xml. Alternatively, a SolverFactory can be created from a File, an InputStream or a Reader with methods such as SolverFactory.createFromXmlFile(). However, for portability reasons, a classpath resource is recommended.
On some environments (OSGi, JBoss modules, ...), classpath resources might not be available in the default ClassLoader of the optaplanner-core jar. In those cases, provide the ClassLoader of your classes as a parameter:
SolverFactory solverFactory = SolverFactory.createFromXmlResource(
".../nqueensSolverConfig.xml", getClass().getClassLoader());<?xml version="1.0" encoding="UTF-8"?>
<solver>
<!-- Define the model -->
<solutionClass>org.optaplanner.examples.nqueens.domain.NQueens</solutionClass>
<entityClass>org.optaplanner.examples.nqueens.domain.Queen</entityClass>
<!-- Define the score function -->
<scoreDirectorFactory>
<scoreDefinitionType>SIMPLE</scoreDefinitionType>
<scoreDrl>org/optaplanner/examples/nqueens/solver/nQueensScoreRules.drl</scoreDrl>
</scoreDirectorFactory>
<!-- Configure the optimization algorithm(s) -->
<termination>
...
</termination>
<constructionHeuristic>
...
</constructionHeuristic>
<localSearch>
...
</localSearch>
</solver>
This solver configuration consists of three parts:
- Define the model
- Define the score function
- Configure the optimization algorithm(s)
Instead of tweaking these settings by hand, use the Benchmarker utility, which allows you to play out different configurations against each other and report the most appropriate configuration for your use case.
4.2.2. Solver Configuration by Java API
A solver configuration can also be configured with the SolverConfig API. This is especially useful to change some values dynamically at runtime. For example, to change the running time based on user input, before building the Solver:
SolverFactory solverFactory = SolverFactory.createFromXmlResource(
"org/optaplanner/examples/nqueens/solver/nqueensSolverConfig.xml");
TerminationConfig terminationConfig = new TerminationConfig();
terminationConfig.setMinutesSpentLimit(userInput);
solverFactory.getSolverConfig().setTerminationConfig(terminationConfig);
Solver solver = solverFactory.buildSolver();
Every element in the solver configuration XML is available as a *Config class or a property on a *Config class in the package namespace org.optaplanner.core.config. These *Config classes are the Java representation of the XML format. They build the runtime components (of the package namespace org.optaplanner.core.impl) and assemble them into an efficient Solver.
A SolverFactory is only multi-thread safe after it is configured, so the getSolverConfig() method is not thread-safe. To configure a SolverFactory dynamically for each user request, build a base SolverFactory during initialization and clone it with the cloneSolverFactory() method for each user request:
private SolverFactory base;

public void init() {
    base = SolverFactory.createFromXmlResource(
            "org/optaplanner/examples/nqueens/solver/nqueensSolverConfig.xml");
    base.getSolverConfig().setTerminationConfig(new TerminationConfig());
}

// Called concurrently from different threads
public void userRequest(..., long userInput) {
    SolverFactory solverFactory = base.cloneSolverFactory();
    solverFactory.getSolverConfig().getTerminationConfig().setMinutesSpentLimit(userInput);
    Solver solver = solverFactory.buildSolver();
    ...
}
4.2.3. Annotations Configuration
4.2.3.1. Automatic Scanning for Annotations
Instead of declaring each class with a @PlanningSolution or @PlanningEntity annotation manually:
<solver>
  <!-- Define the model -->
  <solutionClass>org.optaplanner.examples.nqueens.domain.NQueens</solutionClass>
  <entityClass>org.optaplanner.examples.nqueens.domain.Queen</entityClass>
  ...
</solver>
Planner can scan the classpath for those annotations instead:
<solver>
  <!-- Define the model -->
  <scanAnnotatedClasses/>
  ...
</solver>
To speed up scanning or to avoid picking up unwanted classes, limit the packages to scan:
<solver>
<!-- Define the model -->
<scanAnnotatedClasses>
<packageInclude>org.optaplanner.examples.cloudbalancing</packageInclude>
</scanAnnotatedClasses>
...
</solver>
If scanAnnotatedClasses is not specified, the org.reflections transitive Maven dependency can be excluded.
4.2.3.2. Annotation Alternatives
- Add class annotations and JavaBean property annotations on the domain model (recommended). The property annotations must be on the getter method, not on the setter method. Such a getter does not need to be public.
- Add class annotations and field annotations on the domain model. Such a field does not need to be public.
- No annotations: externalize the domain configuration in an XML file. This is not yet supported.
4.3. Model a Planning Problem
4.3.1. Is This Class a Problem Fact or Planning Entity?
- An unrelated class: not used by any of the score constraints. From a planning standpoint, this data is obsolete.
- A problem fact class: used by the score constraints, but does NOT change during planning (as long as the problem stays the same). For example:
Bed,Room,Shift,Employee,Topic,Period, ... All the properties of a problem fact class are problem properties. - A planning entity class: used by the score constraints and changes during planning. For example:
BedDesignation,ShiftAssignment,Exam, ... The properties that change during planning are planning variables. The other properties are problem properties.
Ask yourself: what class changes during planning? Which class do I want the Solver to change for me? That class is a planning entity. Most use cases have only one planning entity class. Most use cases also have only one planning variable per planning entity class.
- In a many-to-one relationship, it is normally the many side that is the planning entity class. The property referencing the other side is then the planning variable. For example in employee rostering: the planning entity class is ShiftAssignment, not Employee, and the planning variable is ShiftAssignment.getEmployee() because one Employee has multiple ShiftAssignments but one ShiftAssignment has only one Employee.
ShiftAssignment'sgetShift()is a problem property and thegetEmployee()is a planning variable. If both were a planning variable, solving it would be far less efficient.- A surrogate ID does not suffice as the required minimum of one problem property. It needs to be understandable by the business. A business key does suffice. This prevents an unassigned entity from being nameless (unidentifiable by the business).
- This way, there is no need to add a hard constraint to assure that two planning entities are different: they are already different due to their problem properties.
- In some cases, multiple planning entities have the same problem property. In such cases, it can be useful to create an extra problem property to distinguish them. For example in employee rostering:
ShiftAssignmenthas besides the problem propertyShiftalso the problem propertyindexInShift.
- The number of planning entities is recommended to be fixed during planning. When unsure of which property should be a planning variable and which should be a problem property, choose it so the number of planning entities is fixed. For example in employee rostering: if the planning entity class had been EmployeeAssignment with a problem property getEmployee() and a planning variable getShift(), then it would be impossible to accurately predict how many EmployeeAssignment instances to make per Employee.
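The modeling guidance above can be sketched with plain classes (OptaPlanner annotations omitted; class and method names are illustrative, not the guide's exact example code). In a real model, ShiftAssignment would carry @PlanningEntity and its employee getter @PlanningVariable:

```java
// Problem fact: does not change during planning.
class Shift {
    private final String day;
    Shift(String day) { this.day = day; }
    String getDay() { return day; }
}

// Problem fact: does not change during planning.
class Employee {
    private final String name;
    Employee(String name) { this.name = name; }
    String getName() { return name; }
}

// Planning entity: the many side of the many-to-one relationship.
class ShiftAssignment {
    private final Shift shift; // problem property: the business key that identifies this entity
    private Employee employee; // planning variable: the Solver assigns and reassigns this

    ShiftAssignment(Shift shift) { this.shift = shift; }
    Shift getShift() { return shift; }
    Employee getEmployee() { return employee; }
    void setEmployee(Employee employee) { this.employee = employee; }
}
```

Note how the Shift problem property keeps the number of planning entities fixed: one ShiftAssignment exists per Shift, regardless of which Employee ends up assigned.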

4.3.2. Problem Fact
A problem fact class does not require any Planner specific code; implementing the interface Serializable is recommended (but not required). For example in n queens, the columns and rows are problem facts:
public class Column implements Serializable {
private int index;
// ... getters
}

public class Row implements Serializable {
private int index;
// ... getters
}

A problem fact can reference other problem facts:

public class Course implements Serializable {
private String code;
private Teacher teacher; // Other problem fact
private int lectureSize;
private int minWorkingDaySize;
private List<Curriculum> curriculumList; // Other problem facts
private int studentSize;
// ... getters
}

Sometimes the domain model should be adjusted before planning. For example, if the source data contains two Teacher instances for the same teacher that teaches at two different departments, it is harder to write a correct score constraint that constrains a teacher's spare time on the original model than on an adjusted model.
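One way to make such an adjustment is to merge duplicate records on a business key before planning. This is a hypothetical sketch (the simplified Teacher class and the name key are assumptions, not the guide's code):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class Teacher {
    private final String name;
    Teacher(String name) { this.name = name; }
    String getName() { return name; }
}

class TeacherMerger {
    // Collapse duplicate Teacher records (same name) into a single instance,
    // so a score constraint on a teacher's spare time sees one Teacher,
    // even when the source data lists the teacher once per department.
    static List<Teacher> mergeByName(List<Teacher> rawTeachers) {
        Map<String, Teacher> byName = new LinkedHashMap<>();
        for (Teacher teacher : rawTeachers) {
            byName.putIfAbsent(teacher.getName(), teacher);
        }
        return new ArrayList<>(byName.values());
    }
}
```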
4.3.3. Planning Entity
4.3.3.1. Planning Entity Annotation
A planning entity is a JavaBean (POJO) that changes during solving, for example a Queen that changes to another row. A planning problem has multiple planning entities, for example for a single n queens problem, each Queen is a planning entity. But there is usually only one planning entity class, for example the Queen class.
A planning entity class needs the @PlanningEntity annotation.
Each Queen is defined by its Column and has a planning variable Row. This means that a Queen's column never changes during solving, while its row does change.
@PlanningEntity
public class Queen {
private Column column;
// Planning variables: changes during planning, between score calculations.
private Row row;
// ... getters and setters
}

For example in course timetabling, a Lecture is defined by its Course and its index in that course (because one course has multiple lectures). Each Lecture needs to be scheduled into a Period and a Room so it has two planning variables (period and room). For example: the course Mathematics has eight lectures per week, of which the first lecture is Monday morning at 08:00 in room 212.
@PlanningEntity
public class Lecture {
private Course course;
private int lectureIndexInCourse;
// Planning variables: changes during planning, between score calculations.
private Period period;
private Room room;
// ...
}

Without automated scanning, the solver configuration also needs to declare each planning entity class:

<solver> ... <entityClass>org.optaplanner.examples.nqueens.domain.Queen</entityClass> ... </solver>
Avoid creating unnecessary planning entity classes; that leads to a more complex model, harder Move implementations and slower score calculation. For example, do not create a planning entity class to hold the total free time of a teacher, which needs to be kept up to date as the Lecture planning entities change. Instead, calculate the free time in the score constraints and put the result per teacher into a logically inserted score object.
4.3.3.2. Planning Entity Difficulty
Some optimization algorithms work more efficiently when they know which planning entities are more difficult to plan. To provide that estimate, add a difficultyComparatorClass to the @PlanningEntity annotation:
@PlanningEntity(difficultyComparatorClass = CloudProcessDifficultyComparator.class)
public class CloudProcess {
// ...
}

public class CloudProcessDifficultyComparator implements Comparator<CloudProcess> {
public int compare(CloudProcess a, CloudProcess b) {
return new CompareToBuilder()
.append(a.getRequiredMultiplicand(), b.getRequiredMultiplicand())
.append(a.getId(), b.getId())
.toComparison();
}
}

Alternatively, add a difficultyWeightFactoryClass to the @PlanningEntity annotation, so that you have access to the rest of the problem facts from the Solution too:
@PlanningEntity(difficultyWeightFactoryClass = QueenDifficultyWeightFactory.class)
public class Queen {
// ...
}

None of the current planning variable states should be used to compare planning entity difficulty. During construction heuristics, those variables are likely to be null anyway. For example, a Queen's row variable should not be used.
4.3.4. Planning Variable
4.3.4.1. Planning Variable Annotation
A planning variable is a JavaBean property (a getter and a setter) on a planning entity that changes during planning. For example, a Queen's row property is a planning variable. Note that even though a Queen's row property changes to another Row during planning, no Row instance itself is changed.
A planning variable getter needs a @PlanningVariable annotation, which needs a non-empty valueRangeProviderRefs property.
@PlanningEntity
public class Queen {
...
private Row row;
@PlanningVariable(valueRangeProviderRefs = {"rowRange"})
public Row getRow() {
return row;
}
public void setRow(Row row) {
this.row = row;
}
}

The valueRangeProviderRefs property defines the possible planning values for this planning variable. It references one or more @ValueRangeProvider ids. Alternatively, annotate the field instead of the getter:
@PlanningEntity
public class Queen {
...
@PlanningVariable(valueRangeProviderRefs = {"rowRange"})
private Row row;
}
4.3.4.2. Nullable Planning Variable
By default, an initialized planning variable cannot be null, so an initialized solution will never use null for any of its planning variables. In an over-constrained use case, this can be counterproductive. For example: in task assignment with too many tasks for the workforce, we would rather leave low priority tasks unassigned instead of assigning them to an overloaded worker.
To allow an initialized planning variable to be null, set nullable to true:
@PlanningVariable(..., nullable = true)
public Worker getWorker() {
return worker;
}

Planner will automatically add the value null to the value range. There is no need to add null in a collection used by a ValueRangeProvider.
Repeated planning (such as real-time planning) does not mix well with a nullable planning variable: every time the Solver starts or a problem fact change is made, the construction heuristics will try to initialize all the null variables again, which can be a huge waste of time. One way to deal with this is to change when a planning entity should be reinitialized with a reinitializeVariableEntityFilter:
@PlanningVariable(..., nullable = true, reinitializeVariableEntityFilter = ReinitializeTaskFilter.class)
public Worker getWorker() {
return worker;
}
4.3.4.3. When is a Planning Variable Considered Initialized?
A planning variable is considered initialized if its value is not null or if the variable is nullable. So a nullable variable is always considered initialized, even when a custom reinitializeVariableEntityFilter triggers a reinitialization during construction heuristics.
A planning entity is initialized if all of its planning variables are initialized. A Solution is initialized if all of its planning entities are initialized.
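The definition above can be sketched in plain Java (illustrative names; Planner performs this check internally, you never write it yourself). For a non-nullable variable, initialized simply means not null:

```java
import java.util.ArrayList;
import java.util.List;

class Row { }

class Queen {
    private Row row; // the (non-nullable) planning variable in this sketch
    Row getRow() { return row; }
    void setRow(Row row) { this.row = row; }
}

class InitializationCheck {
    // A non-nullable variable is initialized when it is not null;
    // the solution is initialized when every entity's variables are.
    static boolean isInitialized(List<Queen> queens) {
        for (Queen queen : queens) {
            if (queen.getRow() == null) {
                return false;
            }
        }
        return true;
    }
}
```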
4.3.5. Planning Value and Planning Value Range
4.3.5.1. Planning Value
A planning value is a possible value for a genuine planning variable. Usually, a planning value is a problem fact, but it can also be any object, for example a double. It can even be another planning entity, or even an interface implemented by both a planning entity and a problem fact.
A planning value range is the set of possible planning values for a planning variable. It can be countable (for example row 1, 2, 3 or 4) or uncountable (for example any double between 0.0 and 1.0).
4.3.5.2. Planning Value Range Provider
4.3.5.2.1. Overview
The value range of a planning variable is defined with the @ValueRangeProvider annotation. A @ValueRangeProvider annotation always has a property id, which is referenced by the @PlanningVariable's property valueRangeProviderRefs. It can be declared in two places:
- On the Solution: All planning entities share the same value range.
- On the planning entity: The value range differs per planning entity. This is less common.
The return type of that member can be two types:
- Collection: The value range is defined by a Collection (usually a List) of its possible values.
- ValueRange: The value range is defined by its bounds. This is less common.
4.3.5.2.2. ValueRangeProvider on the Solution
The Solution implementation has a method that returns a Collection (or a ValueRange). Any value from that Collection is a possible planning value for this planning variable.
@PlanningVariable(valueRangeProviderRefs = {"rowRange"})
public Row getRow() {
return row;
}

@PlanningSolution
public class NQueens implements Solution<SimpleScore> {
// ...
@ValueRangeProvider(id = "rowRange")
public List<Row> getRowList() {
return rowList;
}
}

The returned Collection (or ValueRange) must not contain the value null, not even for a nullable planning variable. Alternatively, annotate the field instead of the getter:
@PlanningSolution
public class NQueens implements Solution<SimpleScore> {
...
@ValueRangeProvider(id = "rowRange")
private List<Row> rowList;
}4.3.5.2.3. ValueRangeProvider on the Planning Entity
@PlanningVariable(valueRangeProviderRefs = {"departmentRoomRange"})
public Room getRoom() {
return room;
}
@ValueRangeProvider(id = "departmentRoomRange")
public List<Room> getPossibleRoomList() {
return getCourse().getTeacher().getDepartment().getRoomList();
}

Each entity has its own List instance, unless multiple entities have the same value range. For example, if teacher A and B belong to the same department, they use the same List<Room> instance. Furthermore, each List contains a subset of the same set of planning value instances. For example, if department A and B can both use room X, then their List<Room> instances contain the same Room instance.
A ValueRangeProvider on the planning entity consumes more memory than a ValueRangeProvider on the Solution and disables certain automatic performance optimizations.
A ValueRangeProvider on the planning entity is not currently compatible with a chained variable.
4.3.5.2.4. ValueRangeFactory
Instead of a Collection, you can also return a ValueRange or CountableValueRange, built by the ValueRangeFactory:
@ValueRangeProvider(id = "delayRange")
public CountableValueRange<Integer> getDelayRange() {
return ValueRangeFactory.createIntValueRange(0, 5000);
}

A ValueRange uses far less memory, because it only holds the bounds. In the example above, a Collection would need to hold all 5000 ints, instead of just the two bounds.
An incrementUnit can be specified, for example if you have to buy stocks in units of 200 pieces:
@ValueRangeProvider(id = "stockAmountRange")
public CountableValueRange<Integer> getStockAmountRange() {
// Range: 0, 200, 400, 600, ..., 9999600, 9999800, 10000000
return ValueRangeFactory.createIntValueRange(0, 10000000, 200);
}

Use a CountableValueRange instead of a ValueRange whenever possible (so Planner knows that it's countable).
The ValueRangeFactory has creation methods for several value class types:
- int: A 32-bit integer range.
- long: A 64-bit integer range.
- double: A 64-bit floating point range which only supports random selection (because it does not implement CountableValueRange).
- BigInteger: An arbitrary-precision integer range.
- BigDecimal: A decimal point range. By default, the increment unit is the lowest non-zero value in the scale of the bounds.
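To see why a bounds-only range is cheap, here is a minimal sketch of the semantics behind an int range with an increment unit. This is not Planner's implementation (the class name is hypothetical, and whether Planner treats the upper bound as inclusive may differ); it only illustrates that size, lookup and membership all derive from three stored ints:

```java
// Behaves like the full list of values from 'from' (inclusive) to 'to'
// (exclusive), stepping by incrementUnit, while storing only three ints.
class IntRangeSketch {
    private final int from;
    private final int to;
    private final int incrementUnit;

    IntRangeSketch(int from, int to, int incrementUnit) {
        this.from = from;
        this.to = to;
        this.incrementUnit = incrementUnit;
    }

    // Number of values in the range, computed from the bounds.
    long getSize() {
        return (to - from) / incrementUnit;
    }

    // The value at a given index, computed instead of stored.
    int get(long index) {
        return from + (int) (index * incrementUnit);
    }

    // Membership test: inside the bounds and aligned on the step.
    boolean contains(int value) {
        return value >= from && value < to && (value - from) % incrementUnit == 0;
    }
}
```

A Collection equivalent of the stock example would hold 50,000 Integer objects; this sketch holds three ints no matter how wide the range is.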
4.3.5.2.5. Combine ValueRangeProviders
Value range providers can be combined, for example:
@PlanningVariable(valueRangeProviderRefs = {"companyCarRange", "personalCarRange"})
public Car getCar() {
return car;
}

@ValueRangeProvider(id = "companyCarRange")
public List<CompanyCar> getCompanyCarList() {
return companyCarList;
}
@ValueRangeProvider(id = "personalCarRange")
public List<PersonalCar> getPersonalCarList() {
return personalCarList;
}
4.3.5.3. Planning Value Strength
Some optimization algorithms work more efficiently when they know which planning values are stronger. To provide that estimate, add a strengthComparatorClass to the @PlanningVariable annotation:
@PlanningVariable(..., strengthComparatorClass = CloudComputerStrengthComparator.class)
public CloudComputer getComputer() {
// ...
}

public class CloudComputerStrengthComparator implements Comparator<CloudComputer> {
public int compare(CloudComputer a, CloudComputer b) {
return new CompareToBuilder()
.append(a.getMultiplicand(), b.getMultiplicand())
.append(b.getCost(), a.getCost()) // Descending (but this is debatable)
.append(a.getId(), b.getId())
.toComparison();
}
}

If you have multiple planning value classes in the same value range, the strengthComparatorClass needs to implement a Comparator of a common superclass (for example Comparator<Object>) and be able to handle comparing instances of those different classes.
Alternatively, add a strengthWeightFactoryClass to the @PlanningVariable annotation, so you have access to the rest of the problem facts from the solution too:
@PlanningVariable(..., strengthWeightFactoryClass = RowStrengthWeightFactory.class)
public Row getRow() {
// ...
}

None of the current planning variable states should be used to compare planning value strength. During construction heuristics, those variables are likely to be null. For example, none of the row variables of any Queen may be used to determine the strength of a Row.
4.3.5.4. Chained Planning Variable (TSP, VRP, ...)
Some use cases, such as TSP and vehicle routing, require chaining. A planning variable that is chained either:
- Directly points to a problem fact (or planning entity), which is called an anchor.
- Points to another planning entity with the same planning variable, which recursively points to an anchor.

A chain complies with the following rules:
- A chain is never a loop. The tail is always open.
- Every chain always has exactly one anchor. The anchor is a problem fact, never a planning entity.
- A chain is never a tree, it is always a line. Every anchor or planning entity has at most one trailing planning entity.
- Every initialized planning entity is part of a chain.
- An anchor with no planning entities pointing to it, is also considered a chain.
The planning problem instance given to the Solver must be valid: its chains must comply with the rules above.
Planner's built-in Moves do chain correction to guarantee that the model stays valid:

A custom Move implementation must also leave the model in a valid state.
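Chain correction can be sketched in plain Java: when an entity is moved out of a chain, the entity that trailed it is re-pointed at its former predecessor. This is an illustrative doubly-linked sketch (the ChainElement class is hypothetical, not Planner's Move code, which tracks the trailing entity via a shadow variable instead):

```java
class ChainElement {
    private ChainElement previous; // null for an anchor
    private ChainElement next;     // the trailing element, if any

    ChainElement getPrevious() { return previous; }
    ChainElement getNext() { return next; }

    // Insert this element right after the given element.
    void insertAfter(ChainElement newPrevious) {
        this.previous = newPrevious;
        this.next = newPrevious.next;
        if (this.next != null) {
            this.next.previous = this;
        }
        newPrevious.next = this;
    }

    // Detach this element from its chain. The chain correction:
    // the trailing element is re-pointed at this element's predecessor,
    // so the chain stays one open line with a single anchor.
    void removeFromChain() {
        if (next != null) {
            next.previous = previous;
        }
        if (previous != null) {
            previous.next = next;
        }
        previous = null;
        next = null;
    }
}
```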
For example, in TSP the anchor is a Domicile (in vehicle routing it is Vehicle):
public class Domicile ... implements Standstill {
...
public City getCity() {...}
}

The anchor (which is a problem fact) and the planning entity implement a common interface, for example TSP's Standstill:
public interface Standstill {
City getCity();
}

That interface is the return type of the planning variable. Furthermore, the planning variable is chained. For example TSP's Visit (in vehicle routing it is Customer):
@PlanningEntity
public class Visit ... implements Standstill {
...
public City getCity() {...}
@PlanningVariable(graphType = PlanningVariableGraphType.CHAINED, valueRangeProviderRefs = {"domicileRange", "visitRange"})
public Standstill getPreviousStandstill() {
return previousStandstill;
}
public void setPreviousStandstill(Standstill previousStandstill) {
this.previousStandstill = previousStandstill;
}
}

Notice how two value range providers are usually combined:
- The value range provider that holds the anchors, for example domicileList.
- The value range provider that holds the initialized planning entities, for example visitList.
4.3.6. Shadow Variable
4.3.6.1. Introduction
A shadow variable is a planning variable whose correct value can be deduced from the state of the genuine planning variables. Even though such a variable violates the principle of normalization by definition, it can be very practical to use in some use cases.

4.3.6.2. Bi-directional Variable (Inverse Relation Shadow Variable)
Two variables are bi-directional if their instances always point to each other (unless one side points to null and the other side does not exist). So if A references B, then B references A. To map a bi-directional relationship, annotate the master side as a genuine planning variable:

@PlanningEntity
public class CloudProcess {
@PlanningVariable(...)
public CloudComputer getComputer() {
return computer;
}
public void setComputer(CloudComputer computer) {...}
}

The other side is annotated with an @InverseRelationShadowVariable annotation on a Collection (usually a Set or List) property:
@PlanningEntity
public class CloudComputer {
@InverseRelationShadowVariable(sourceVariableName = "computer")
public List<CloudProcess> getProcessList() {
return processList;
}
The sourceVariableName property is the name of the genuine planning variable on the return type of the getter (so the name of the genuine planning variable on the other side).
The shadow property, which is a Collection, can never be null. If no genuine variable references that shadow entity, then it is an empty Collection. Furthermore, it must be a mutable Collection because once the Solver starts initializing or changing genuine planning variables, it will add elements to and remove elements from the Collections of those shadow variables accordingly.
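Conceptually, Planner keeps such an inverse collection in sync whenever the genuine variable changes, along these lines. This is a hand-rolled sketch of the mechanism (with @InverseRelationShadowVariable you never write this yourself; the simplified classes mirror the cloud balancing example but are not its actual code):

```java
import java.util.ArrayList;
import java.util.List;

class CloudComputer {
    // The shadow Collection: mutable and never null, so it can be
    // updated as genuine variables change.
    private final List<CloudProcess> processList = new ArrayList<>();
    List<CloudProcess> getProcessList() { return processList; }
}

class CloudProcess {
    private CloudComputer computer; // the genuine planning variable

    CloudComputer getComputer() { return computer; }

    // Changing the genuine variable also updates both inverse collections,
    // keeping the bi-directional relationship consistent.
    void setComputer(CloudComputer newComputer) {
        if (computer != null) {
            computer.getProcessList().remove(this);
        }
        computer = newComputer;
        if (newComputer != null) {
            newComputer.getProcessList().add(this);
        }
    }
}
```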
This also applies to a chained planning variable, where the inverse relationship is one-to-one. For example in vehicle routing:
@PlanningEntity
public class Customer ... {
@PlanningVariable(graphType = PlanningVariableGraphType.CHAINED, ...)
public Standstill getPreviousStandstill() {
return previousStandstill;
}
public void setPreviousStandstill(Standstill previousStandstill) {...}
}@PlanningEntity
public class Standstill {
@InverseRelationShadowVariable(sourceVariableName = "previousStandstill")
public Customer getNextCustomer() {
return nextCustomer;
}
public void setNextCustomer(Customer nextCustomer) {...}
}

The input planning problem of a Solver must not violate bi-directional relationships: if A points to B, then B must point to A. Planner will not violate that principle during planning, but the input must not violate it either.
4.3.6.3. Anchor Shadow Variable
An anchor shadow variable is the anchor of a chained variable. Annotate the anchor property with an @AnchorShadowVariable annotation:
@PlanningEntity
public class Customer {
@AnchorShadowVariable(sourceVariableName = "previousStandstill")
public Vehicle getVehicle() {...}
public void setVehicle(Vehicle vehicle) {...}
}

The sourceVariableName property is the name of the chained variable on the same entity class.
4.3.6.4. Custom VariableListener
To update a shadow variable, Planner uses a VariableListener. To define a custom shadow variable, write a custom VariableListener: implement the interface and annotate it on the shadow variable that needs to change.
@PlanningVariable(...)
public Standstill getPreviousStandstill() {
return previousStandstill;
}
@CustomShadowVariable(variableListenerClass = VehicleUpdatingVariableListener.class,
sources = {@CustomShadowVariable.Source(variableName = "previousStandstill")})
public Vehicle getVehicle() {
return vehicle;
}

The variableName of the @CustomShadowVariable.Source is the variable that triggers changes in the shadow variable(s).
If the source variable is declared on a different class than the shadow variable's class, also specify the entityClass on @CustomShadowVariable.Source. In that case, make sure that entityClass is also properly configured as a planning entity class in the solver config, or the VariableListener will simply never trigger.
For example, the VehicleUpdatingVariableListener assures that every Customer in a chain has the same Vehicle, namely the chain's anchor.
public class VehicleUpdatingVariableListener implements VariableListener<Customer> {
public void afterEntityAdded(ScoreDirector scoreDirector, Customer customer) {
updateVehicle(scoreDirector, customer);
}
public void afterVariableChanged(ScoreDirector scoreDirector, Customer customer) {
updateVehicle(scoreDirector, customer);
}
...
protected void updateVehicle(ScoreDirector scoreDirector, Customer sourceCustomer) {
Standstill previousStandstill = sourceCustomer.getPreviousStandstill();
Vehicle vehicle = previousStandstill == null ? null : previousStandstill.getVehicle();
Customer shadowCustomer = sourceCustomer;
while (shadowCustomer != null && shadowCustomer.getVehicle() != vehicle) {
scoreDirector.beforeVariableChanged(shadowCustomer, "vehicle");
shadowCustomer.setVehicle(vehicle);
scoreDirector.afterVariableChanged(shadowCustomer, "vehicle");
shadowCustomer = shadowCustomer.getNextCustomer();
}
}
}

A VariableListener can only change shadow variables. It must never change a genuine planning variable or a problem fact.
Any change of a shadow variable must be told to the ScoreDirector.
If one VariableListener changes two shadow variables (because having two separate VariableListeners would be inefficient), then annotate only the first shadow variable with the variableListenerClass and let the other shadow variable(s) reference the first shadow variable:
@PlanningVariable(...)
public Standstill getPreviousStandstill() {
return previousStandstill;
}
@CustomShadowVariable(variableListenerClass = TransportTimeAndCapacityUpdatingVariableListener.class,
sources = {@CustomShadowVariable.Source(variableName = "previousStandstill")})
public Integer getTransportTime() {
return transportTime;
}
@CustomShadowVariable(variableListenerRef = @PlanningVariableReference(variableName = "transportTime"))
public Integer getCapacity() {
return capacity;
}
4.3.6.5. VariableListener triggering order
All shadow variables are triggered by a VariableListener, regardless of whether it is a built-in or a custom shadow variable. The genuine and shadow variables form a graph that determines the order in which the afterEntityAdded(), afterVariableChanged() and afterEntityRemoved() methods are called:

- The first VariableListener's after*() methods trigger after the last genuine variable has changed. Therefore the genuine variables (A and B in the example above) are guaranteed to be in a consistent state across all their instances (with values A1, A2 and B1 in the example above) because the entire Move has been applied.
- The second VariableListener's after*() methods trigger after the last of the first shadow variables has changed. Therefore the first shadow variable (C in the example above) is guaranteed to be in a consistent state across all its instances (with values C1 and C2 in the example above). And of course the genuine variables too.
- And so forth.
There is no guarantee of the order in which the after*() methods are called for the same VariableListener with different parameters (such as A1 and A2 in the example above), although they are likely to be in the order in which they were affected.
4.3.7. Planning Problem and Planning Solution
4.3.7.1. Planning Problem Instance
A dataset for a planning problem needs to be wrapped in a class for the Solver to solve. You must implement this class. For example in n queens, this is the NQueens class, which contains a Column list, a Row list, and a Queen list.
A planning problem is actually an unsolved planning Solution or, stated differently, an uninitialized Solution. Therefore, that wrapping class must implement the Solution interface. For example in n queens, that NQueens class implements Solution, yet every Queen in a fresh NQueens class is not yet assigned to a Row (their row property is null). This is not a feasible solution. It's not even a possible solution. It's an uninitialized solution.
4.3.7.2. Solution Interface
You need to present the problem as a Solution instance to the Solver, so your class needs to implement the Solution interface:
public interface Solution<S extends Score> {
S getScore();
void setScore(S score);
Collection<? extends Object> getProblemFacts();
}

For example, an NQueens instance holds a list of all columns, all rows and all Queen instances:
@PlanningSolution
public class NQueens implements Solution<SimpleScore> {
private int n;
// Problem facts
private List<Column> columnList;
private List<Row> rowList;
// Planning entities
private List<Queen> queenList;
// ...
}

A Solution implementation needs the @PlanningSolution annotation. Without automated scanning, the solver configuration also needs to declare the planning solution class:
<solver> ... <solutionClass>org.optaplanner.examples.nqueens.domain.NQueens</solutionClass> ... </solver>
4.3.7.3. The getScore() and setScore() Methods
A Solution requires a score property. The score property is null if the Solution is uninitialized or if the score has not yet been (re)calculated. The score property is usually typed to the specific Score implementation you use. For example, NQueens uses a SimpleScore:
@PlanningSolution
public class NQueens implements Solution<SimpleScore> {
private SimpleScore score;
public SimpleScore getScore() {
return score;
}
public void setScore(SimpleScore score) {
this.score = score;
}
// ...
}

Most use cases use a HardSoftScore instead:
@PlanningSolution
public class CourseSchedule implements Solution<HardSoftScore> {
private HardSoftScore score;
public HardSoftScore getScore() {
return score;
}
public void setScore(HardSoftScore score) {
this.score = score;
}
// ...
}

See the score calculation documentation for more information on the Score implementations.
4.3.7.4. The getProblemFacts() Method
All objects returned by the getProblemFacts() method will be asserted into the Drools working memory, so the score rules can access them. For example, NQueens just returns all Column and Row instances.
public Collection<? extends Object> getProblemFacts() {
List<Object> facts = new ArrayList<Object>();
facts.addAll(columnList);
facts.addAll(rowList);
// Do not add the planning entity's (queenList) because that will be done automatically
return facts;
}

All planning entities are automatically inserted into the Drools working memory. Do not add them in the method getProblemFacts().
A common mistake is to use facts.add(...) instead of facts.addAll(...) for a Collection, which leads to score rules failing to match because the elements of that Collection are not in the Drools working memory.
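The difference is easy to see with plain collections: add(...) inserts the Collection itself as a single fact, while addAll(...) inserts its elements (illustrative class name):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class FactListDemo {
    // Wrong: the List becomes one fact; its rows never reach the rules.
    static List<Object> wrongFacts(List<String> rows) {
        List<Object> facts = new ArrayList<>();
        facts.add(rows);
        return facts;
    }

    // Correct: one fact per row, as the score rules expect.
    static List<Object> correctFacts(List<String> rows) {
        List<Object> facts = new ArrayList<>();
        facts.addAll(rows);
        return facts;
    }
}
```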
The getProblemFacts() method is not called often: at most once per solver phase per solver thread.
4.3.7.4.1. Cached Problem Fact
A cached problem fact is a problem fact that does not exist in the real domain model, but is calculated before the Solver really starts solving. The getProblemFacts() method has the chance to enrich the domain model with such cached problem facts, which can lead to simpler and faster score constraints.
For example in examination, a TopicConflict instance is created for every two Topics which share at least one Student.
public Collection<? extends Object> getProblemFacts() {
List<Object> facts = new ArrayList<Object>();
// ...
facts.addAll(calculateTopicConflictList());
// ...
return facts;
}
private List<TopicConflict> calculateTopicConflictList() {
List<TopicConflict> topicConflictList = new ArrayList<TopicConflict>();
for (Topic leftTopic : topicList) {
for (Topic rightTopic : topicList) {
if (leftTopic.getId() < rightTopic.getId()) {
int studentSize = 0;
for (Student student : leftTopic.getStudentList()) {
if (rightTopic.getStudentList().contains(student)) {
studentSize++;
}
}
if (studentSize > 0) {
topicConflictList.add(new TopicConflict(leftTopic, rightTopic, studentSize));
}
}
}
}
return topicConflictList;
}

A score constraint that checks for shared students can then use each TopicConflict instance as a problem fact, rather than having to combine every two Student instances.
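For large datasets, the nested contains() scan above can be replaced by first indexing topics per student, so each shared student contributes directly to its topic pairs. This is a sketch with simplified string identifiers (not the example's actual Topic and Student classes):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class CachedConflictDemo {
    // Count shared students per topic pair ("topicA|topicB" keys) by grouping
    // topics per student, avoiding the quadratic contains() scans.
    static Map<String, Integer> sharedStudentCounts(Map<String, List<String>> studentsByTopic) {
        // Invert the input: which topics does each student attend?
        Map<String, List<String>> topicsByStudent = new HashMap<>();
        for (Map.Entry<String, List<String>> entry : studentsByTopic.entrySet()) {
            for (String student : entry.getValue()) {
                topicsByStudent.computeIfAbsent(student, k -> new ArrayList<>())
                        .add(entry.getKey());
            }
        }
        // Each student adds 1 to every pair of topics they attend.
        Map<String, Integer> counts = new HashMap<>();
        for (List<String> topics : topicsByStudent.values()) {
            Collections.sort(topics); // deterministic pair keys
            for (int i = 0; i < topics.size(); i++) {
                for (int j = i + 1; j < topics.size(); j++) {
                    counts.merge(topics.get(i) + "|" + topics.get(j), 1, Integer::sum);
                }
            }
        }
        return counts;
    }
}
```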
4.3.7.5. Extract the entities from the Solution
Planner needs to extract the entity instances from the Solution instance. It gets those collection(s) by calling every getter (or field) that is annotated with @PlanningEntityCollectionProperty:
@PlanningSolution
public class NQueens implements Solution<SimpleScore> {
...
private List<Queen> queenList;
@PlanningEntityCollectionProperty
public List<Queen> getQueenList() {
return queenList;
}
}

There can be multiple @PlanningEntityCollectionProperty annotated members. Those can even return a Collection with the same entity class type.
A planning entity that is not in a collection can be annotated with @PlanningEntityProperty on its getter (or field) instead.
4.3.7.6. Cloning a Solution
Planner makes a planning clone of the Solution whenever it finds a new best solution. A planning clone of a Solution must fulfill these requirements:
- The clone must represent the same planning problem. Usually it reuses the same instances of the problem facts and problem fact collections as the original.
- The clone must use different, cloned instances of the entities and entity collections. Changes to an original
Solutionentity's variables must not affect its clone.

4.3.7.6.1. FieldAccessingSolutionCloner
The FieldAccessingSolutionCloner is used by default. It works well for most use cases.
When the FieldAccessingSolutionCloner clones your entity collection, it may not recognize the implementation and replace it with ArrayList, LinkedHashSet or TreeSet (whichever is more applicable). It recognizes most of the common JDK Collection implementations.
The FieldAccessingSolutionCloner does not clone problem facts by default. If any of your problem facts needs to be deep cloned for a planning clone, for example if the problem fact references a planning entity or the planning solution, mark it with a @DeepPlanningClone annotation:
@DeepPlanningClone
public class SeatDesignationDependency {
private SeatDesignation leftSeatDesignation; // planning entity
private SeatDesignation rightSeatDesignation; // planning entity
...
}
Because SeatDesignation is a planning entity (which is deep planning cloned automatically), SeatDesignationDependency must also be deep planning cloned.
The @DeepPlanningClone annotation can also be used on a getter method.
4.3.7.6.2. Custom Cloning: Make Solution Implement PlanningCloneable
If the Solution implements PlanningCloneable, Planner will automatically choose to clone it by calling the planningClone() method.
public interface PlanningCloneable<T> {
T planningClone();
}
If NQueens implements PlanningCloneable, it would only deep clone all Queen instances. When the original solution is changed during planning, by changing a Queen, the clone stays the same.
public class NQueens implements Solution<...>, PlanningCloneable<NQueens> {
...
/**
* Clone will only deep copy the {@link #queenList}.
*/
public NQueens planningClone() {
NQueens clone = new NQueens();
clone.id = id;
clone.n = n;
clone.columnList = columnList;
clone.rowList = rowList;
List<Queen> clonedQueenList = new ArrayList<Queen>(queenList.size());
for (Queen queen : queenList) {
clonedQueenList.add(queen.planningClone());
}
clone.queenList = clonedQueenList;
clone.score = score;
return clone;
}
}
The planningClone() method should only deep clone the planning entities. Notice that the problem facts, such as Column and Row, are normally not cloned: even their List instances are not cloned. If you were to clone the problem facts too, then you would have to make sure that the new planning entity clones also refer to the new problem fact clones used by the solution. For example, if you were to clone all Row instances, then each Queen clone and the NQueens clone itself should refer to those new Row clones.
4.3.7.7. Create an Uninitialized Solution
Your application needs to create a Solution instance to represent the planning problem's dataset, so it can be set on the Solver as the planning problem to solve. For example in n queens, an NQueens instance is created with the required Column and Row instances, with every Queen set to a different column and every Queen's row still set to null.
private NQueens createNQueens(int n) {
NQueens nQueens = new NQueens();
nQueens.setId(0L);
nQueens.setN(n);
nQueens.setColumnList(createColumnList(nQueens));
nQueens.setRowList(createRowList(nQueens));
nQueens.setQueenList(createQueenList(nQueens));
return nQueens;
}
private List<Queen> createQueenList(NQueens nQueens) {
int n = nQueens.getN();
List<Queen> queenList = new ArrayList<Queen>(n);
long id = 0L;
for (Column column : nQueens.getColumnList()) {
Queen queen = new Queen();
queen.setId(id);
id++;
queen.setColumn(column);
// Notice that we leave the PlanningVariable properties on null
queenList.add(queen);
}
return queenList;
}
Figure 4.1. Uninitialized Solution for the 4 Queens Puzzle

Usually, most of this data comes from your data layer, and your Solution implementation just aggregates that data and creates the uninitialized planning entity instances to plan:
private void createLectureList(CourseSchedule schedule) {
List<Course> courseList = schedule.getCourseList();
List<Lecture> lectureList = new ArrayList<Lecture>(courseList.size());
long id = 0L;
for (Course course : courseList) {
for (int i = 0; i < course.getLectureSize(); i++) {
Lecture lecture = new Lecture();
lecture.setId(id);
id++;
lecture.setCourse(course);
lecture.setLectureIndexInCourse(i);
// Notice that we leave the PlanningVariable properties (period and room) on null
lectureList.add(lecture);
}
}
schedule.setLectureList(lectureList);
}
4.4. Use the Solver
4.4.1. The Solver Interface
A Solver implementation will solve your planning problem.
public interface Solver {
void solve(Solution planningProblem);
Solution getBestSolution();
// ...
}
A Solver can only solve one planning problem instance at a time. A Solver should only be accessed from a single thread, except for the methods that are specifically javadocced as being thread-safe. It is built with a SolverFactory; do not implement or build it yourself.
4.4.2. Solving a Problem
Solving a problem is straightforward once you have:
- A Solver built from a solver configuration
- A Solution that represents the planning problem instance
solver.solve(planningProblem);
Solution bestSolution = solver.getBestSolution();
For example in n queens, the getBestSolution() method will return an NQueens instance with every Queen assigned to a Row.
Figure 4.2. Best Solution for the 4 Queens Puzzle in 8ms (Also an Optimal Solution)

The solve(Solution) method can take a long time (depending on the problem size and the solver configuration). The Solver will remember (actually clone) the best solution it encounters during solving. Depending on a number of factors (including problem size, how much time the Solver has, and the solver configuration), that best solution will be a feasible or even an optimal solution.
The Solution instance given to the method solve(Solution) will be changed by the Solver, but do not mistake it for the best solution.
The Solution instance returned by the getBestSolution() method will most likely be a clone of the instance given to the method solve(Solution), which means it is a different instance.
The Solution instance given to the solve(Solution) method does not need to be uninitialized. It can be partially or fully initialized, which is likely to be the case in repeated planning.
4.4.3. Environment Mode: Are There Bugs in my Code?
The environment mode allows you to detect common bugs in your implementation. Set it in the solver configuration:
<solver> <environmentMode>FAST_ASSERT</environmentMode> ... </solver>
A solver has a single Random instance. Some solver configurations use the Random instance a lot more than others. For example, Simulated Annealing depends highly on random numbers, while Tabu Search only depends on it to deal with score ties. The environment mode influences the seed of that Random instance.
4.4.3.1. FULL_ASSERT
The FULL_ASSERT mode turns on all assertions to fail fast on a bug in a Move implementation, a score rule or the engine itself. This mode is reproducible. It is also intrusive: it calls calculateScore() more frequently than a non-assert mode. The FULL_ASSERT mode is horribly slow, because it does not rely on incremental score calculation.
4.4.3.2. NON_INTRUSIVE_FULL_ASSERT
The NON_INTRUSIVE_FULL_ASSERT mode turns on several assertions to fail fast on a bug. This mode is reproducible. It is non-intrusive, because it does not call calculateScore() more frequently than a non-assert mode. It is nevertheless very slow, because it does not rely on incremental score calculation.
4.4.3.3. FAST_ASSERT
The FAST_ASSERT mode turns on most assertions (such as asserting that an undo Move's score is the same as before the Move) to fail fast on a bug. This mode is reproducible. It is also intrusive: it calls calculateScore() more frequently than a non-assert mode. The FAST_ASSERT mode is slow. It is recommended to write a test case that does a short run of your planning problem with the FAST_ASSERT mode on.
4.4.3.4. REPRODUCIBLE (default)
The REPRODUCIBLE mode is the default because it is recommended during development: two runs in the same Planner version will execute the same code in the same order. Despite this mode, your application might still not be fully reproducible because of:
- Use of HashSet (or another Collection which has an inconsistent order between JVM runs) for collections of planning entities or planning values (but not normal problem facts), especially in the Solution implementation. Replace it with LinkedHashSet.
- Combining time-gradient dependent algorithms (most notably Simulated Annealing) with time spent termination. A sufficiently large difference in allocated CPU time will influence the time gradient values. Replace Simulated Annealing with Late Acceptance, or instead replace time spent termination with step count termination.
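The first pitfall can be demonstrated with a small, self-contained sketch (the entity names are invented for the example): a LinkedHashSet keeps a deterministic insertion-based iteration order across JVM runs, whereas HashSet makes no such guarantee.

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class ReproducibleOrderDemo {
    public static void main(String[] args) {
        // A LinkedHashSet iterates in insertion order on every JVM run,
        // unlike HashSet, whose iteration order may differ between runs.
        Set<String> entities = new LinkedHashSet<>();
        entities.add("Queen-0");
        entities.add("Queen-1");
        entities.add("Queen-2");
        StringBuilder order = new StringBuilder();
        for (String entity : entities) {
            order.append(entity).append(' ');
        }
        System.out.println(order.toString().trim());
    }
}
```

Because the iteration order is stable, a Solver in REPRODUCIBLE mode visits these entities in the same order on every run.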
4.4.3.5. PRODUCTION
The PRODUCTION mode is the fastest, but it is not reproducible. It is recommended for a production environment, unless reproducibility is required there too.
4.4.4. Logging Level: What is the Solver Doing?
The best way to illuminate the black box that is a Solver is to play with the logging level:
- error: Log errors, except those that are thrown to the calling code as a RuntimeException.
  Note: If an error happens, Planner normally fails fast: it throws a subclass of RuntimeException with a detailed message to the calling code. It does not log the error itself, to avoid duplicate log messages. Unless the calling code explicitly catches and eats that RuntimeException, a Thread's default ExceptionHandler will log it as an error anyway. Meanwhile, the code is disrupted from doing further harm or obfuscating the error.
- warn: Log suspicious circumstances.
- info: Log every phase and the solver itself. See scope overview.
- debug: Log every step of every phase. See scope overview.
- trace: Log every move of every step of every phase. See scope overview.
  Note: Turning on trace logging will slow down performance considerably: it is often four times slower. However, it is invaluable during development to discover a bottleneck. Even debug logging can slow down performance considerably for fast-stepping algorithms (such as Late Acceptance and Simulated Annealing), but not for slow-stepping algorithms (such as Tabu Search).
For example, set the logging level to debug to see when the phases end and how fast steps are taken:
INFO Solving started: time spent (3), best score (uninitialized/0), random (JDK with seed 0).
DEBUG CH step (0), time spent (5), score (0), selected move count (1), picked move (Queen-2 {null -> Row-0}).
DEBUG CH step (1), time spent (7), score (0), selected move count (3), picked move (Queen-1 {null -> Row-2}).
DEBUG CH step (2), time spent (10), score (0), selected move count (4), picked move (Queen-3 {null -> Row-3}).
DEBUG CH step (3), time spent (12), score (-1), selected move count (4), picked move (Queen-0 {null -> Row-1}).
INFO Construction Heuristic phase (0) ended: step total (4), time spent (12), best score (-1).
DEBUG LS step (0), time spent (19), score (-1), best score (-1), accepted/selected move count (12/12), picked move (Queen-1 {Row-2 -> Row-3}).
DEBUG LS step (1), time spent (24), score (0), new best score (0), accepted/selected move count (9/12), picked move (Queen-3 {Row-3 -> Row-2}).
INFO Local Search phase (1) ended: step total (2), time spent (24), best score (0).
INFO Solving ended: time spent (24), best score (0), average calculate count per second (1625).
All of Planner's logging goes through SLF4J. For example, to use Logback as the SLF4J implementation, add this Maven dependency:
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>1.x</version>
</dependency>
Then configure the logging level on the org.optaplanner package in your logback.xml file:
<configuration> <logger name="org.optaplanner" level="debug"/> ... </configuration>
To use Log4J as the SLF4J implementation instead, add this Maven dependency:
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>1.x</version>
</dependency>
Then configure the logging level on the category org.optaplanner in your log4j.xml file:
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
<category name="org.optaplanner">
<priority value="debug" />
</category>
...
</log4j:configuration>
In a multitenant application, multiple Solver instances might be running at the same time. To separate their logging into distinct files, surround the solve() call with an MDC:
MDC.put("tenant.name",tenantName);
solver.solve(planningProblem);
Solution bestSolution = solver.getBestSolution();
MDC.remove("tenant.name");
Then configure your logger to use a different file per ${tenant.name}. For example in Logback, use a SiftingAppender in logback.xml:
<appender name="fileAppender" class="ch.qos.logback.classic.sift.SiftingAppender">
<discriminator>
<key>tenant.name</key>
<defaultValue>unknown</defaultValue>
</discriminator>
<sift>
<appender name="fileAppender.${tenant.name}" class="...FileAppender">
<file>local/log/optaplanner-${tenant.name}.log</file>
...
</appender>
</sift>
</appender>
4.4.5. Random Number Generator
The Solver reuses a single Random instance to improve reproducibility, performance and the uniform distribution of random values.
To change the seed of that Random instance, specify a randomSeed:
<solver> <randomSeed>0</randomSeed> ... </solver>
To change the pseudorandom number generator implementation, specify a randomType:
<solver> <randomType>MERSENNE_TWISTER</randomType> ... </solver>
- JDK (default): Standard implementation (java.util.Random).
- MERSENNE_TWISTER: Implementation by Commons Math.
- WELL512A, WELL1024A, WELL19937A, WELL19937C, WELL44497A and WELL44497B: Implementations by Commons Math.
Chapter 5. Score Calculation
5.1. Score Terminology
5.1.1. What is a Score?
Every Solution has a score. That score is an objective way to compare two solutions: the solution with the higher score is better. The Solver aims to find the Solution with the highest Score of all possible solutions. The best solution is the Solution with the highest Score that the Solver has encountered during solving, which might be the optimal solution.
Planner cannot automatically know which Solution is best for your business, so you need to tell it how to calculate the score of a given Solution according to your business needs. If you forget or are unable to implement an important constraint, the solution is probably useless:

- Score signum (positive or negative): maximize or minimize a constraint type.
- Score weight: put a cost/profit on a constraint type.
- Score level: prioritize a group of constraint types.
- Pareto scoring
5.1.2. Score Constraint Signum (Positive or Negative)

5.1.3. Score Constraint Weight


For example in cloud balancing, the score weight of using a Computer is the cost of that Computer.
5.1.4. Score Level


For example, if a Computer has 7 CPU too little for its Processes, then it must be weighted 7 times as much as if it had only 1 CPU too little. This way, there is an incentive to move a Process with 6 CPU or less away from that Computer.
5.1.5. Pareto Scoring (AKA Multi-objective Optimization Scoring)

To implement pareto scoring in Planner, implement a custom ScoreDefinition and Score (and replace the BestSolutionRecaller). Future versions will provide out-of-the-box support.
A pareto Score's compareTo method is not transitive because it does a pareto comparison. For example: having 2 apples is greater than having 1 apple; 1 apple is equal to 1 orange; yet, 2 apples are not greater than 1 orange (but actually equal). Pareto comparison violates the contract of the interface java.lang.Comparable's compareTo method, but Planner's systems are pareto comparison safe, unless explicitly stated otherwise in this documentation.
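The non-transitivity can be made concrete with a minimal plain-Java sketch of a pareto dominance check over (apples, oranges) pairs. The helper method and its return convention (1 if the first dominates, -1 if the second dominates, 0 if equal or incomparable) are illustrative, not Planner API:

```java
public class ParetoDemo {
    // Pareto dominance: a dominates b if a is better in at least one
    // objective and b is better in none.
    static int paretoCompare(int[] a, int[] b) {
        boolean aBetterSomewhere = a[0] > b[0] || a[1] > b[1];
        boolean bBetterSomewhere = b[0] > a[0] || b[1] > a[1];
        if (aBetterSomewhere && !bBetterSomewhere) return 1;
        if (bBetterSomewhere && !aBetterSomewhere) return -1;
        return 0; // equal or pareto-incomparable
    }

    public static void main(String[] args) {
        int[] twoApples = {2, 0};
        int[] oneApple = {1, 0};
        int[] oneOrange = {0, 1};
        System.out.println(paretoCompare(twoApples, oneApple));  // 2 apples beat 1 apple
        System.out.println(paretoCompare(oneApple, oneOrange));  // incomparable
        System.out.println(paretoCompare(twoApples, oneOrange)); // still incomparable
    }
}
```

The last two comparisons both return 0 even though the first returned 1, which is exactly the transitivity violation described above.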
5.1.6. Combining Score Techniques

5.1.7. Score interface
Every score implements the Score interface, which naturally extends Comparable:
public interface Score<...> extends Comparable<...> {
...
}
The Score implementation to use depends on your use case. Your score might not efficiently fit in a single long value. Planner has several built-in Score implementations, but you can implement a custom Score too. Most use cases tend to use the built-in HardSoftScore.

The Score implementation (for example HardSoftScore) must be the same throughout a Solver runtime. The Score implementation is configured in the solver configuration as a ScoreDefinition:
<scoreDirectorFactory>
<scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
...
</scoreDirectorFactory>
5.1.8. Avoid Floating Point Numbers in Score Calculation
Avoid using float and double for score calculation. Use BigDecimal instead.
Floating point numbers (float and double) cannot represent a decimal number correctly. For example: a double cannot hold the value 0.05 correctly. Instead, it holds the nearest representable value. Arithmetic (including addition and subtraction) with floating point numbers, especially for planning problems, leads to incorrect decisions:

System.out.println( ((0.01 + 0.02) + 0.03) == (0.01 + (0.02 + 0.03)) ); // returns false
Additionally, floating point addition is not associative, as the example above shows. Decimal numbers (BigDecimal) have none of these problems.
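A minimal demonstration of the difference: building BigDecimal values from String literals keeps them exact (constructing them from a double would reintroduce the representation error), so the same associativity check that fails for double succeeds here.

```java
import java.math.BigDecimal;

public class ExactDecimalDemo {
    public static void main(String[] args) {
        // The equivalent double expression ((0.01 + 0.02) + 0.03) == (0.01 + (0.02 + 0.03))
        // evaluates to false, as shown above. With exact BigDecimal values,
        // addition is associative and the scales match, so equals() is true.
        BigDecimal a = new BigDecimal("0.01");
        BigDecimal b = new BigDecimal("0.02");
        BigDecimal c = new BigDecimal("0.03");
        System.out.println(a.add(b).add(c).equals(a.add(b.add(c))));
    }
}
```

Note that BigDecimal.equals() also compares the scale; since both sums end with scale 2 here, the comparison is safe.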
However, BigDecimal arithmetic is considerably slower than int, long or double arithmetic. In experiments we have seen the average calculation count get divided by 5.
Therefore, it can be worthwhile to multiply all numbers for a single score weight by a fixed factor (for example 1000), so the score weight fits in an int or long.
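A hedged sketch of that fixed-point trick (the fractional weights 7.191 and 2.45 are made up for the example): round each weight to thousandths once, up front, and then do all score arithmetic in long.

```java
public class FixedPointWeight {
    public static void main(String[] args) {
        // Instead of accumulating fractional weights in double (inexact) or
        // BigDecimal (slow), scale every weight by 1000 once and round it,
        // then sum plain long values during score calculation.
        long costA = Math.round(7.191 * 1000); // 7191 thousandths
        long costB = Math.round(2.45 * 1000);  // 2450 thousandths
        System.out.println(costA + costB);
    }
}
```

The score stays exact as long as every weight is rounded once at load time, not on every calculation.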
5.2. Choose a Score Definition
Each Score implementation also has a ScoreDefinition implementation. For example: SimpleScore is defined by SimpleScoreDefinition.
To serialize a Score to a database (with JPA/Hibernate) or to XML/JSON (with XStream/JAXB), see the integration chapter.
5.2.1. SimpleScore
SimpleScore has a single int value, for example -123. It has a single score level.
<scoreDirectorFactory>
<scoreDefinitionType>SIMPLE</scoreDefinitionType>
...
</scoreDirectorFactory>
Variants of this scoreDefinitionType:
- SIMPLE_LONG: Uses SimpleLongScore, which has a long value instead of an int value.
- SIMPLE_DOUBLE: Uses SimpleDoubleScore, which has a double value instead of an int value. Not recommended.
- SIMPLE_BIG_DECIMAL: Uses SimpleBigDecimalScore, which has a BigDecimal value instead of an int value.
5.2.2. HardSoftScore (Recommended)
HardSoftScore has a hard int value and a soft int value, for example -123hard/-456soft. It has 2 score levels (hard and soft).
<scoreDirectorFactory>
<scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
...
</scoreDirectorFactory>
Variants of this scoreDefinitionType:
- HARD_SOFT_LONG: Uses HardSoftLongScore, which has long values instead of int values.
- HARD_SOFT_DOUBLE: Uses HardSoftDoubleScore, which has double values instead of int values. Not recommended.
- HARD_SOFT_BIG_DECIMAL: Uses HardSoftBigDecimalScore, which has BigDecimal values instead of int values.
5.2.3. HardMediumSoftScore
A HardMediumSoftScore has a hard int value, a medium int value and a soft int value, for example -123hard/-456medium/-789soft. It has 3 score levels (hard, medium and soft).
<scoreDirectorFactory>
<scoreDefinitionType>HARD_MEDIUM_SOFT</scoreDefinitionType>
...
</scoreDirectorFactory>
Variants of this scoreDefinitionType:
- HARD_MEDIUM_SOFT_LONG: Uses HardMediumSoftLongScore, which has long values instead of int values.
5.2.4. BendableScore
A BendableScore has a configurable number of score levels. It has an array of hard int values and an array of soft int values. For example, with 2 hard levels and 3 soft levels, the score can be -123/-456/-789/-012/-345.
<scoreDirectorFactory>
<scoreDefinitionType>BENDABLE</scoreDefinitionType>
<bendableHardLevelsSize>2</bendableHardLevelsSize>
<bendableSoftLevelsSize>3</bendableSoftLevelsSize>
...
</scoreDirectorFactory>
Variants of this scoreDefinitionType:
- BENDABLE_LONG: Uses BendableLongScore, which has long values instead of int values.
- BENDABLE_BIG_DECIMAL: Uses BendableBigDecimalScore, which has BigDecimal values instead of int values.
5.2.5. Implementing a Custom Score
The ScoreDefinition interface defines the score representation.
To implement a custom Score, you will also need to implement a custom ScoreDefinition. Extend AbstractScoreDefinition (preferably by copy-pasting HardSoftScoreDefinition) and start from there.
Then hook your custom ScoreDefinition into your SolverConfig.xml:
<scoreDirectorFactory>
<scoreDefinitionClass>...MyScoreDefinition</scoreDefinitionClass>
...
</scoreDirectorFactory>
5.3. Calculate the Score
5.3.1. Score Calculation Types
There are several ways to calculate the Score of a Solution:
- Easy Java score calculation: implement a single Java method
- Incremental Java score calculation: implement multiple Java methods
- Drools score calculation (recommended): implement score rules
Every score calculation type can work with any Score definition (for example HardSoftScore).
Planner will not recalculate the score of a Solution if it can predict it (unless an environmentMode assertion is enabled). For example, after a winning step is done, there is no need to calculate the score because that move was done and undone earlier. As a result, there is no guarantee that changes applied during score calculation are actually done.
5.3.2. Easy Java Score Calculation
- Advantages:
- Plain old Java: no learning curve
- Opportunity to delegate score calculation to an existing code base or legacy system
- Disadvantages:
- Slower and less scalable
- Because there is no incremental score calculation
To use it, implement the one method of the interface EasyScoreCalculator:
public interface EasyScoreCalculator<Sol extends Solution> {
Score calculateScore(Sol solution);
}
For example in n queens:
public class NQueensEasyScoreCalculator implements EasyScoreCalculator<NQueens> {
public SimpleScore calculateScore(NQueens nQueens) {
int n = nQueens.getN();
List<Queen> queenList = nQueens.getQueenList();
int score = 0;
for (int i = 0; i < n; i++) {
for (int j = i + 1; j < n; j++) {
Queen leftQueen = queenList.get(i);
Queen rightQueen = queenList.get(j);
if (leftQueen.getRow() != null && rightQueen.getRow() != null) {
if (leftQueen.getRowIndex() == rightQueen.getRowIndex()) {
score--;
}
if (leftQueen.getAscendingDiagonalIndex() == rightQueen.getAscendingDiagonalIndex()) {
score--;
}
if (leftQueen.getDescendingDiagonalIndex() == rightQueen.getDescendingDiagonalIndex()) {
score--;
}
}
}
}
return SimpleScore.valueOf(score);
}
}
Configure it in your solver configuration:
<scoreDirectorFactory>
<scoreDefinitionType>...</scoreDefinitionType>
<easyScoreCalculatorClass>org.optaplanner.examples.nqueens.solver.score.NQueensEasyScoreCalculator</easyScoreCalculatorClass>
</scoreDirectorFactory>
Alternatively, build an EasyScoreCalculator instance at runtime and set it with the programmatic API:
solverFactory.getSolverConfig().getScoreDirectorFactoryConfig().setEasyScoreCalculator(easyScoreCalculator);
5.3.3. Incremental Java Score Calculation
- Advantages:
- Very fast and scalable
- Currently the fastest if implemented correctly
- Disadvantages:
- Hard to write
- A scalable implementation heavily uses maps, indexes, ... (things the Drools rule engine can do for you)
- You have to learn, design, write and improve all these performance optimizations yourself
- Hard to read
- Regular score constraint changes can lead to a high maintenance cost
Implement all the methods of the interface IncrementalScoreCalculator and extend the class AbstractIncrementalScoreCalculator:
public interface IncrementalScoreCalculator<Sol extends Solution> {
void resetWorkingSolution(Sol workingSolution);
void beforeEntityAdded(Object entity);
void afterEntityAdded(Object entity);
void beforeVariableChanged(Object entity, String variableName);
void afterVariableChanged(Object entity, String variableName);
void beforeEntityRemoved(Object entity);
void afterEntityRemoved(Object entity);
Score calculateScore();
}
public class NQueensAdvancedIncrementalScoreCalculator extends AbstractIncrementalScoreCalculator<NQueens> {
private Map<Integer, List<Queen>> rowIndexMap;
private Map<Integer, List<Queen>> ascendingDiagonalIndexMap;
private Map<Integer, List<Queen>> descendingDiagonalIndexMap;
private int score;
public void resetWorkingSolution(NQueens nQueens) {
int n = nQueens.getN();
rowIndexMap = new HashMap<Integer, List<Queen>>(n);
ascendingDiagonalIndexMap = new HashMap<Integer, List<Queen>>(n * 2);
descendingDiagonalIndexMap = new HashMap<Integer, List<Queen>>(n * 2);
for (int i = 0; i < n; i++) {
rowIndexMap.put(i, new ArrayList<Queen>(n));
ascendingDiagonalIndexMap.put(i, new ArrayList<Queen>(n));
descendingDiagonalIndexMap.put(i, new ArrayList<Queen>(n));
if (i != 0) {
ascendingDiagonalIndexMap.put(n - 1 + i, new ArrayList<Queen>(n));
descendingDiagonalIndexMap.put((-i), new ArrayList<Queen>(n));
}
}
score = 0;
for (Queen queen : nQueens.getQueenList()) {
insert(queen);
}
}
public void beforeEntityAdded(Object entity) {
// Do nothing
}
public void afterEntityAdded(Object entity) {
insert((Queen) entity);
}
public void beforeVariableChanged(Object entity, String variableName) {
retract((Queen) entity);
}
public void afterVariableChanged(Object entity, String variableName) {
insert((Queen) entity);
}
public void beforeEntityRemoved(Object entity) {
retract((Queen) entity);
}
public void afterEntityRemoved(Object entity) {
// Do nothing
}
private void insert(Queen queen) {
Row row = queen.getRow();
if (row != null) {
int rowIndex = queen.getRowIndex();
List<Queen> rowIndexList = rowIndexMap.get(rowIndex);
score -= rowIndexList.size();
rowIndexList.add(queen);
List<Queen> ascendingDiagonalIndexList = ascendingDiagonalIndexMap.get(queen.getAscendingDiagonalIndex());
score -= ascendingDiagonalIndexList.size();
ascendingDiagonalIndexList.add(queen);
List<Queen> descendingDiagonalIndexList = descendingDiagonalIndexMap.get(queen.getDescendingDiagonalIndex());
score -= descendingDiagonalIndexList.size();
descendingDiagonalIndexList.add(queen);
}
}
private void retract(Queen queen) {
Row row = queen.getRow();
if (row != null) {
List<Queen> rowIndexList = rowIndexMap.get(queen.getRowIndex());
rowIndexList.remove(queen);
score += rowIndexList.size();
List<Queen> ascendingDiagonalIndexList = ascendingDiagonalIndexMap.get(queen.getAscendingDiagonalIndex());
ascendingDiagonalIndexList.remove(queen);
score += ascendingDiagonalIndexList.size();
List<Queen> descendingDiagonalIndexList = descendingDiagonalIndexMap.get(queen.getDescendingDiagonalIndex());
descendingDiagonalIndexList.remove(queen);
score += descendingDiagonalIndexList.size();
}
}
public SimpleScore calculateScore() {
return SimpleScore.valueOf(score);
}
}
Configure it in your solver configuration:
<scoreDirectorFactory>
<scoreDefinitionType>...</scoreDefinitionType>
<incrementalScoreCalculatorClass>org.optaplanner.examples.nqueens.solver.score.NQueensAdvancedIncrementalScoreCalculator</incrementalScoreCalculatorClass>
</scoreDirectorFactory>
To explain the score with ScoreDirector.getConstraintMatchTotals(), or to get better output when the IncrementalScoreCalculator is corrupted in FAST_ASSERT or FULL_ASSERT environmentMode, also implement the ConstraintMatchAwareIncrementalScoreCalculator interface:
public interface ConstraintMatchAwareIncrementalScoreCalculator<Sol extends Solution> {
void resetWorkingSolution(Sol workingSolution, boolean constraintMatchEnabled);
Collection<ConstraintMatchTotal> getConstraintMatchTotals();
}
5.3.4. Drools Score Calculation
5.3.4.1. Overview
- Advantages:
- Incremental score calculation for free
- Because most DRL syntax uses forward chaining, it does incremental calculation without any extra code
- Score constraints are isolated as separate rules
- Easy to add or edit existing score rules
- Flexibility to augment your score constraints by
- Defining them in decision tables
- Excel (XLS) spreadsheet
- KIE Workbench WebUI
- Translate them into natural language with DSL
- Store and release in the KIE Workbench repository
- Performance optimizations in future versions for free
- In every release, the Drools rule engine tends to become faster
- Disadvantages:
- DRL learning curve
- Usage of DRL
- Polyglot fear can prohibit the use of a new language such as DRL in some organizations
5.3.4.2. Drools Score Rules Configuration
5.3.4.2.1. A scoreDrl Resource on the Classpath
Add the score rules DRL file in the solver configuration as a <scoreDrl> element:
<scoreDirectorFactory>
<scoreDefinitionType>...</scoreDefinitionType>
<scoreDrl>org/optaplanner/examples/nqueens/solver/nQueensScoreRules.drl</scoreDrl>
</scoreDirectorFactory>
In a typical project (following the Maven directory structure), that DRL file would be located at $PROJECT_DIR/src/main/resources/org/optaplanner/examples/nqueens/solver/nQueensScoreRules.drl (even for a war project).
The <scoreDrl> element expects a classpath resource, as defined by ClassLoader.getResource(String). It does not accept a File, nor a URL, nor a webapp resource. See below to use a File instead.
Add multiple <scoreDrl> elements if the score rules are split across multiple DRL files. Optionally, you can also set KieBase configuration properties:
<scoreDirectorFactory>
...
<scoreDrl>org/optaplanner/examples/nqueens/solver/nQueensScoreRules.drl</scoreDrl>
<kieBaseConfigurationProperties>
<drools.equalityBehavior>...</drools.equalityBehavior>
</kieBaseConfigurationProperties>
</scoreDirectorFactory>
5.3.4.2.2. A scoreDrlFile
To use a File on the local file system, instead of a classpath resource, add the score rules DRL file in the solver configuration as a <scoreDrlFile> element:
<scoreDirectorFactory>
<scoreDefinitionType>...</scoreDefinitionType>
<scoreDrlFile>/home/ge0ffrey/tmp/nQueensScoreRules.drl</scoreDrlFile>
</scoreDirectorFactory>
Add multiple <scoreDrlFile> elements if the score rules are split across multiple DRL files.
5.3.4.2.3. A KieBase (Possibly Defined by Drools Workbench)
To build the KieBase yourself, or to combine Planner with KIE Workbench (formerly known as Guvnor), set the KieBase on the SolverFactory before building the Solver:
solverFactory.getSolverConfig().getScoreDirectorFactoryConfig().setKieBase(kieBase);
When defining the score rules in KIE Workbench:
- Upload the optaplanner-core jar as a POJO model.
- Add a global variable called scoreHolder (see below).
5.3.4.3. Implementing a Score Rule
rule "multipleQueensHorizontal"
when
Queen($id : id, row != null, $i : rowIndex)
Queen(id > $id, rowIndex == $i)
then
scoreHolder.addConstraintMatch(kcontext, -1);
end
This score rule will fire once for every two queens with the same rowIndex. The (id > $id) condition is needed to ensure that, for two queens A and B, it can only fire for (A, B) and not for (B, A), (A, A) or (B, B). Let's take a closer look at this score rule on this solution of 4 queens:

In the solution above, the rule fires for every queen couple in the same row, resulting in a score of -6. An optimal solution of 4 queens has a score of 0.
The kcontext variable is a magic variable in Drools Expert. The scoreHolder's method uses it to do incremental score calculation correctly and to create a ConstraintMatch instance.
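The effect of the (id > $id) condition can be illustrated in plain Java (the ids are invented for the example): it selects each unordered pair of distinct queens exactly once.

```java
import java.util.ArrayList;
import java.util.List;

public class UniquePairDemo {
    public static void main(String[] args) {
        // For ids {0, 1, 2}, the "right id greater than left id" filter
        // yields (0,1), (0,2), (1,2) but never (1,0), (0,0) or (2,2),
        // mirroring the (id > $id) condition in the DRL rule.
        int[] ids = {0, 1, 2};
        List<String> pairs = new ArrayList<>();
        for (int left : ids) {
            for (int right : ids) {
                if (right > left) {
                    pairs.add("(" + left + "," + right + ")");
                }
            }
        }
        System.out.println(pairs);
    }
}
```

Without the condition, each conflicting couple would be counted twice (and each queen matched against itself), doubling and corrupting the penalty.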
5.3.4.4. Weighing Score Rules
A ScoreHolder instance is asserted into the KieSession as a global called scoreHolder. Your score rules need to (directly or indirectly) update that instance.
global SimpleScoreHolder scoreHolder;
rule "multipleQueensHorizontal"
when
Queen($id : id, row != null, $i : rowIndex)
Queen(id > $id, rowIndex == $i)
then
scoreHolder.addConstraintMatch(kcontext, -1);
end
// multipleQueensVertical is obsolete because it is always 0
rule "multipleQueensAscendingDiagonal"
when
Queen($id : id, row != null, $i : ascendingDiagonalIndex)
Queen(id > $id, ascendingDiagonalIndex == $i)
then
scoreHolder.addConstraintMatch(kcontext, -1);
end
rule "multipleQueensDescendingDiagonal"
when
Queen($id : id, row != null, $i : descendingDiagonalIndex)
Queen(id > $id, descendingDiagonalIndex == $i)
then
scoreHolder.addConstraintMatch(kcontext, -1);
end
The score weight can differ per constraint. For example in course scheduling, assigning a Lecture to a Room which is missing 2 seats is weighted equally bad as having 1 isolated Lecture in a Curriculum:
global HardSoftScoreHolder scoreHolder;
// RoomCapacity: For each lecture, the number of students that attend the course must be less or equal
// than the number of seats of all the rooms that host its lectures.
// Each student above the capacity counts as 1 point of penalty.
rule "roomCapacity"
when
$room : Room($capacity : capacity)
$lecture : Lecture(room == $room, studentSize > $capacity, $studentSize : studentSize)
then
scoreHolder.addSoftConstraintMatch(kcontext, ($capacity - $studentSize));
end
// CurriculumCompactness: Lectures belonging to a curriculum should be adjacent
// to each other (i.e., in consecutive periods).
// For a given curriculum we account for a violation every time there is one lecture not adjacent
// to any other lecture within the same day.
// Each isolated lecture in a curriculum counts as 2 points of penalty.
rule "curriculumCompactness"
when
...
then
scoreHolder.addSoftConstraintMatch(kcontext, -2);
end
5.3.5. InitializingScoreTrend
The InitializingScoreTrend specifies how the Score will change as more and more variables are initialized (while the already initialized variables do not change). Some optimization algorithms (such as Construction Heuristics and Exhaustive Search) run faster if they have such information.
- ANY (default): Initializing an extra variable can change the score positively or negatively. Gives no performance gain.
- ONLY_UP (rare): Initializing an extra variable can only change the score positively. Implies that:
  - There are only positive constraints.
  - Initializing the next variable cannot unmatch a positive constraint that was matched by a previously initialized variable.
- ONLY_DOWN: Initializing an extra variable can only change the score negatively. Implies that:
  - There are only negative constraints.
  - Initializing the next variable cannot unmatch a negative constraint that was matched by a previously initialized variable.
Most use cases only have negative constraints, and many of those have an InitializingScoreTrend that only goes down:
<scoreDirectorFactory>
<scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
<scoreDrl>.../cloudBalancingScoreRules.drl</scoreDrl>
<initializingScoreTrend>ONLY_DOWN</initializingScoreTrend>
</scoreDirectorFactory>
Alternatively, the trend can be specified per score level, separated by forward slashes:
<scoreDirectorFactory>
<scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
<scoreDrl>.../cloudBalancingScoreRules.drl</scoreDrl>
<initializingScoreTrend>ONLY_DOWN/ONLY_DOWN</initializingScoreTrend>
</scoreDirectorFactory>
5.3.6. Invalid Score Detection
Put the environmentMode in FULL_ASSERT (or FAST_ASSERT) to detect corruption in the incremental score calculation. For more information, see the section about environmentMode. However, that will not verify that your score calculator implements your score constraints as your business actually desires.
A piece of incremental score calculator code can be difficult to write and to review. Assert its correctness by using a different implementation (for example an EasyScoreCalculator) to do the assertions triggered by the environmentMode. Just configure the different implementation as an assertionScoreDirectorFactory:
<environmentMode>FAST_ASSERT</environmentMode>
...
<scoreDirectorFactory>
<scoreDefinitionType>...</scoreDefinitionType>
<scoreDrl>org/optaplanner/examples/nqueens/solver/nQueensScoreRules.drl</scoreDrl>
<assertionScoreDirectorFactory>
<easyScoreCalculatorClass>org.optaplanner.examples.nqueens.solver.score.NQueensEasyScoreCalculator</easyScoreCalculatorClass>
</assertionScoreDirectorFactory>
</scoreDirectorFactory>
This way, the scoreDrl will be validated by the EasyScoreCalculator.
5.4. Score Calculation Performance Tricks
5.4.1. Overview
The Solver will normally spend most of its execution time running the score calculation (which is called in its deepest loops). Faster score calculation will return the same solution in less time with the same algorithm, which normally means a better solution in equal time.
5.4.2. Average Calculation Count Per Second
After solving a problem, the Solver will log the average calculation count per second. This is a good measurement of Score calculation performance, even though it is affected by non-score-calculation execution time. It depends on the problem scale of the problem dataset. Normally, even for high-scale problems, it is higher than 1000, except when you are using an EasyScoreCalculator.
5.4.3. Incremental Score Calculation (with Deltas)
When the Solution changes, incremental score calculation (also known as delta-based score calculation) calculates the delta with the previous state to find the new Score, instead of recalculating the entire score on every solution evaluation.
For example, when a single queen A moves from row 1 to 2, it won't bother to check if queen B and C can attack each other, since neither of them changed.
Figure 5.1. Incremental Score Calculation for the 4 Queens Puzzle

5.4.4. Avoid Calling Remote Services During Score Calculation
Do not call remote services in your score calculation (except if you're bridging an EasyScoreCalculator to a legacy system). The network latency will kill your score calculation performance. Cache the results of those remote services if possible.
If some parts of a constraint can be calculated once, when the Solver starts, and never change during solving, then turn them into cached problem facts.
5.4.5. Pointless Constraints
Do not implement constraints that can never be broken. For example, in N queens there is no need for a column-conflict constraint, because a Queen's column never changes and every Solution starts with each Queen on a different column.
5.4.6. Built-in Hard Constraint
Instead of implementing a hard constraint, it can sometimes be built into the value range. For example: Lecture A should never be assigned to Room X, but if the ValueRangeProvider is on the Solution, the Solver will often try to assign it to Room X anyway (only to find out that it breaks a hard constraint). Use a ValueRangeProvider on the planning entity or filtered selection to define that Lecture A should only be assigned a Room other than X.
- Many optimization algorithms rely on the freedom to break hard constraints when changing planning entities, to get out of local optima.
- Both implementation approaches have limitations (feature compatibility, disabling automatic performance optimizations, ...), as explained in their documentation.
5.4.7. Other Score Calculation Performance Tricks
- Verify that your score calculation happens in the correct Number type. If you're making the sum of int values, don't let Drools sum it in a double, which takes longer.
- For optimal performance, always use server mode (java -server). We have seen performance increases of 50% by turning on server mode.
- For optimal performance, use the latest Java version. For example, in the past we have seen performance increases of 30% by switching from Java 1.5 to 1.6.
- Always remember that premature optimization is the root of all evil. Make sure your design is flexible enough to allow configuration based tweaking.
5.4.8. Score Trap
- Suppose you need 2 doctors at each table, but you're only moving 1 doctor at a time. The solver then has no incentive to move a doctor to a table with no doctors. Punish a table with no doctors more than a table with only 1 doctor in that score constraint in the score function.
- Suppose 2 exams need to be conducted at the same time, but you're only moving 1 exam at a time. The solver would have to move one of those exams to another timeslot without moving the other in the same move. Add a coarse-grained move that moves both exams at the same time.

- Improve the score constraint to make a distinction in the score weight. For example: penalize -1hard for every missing CPU, instead of just -1hard if any CPU is missing.
- If changing the score constraint is not allowed from the business perspective, add a lower score level with a score constraint that makes such a distinction. For example: penalize -1subsoft for every missing CPU, on top of -1hard if any CPU is missing. The business ignores the subsoft score level.
- Add coarse-grained moves and union select them with the existing fine-grained moves. A coarse-grained move effectively does multiple moves to directly get out of a score trap with a single move. For example: move multiple items from the same container to another container.
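The difference between a trapped and a graded penalty can be sketched in plain Java. The penalty functions below are hypothetical illustrations, not OptaPlanner API:

```java
public class ScoreTrapDemo {

    // Score trap: a flat penalty gives the solver no gradient to follow,
    // because partially fixing the problem doesn't improve the score.
    static int trappedPenalty(int missingCpu) {
        return missingCpu > 0 ? -1 : 0; // -1hard if any CPU is missing
    }

    // Fix: penalize per missing CPU, so every partial improvement is rewarded.
    static int gradedPenalty(int missingCpu) {
        return -missingCpu; // -1hard for every missing CPU
    }

    public static void main(String[] args) {
        // Reducing missing CPUs from 3 to 1 improves the graded score,
        // but leaves the trapped score unchanged:
        System.out.println(trappedPenalty(3) + " " + trappedPenalty(1)); // -1 -1
        System.out.println(gradedPenalty(3) + " " + gradedPenalty(1));   // -3 -1
    }
}
```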
5.4.9. stepLimit Benchmark
5.4.10. Fairness Score Constraints
- Fairly distribute the workload amongst the employees, to avoid envy.
- Evenly distribute the workload amongst assets, to improve reliability.
To implement such a fairness or load balancing constraint, measure the load per group: for example, count the workload w of each employee and subtract w² from the score.


A perfectly balanced solution would ideally have a 0 score, instead of the distracting -34soft in the image above (for the last solution, which is almost perfectly balanced). To nullify this, either add the average multiplied by the number of entities to the score or instead show the variance or standard deviation in the UI.
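The squared-workload trick can be sketched in a few lines of plain Java (hypothetical method name, not OptaPlanner API). Of two solutions with the same total workload, the more evenly distributed one loses fewer points:

```java
public class FairnessDemo {

    // Subtract w^2 per employee: by convexity, balanced workloads
    // produce a smaller total penalty than skewed ones.
    static int squaredLoadScore(int[] workloads) {
        int score = 0;
        for (int w : workloads) {
            score -= w * w;
        }
        return score;
    }

    public static void main(String[] args) {
        // Same total workload (12), different fairness:
        System.out.println(squaredLoadScore(new int[]{4, 4, 4}));  // -48 (fair)
        System.out.println(squaredLoadScore(new int[]{10, 1, 1})); // -102 (unfair)
    }
}
```

As the text notes, the perfectly fair case still scores below 0; correct for that in the UI (variance/standard deviation) rather than in the constraint itself.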
5.5. Explaining the Score: Using Score Calculation Outside the Solver
Use the ScoreDirectorFactory of the Solver to build a separate ScoreDirector for that web UI:
ScoreDirectorFactory scoreDirectorFactory = solver.getScoreDirectorFactory();
ScoreDirector guiScoreDirector = scoreDirectorFactory.buildScoreDirector();
Then use it to calculate the Score of a Solution:

guiScoreDirector.setWorkingSolution(solution);
Score score = guiScoreDirector.calculateScore();
To explain the Score, get the ConstraintMatch objects from the ScoreDirector:
for (ConstraintMatchTotal constraintMatchTotal : guiScoreDirector.getConstraintMatchTotals()) {
String constraintName = constraintMatchTotal.getConstraintName();
Number weightTotal = constraintMatchTotal.getWeightTotalAsNumber();
for (ConstraintMatch constraintMatch : constraintMatchTotal.getConstraintMatchSet()) {
List<Object> justificationList = constraintMatch.getJustificationList();
Number weight = constraintMatch.getWeightAsNumber();
...
}
}

Chapter 6. Optimization Algorithms
6.1. Search Space Size in the Real World
- 4 queens has 256 possible solutions (4^4) and 2 optimal solutions.
- 5 queens has 3125 possible solutions (5^5) and 1 optimal solution.
- 8 queens has 16777216 possible solutions (8^8) and 92 optimal solutions.
- 64 queens has more than 10^115 possible solutions (64^64).
- Most real-life planning problems have an incredible number of possible solutions and only 1 or a few optimal solutions.


- The optimal solution might be infeasible.
- There are many types of hard constraints which cannot be incorporated in the formula practically. For example in Cloud Balancing, try incorporating the CPU capacity constraint in the formula.
6.2. Does Planner Find the Optimal Solution?
- Scale out: Large production datasets must not crash and have good results too.
- Optimize the right problem: The constraints must match the actual business needs.
- Available time: The solution must be found in time, before it becomes useless to execute.
- Reliability: Every dataset must have at least a decent result (better than a human planner).
6.3. Architecture Overview
- A rule engine such as Drools Expert is great for calculating the score of a solution of a planning problem. It makes it easy and scalable to add additional soft or hard constraints such as "a teacher shouldn't teach more than 7 hours a day". It does delta-based score calculation without any extra code. However, it is not suitable for actually finding new solutions.
- An optimization algorithm is great at finding new improving solutions for a planning problem, without necessarily brute-forcing every possibility. However it needs to know the score of a solution and offers no support in calculating that score efficiently.

6.4. Optimization Algorithms Overview

Table 6.1. Optimization Algorithms Overview
| Algorithm | Scalable? | Optimal? | Easy to use? | Tweakable? | Requires CH? |
|---|---|---|---|---|---|
| Exhaustive Search (ES) | |||||
| Brute Force | 0/5 | 5/5 | 5/5 | 0/5 | No |
| Branch And Bound | 0/5 | 5/5 | 4/5 | 2/5 | No |
| Construction heuristics (CH) | |||||
| First Fit | 5/5 | 1/5 | 5/5 | 1/5 | No |
| First Fit Decreasing | 5/5 | 2/5 | 4/5 | 2/5 | No |
| Weakest Fit | 5/5 | 2/5 | 4/5 | 2/5 | No |
| Weakest Fit Decreasing | 5/5 | 2/5 | 4/5 | 2/5 | No |
| Strongest Fit | 5/5 | 2/5 | 4/5 | 2/5 | No |
| Strongest Fit Decreasing | 5/5 | 2/5 | 4/5 | 2/5 | No |
| Cheapest Insertion | 3/5 | 2/5 | 5/5 | 2/5 | No |
| Regret Insertion | 3/5 | 2/5 | 5/5 | 2/5 | No |
| Metaheuristics (MH) | |||||
| Local Search | |||||
| Hill Climbing | 5/5 | 2/5 | 4/5 | 3/5 | Yes |
| Tabu Search | 5/5 | 4/5 | 3/5 | 5/5 | Yes |
| Simulated Annealing | 5/5 | 4/5 | 2/5 | 5/5 | Yes |
| Late Acceptance | 5/5 | 4/5 | 3/5 | 5/5 | Yes |
| Step Counting Hill Climbing | 5/5 | 4/5 | 3/5 | 5/5 | Yes |
| Evolutionary Algorithms | |||||
| Evolutionary Strategies | 4/5 | 3/5 | 2/5 | 5/5 | Yes |
| Genetic Algorithms | 4/5 | 3/5 | 2/5 | 5/5 | Yes |
6.5. Which Optimization Algorithms Should I Use?
- First Fit Decreasing
- Late Acceptance. A Late Acceptance size of 400 usually works well.
- First Fit Decreasing
- Tabu Search. An entity tabu size of 7 usually works well.
- First Fit Decreasing
- Late Acceptance (relatively long time)
- Tabu Search (relatively short time)
6.6. Power tweaking or default parameter values
6.7. Solver Phase
Solver can use multiple optimization algorithms in sequence. Each optimization algorithm is represented by a solver Phase. There is never more than 1 Phase solving at the same time.
Phase implementations can combine techniques from multiple optimization algorithms, but it is still just 1 Phase. For example: a Local Search Phase can do Simulated Annealing with entity Tabu.
<solver>
...
<constructionHeuristic>
... <!-- First phase: First Fit Decreasing -->
</constructionHeuristic>
<localSearch>
... <!-- Second phase: Late Acceptance -->
</localSearch>
<localSearch>
... <!-- Third phase: Tabu Search -->
</localSearch>
</solver>

When the first Phase terminates, the second Phase starts, and so on. When the last Phase terminates, the Solver terminates. Usually, a Solver will first run a construction heuristic and then run 1 or multiple metaheuristics:

Some phases (especially construction heuristics) terminate automatically. Other phases (especially metaheuristics) only stop if the Phase is configured to terminate:
<solver>
...
<termination><!-- Solver termination -->
<secondsSpentLimit>90</secondsSpentLimit>
</termination>
<localSearch>
<termination><!-- Phase termination -->
<secondsSpentLimit>60</secondsSpentLimit><!-- Give the next phase a chance to run too, before the Solver terminates -->
</termination>
...
</localSearch>
<localSearch>
...
</localSearch>
</solver>

If the Solver terminates (before the last Phase terminates itself), the current phase is terminated and all subsequent phases won't run.
6.8. Scope Overview

6.9. Termination
Solver can be terminated synchronously by up-front configuration or asynchronously from another thread.
If no termination is configured, the Solver will run forever, until terminateEarly() is called from another thread. This is especially common during real-time planning.
Configure a Termination on a Solver or a Phase to specify when it needs to stop. You can implement your own Termination, but the built-in implementations should suffice for most needs. Every Termination can calculate a time gradient (needed for some optimization algorithms), which is a ratio between the time already spent solving and the estimated entire solving time of the Solver or Phase.
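The time gradient described above is, in essence, a clamped ratio. A minimal sketch (not OptaPlanner's actual implementation; the method name is hypothetical):

```java
public class TimeGradientDemo {

    // Ratio of time already spent to the total solving budget,
    // clamped to the range [0.0, 1.0].
    static double timeGradient(long millisSpent, long millisLimit) {
        if (millisSpent >= millisLimit) {
            return 1.0; // budget exhausted
        }
        return (double) millisSpent / (double) millisLimit;
    }

    public static void main(String[] args) {
        System.out.println(timeGradient(30_000, 60_000)); // 0.5, halfway through
        System.out.println(timeGradient(90_000, 60_000)); // 1.0, clamped
    }
}
```

Algorithms such as Simulated Annealing use this gradient to cool down their acceptance criteria as the deadline approaches.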
6.9.1. TimeMillisSpentTermination
<termination>
<millisecondsSpentLimit>500</millisecondsSpentLimit>
</termination>

<termination>
<secondsSpentLimit>10</secondsSpentLimit>
</termination>

<termination>
<minutesSpentLimit>5</minutesSpentLimit>
</termination>

<termination>
<hoursSpentLimit>1</hoursSpentLimit>
</termination>

<termination>
<daysSpentLimit>2</daysSpentLimit>
</termination>

<termination>
<minutesSpentLimit>150</minutesSpentLimit>
</termination>

<termination>
<hoursSpentLimit>2</hoursSpentLimit>
<minutesSpentLimit>30</minutesSpentLimit>
</termination>

This Termination will most likely sacrifice perfect reproducibility (even with environmentMode REPRODUCIBLE) because the available CPU time differs frequently between runs:
- The available CPU time influences the number of steps that can be taken, which might be a few more or less.
- The
Terminationmight produce slightly different time gradient values, which will send time gradient based algorithms (such as Simulated Annealing) on a radically different path.
6.9.2. UnimprovedTimeMillisSpentTermination
<localSearch>
<termination>
<unimprovedMillisecondsSpentLimit>500</unimprovedMillisecondsSpentLimit>
</termination>
</localSearch>

<localSearch>
<termination>
<unimprovedSecondsSpentLimit>10</unimprovedSecondsSpentLimit>
</termination>
</localSearch>

<localSearch>
<termination>
<unimprovedMinutesSpentLimit>5</unimprovedMinutesSpentLimit>
</termination>
</localSearch>

<localSearch>
<termination>
<unimprovedHoursSpentLimit>1</unimprovedHoursSpentLimit>
</termination>
</localSearch>

<localSearch>
<termination>
<unimprovedDaysSpentLimit>1</unimprovedDaysSpentLimit>
</termination>
</localSearch>

It is often better to configure this Termination on a specific Phase (such as <localSearch>), instead of on the Solver itself.
This Termination will most likely sacrifice perfect reproducibility (even with environmentMode REPRODUCIBLE) because the available CPU time differs frequently between runs:
- The available CPU time influences the number of steps that can be taken, which might be a few more or less.
- The
Terminationmight produce slightly different time gradient values, which will send time gradient based algorithms (such as Simulated Annealing) on a radically different path.
6.9.3. BestScoreTermination
Use this Termination if you know the perfect score, for example for 4 queens (which uses a SimpleScore):
<termination>
<bestScoreLimit>0</bestScoreLimit>
</termination>

<termination>
<bestScoreLimit>0hard/-5000soft</bestScoreLimit>
</termination>

<termination>
<bestScoreLimit>0/0/0/-5000</bestScoreLimit>
</termination>

To terminate once a feasible solution has been reached, this Termination isn't practical because it requires a bestScoreLimit such as 0hard/-2147483648soft. Instead, use the next termination.
6.9.4. BestScoreFeasibleTermination
Terminates as soon as a feasible solution has been reached. This requires that the Score implementation implements FeasibilityScore.
<termination>
<bestScoreFeasible>true</bestScoreFeasible>
</termination>

This Termination is usually combined with other terminations.
6.9.5. StepCountTermination
<localSearch>
<termination>
<stepCountLimit>100</stepCountLimit>
</termination>
</localSearch>

This Termination can only be used for a Phase (such as <localSearch>), not for the Solver itself.
6.9.6. UnimprovedStepCountTermination
<localSearch>
<termination>
<unimprovedStepCountLimit>100</unimprovedStepCountLimit>
</termination>
</localSearch>

This Termination can only be used for a Phase (such as <localSearch>), not for the Solver itself.
6.9.7. Combining Multiple Terminations
Terminations can be combined. For example, to terminate after 100 steps or if a score of 0 has been reached:
<termination>
<terminationCompositionStyle>OR</terminationCompositionStyle>
<stepCountLimit>100</stepCountLimit>
<bestScoreLimit>0</bestScoreLimit>
</termination>

Alternatively, to terminate only once the score reaches at least -100 and there have been no improvements in 5 steps:
<termination>
<terminationCompositionStyle>AND</terminationCompositionStyle>
<unimprovedStepCountLimit>5</unimprovedStepCountLimit>
<bestScoreLimit>-100</bestScoreLimit>
</termination>

6.9.8. Asynchronous Termination from Another Thread
Sometimes you will want to terminate a Solver early from another thread, for example because of a user action or a server restart. This cannot be configured by a Termination as it's impossible to predict when and if it will occur. Therefore the Solver interface has these 2 thread-safe methods:
public interface Solver {
// ...
boolean terminateEarly();
boolean isTerminateEarly();
}

If you call the terminateEarly() method from another thread, the Solver will terminate at its earliest convenience and the solve(Solution) method will return in the original Solver thread.
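The threading contract can be mimicked with a toy solver built around a volatile flag (a sketch of the pattern, not OptaPlanner's implementation):

```java
public class ToySolver {

    // volatile: the flag write in one thread is visible to the solving thread.
    private volatile boolean terminatedEarly = false;

    public boolean terminateEarly() {
        terminatedEarly = true;
        return true;
    }

    public boolean isTerminateEarly() {
        return terminatedEarly;
    }

    // Stand-in for Solver.solve(Solution): loops until asked to stop.
    public void solve() throws InterruptedException {
        while (!terminatedEarly) {
            Thread.sleep(1); // pretend to take an optimization step
        }
    }

    public static void main(String[] args) throws Exception {
        ToySolver solver = new ToySolver();
        Thread solverThread = new Thread(() -> {
            try {
                solver.solve();
            } catch (InterruptedException ignored) {
            }
        });
        solverThread.start();
        solver.terminateEarly();  // called from another thread (here: main)
        solverThread.join(1000);  // solve() returns in the solver thread
        System.out.println(solver.isTerminateEarly()); // true
    }
}
```

The real Solver checks its termination flag between steps, so "earliest convenience" means the end of the current step, not an instant abort.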
6.10. SolverEventListener
Each time a new best solution is found, the Solver fires a BestSolutionChangedEvent, in the solver's thread.
To listen to such events, add a SolverEventListener to the Solver:
public interface Solver {
// ...
void addEventListener(SolverEventListener<? extends Solution> eventListener);
void removeEventListener(SolverEventListener<? extends Solution> eventListener);
}

The BestSolutionChangedEvent's newBestSolution might not be initialized or feasible. Use the methods on BestSolutionChangedEvent to detect such cases:
solver.addEventListener(new SolverEventListener<CloudBalance>() {
public void bestSolutionChanged(BestSolutionChangedEvent<CloudBalance> event) {
// Ignore invalid solutions
if (event.isNewBestSolutionInitialized()
&& event.getNewBestSolution().getScore().isFeasible()) {
...
}
}
});

The bestSolutionChanged() method is called in the solver's thread, as part of Solver.solve(). So it should return quickly to avoid slowing down the solving.
6.11. Custom Solver Phase
Between phases or before the first phase, you might want to run a custom action on the Solution to get a better score. Yet you'll still want to reuse the score calculation. For example, to implement a custom construction heuristic without implementing an entire Phase.
(Implementing an entire Phase yourself is hard, because it needs to be Termination aware and support partially initialized solutions too.)
Implement the CustomPhaseCommand interface:

public interface CustomPhaseCommand {
    void changeWorkingSolution(ScoreDirector scoreDirector);
}

For example:

public class ToOriginalMachineSolutionInitializer implements CustomPhaseCommand {
public void changeWorkingSolution(ScoreDirector scoreDirector) {
MachineReassignment machineReassignment = (MachineReassignment) scoreDirector.getWorkingSolution();
for (MrProcessAssignment processAssignment : machineReassignment.getProcessAssignmentList()) {
scoreDirector.beforeVariableChanged(processAssignment, "machine");
processAssignment.setMachine(processAssignment.getOriginalMachine());
scoreDirector.afterVariableChanged(processAssignment, "machine");
scoreDirector.triggerVariableListeners();
}
}
}

Any change on the planning entities in a CustomPhaseCommand must be notified to the ScoreDirector.
Do not change any of the problem facts in a CustomPhaseCommand. That would corrupt the Solver because any previous score or solution was for a different problem. To do that, read about repeated planning and do it with a ProblemFactChange instead.
Configure your CustomPhaseCommand like this:
<solver>
...
<customPhase>
<customPhaseCommandClass>org.optaplanner.examples.machinereassignment.solver.solution.initializer.ToOriginalMachineSolutionInitializer</customPhaseCommandClass>
</customPhase>
... <!-- Other phases -->
</solver>customPhaseCommandClass instances to run them in sequence.
If the changes of a CustomPhaseCommand don't result in a better score, the best solution won't be changed (so effectively nothing will have changed for the next Phase or CustomPhaseCommand). To force such changes anyway, use forceUpdateBestSolution:
<customPhase>
<customPhaseCommandClass>...MyUninitializer</customPhaseCommandClass>
<forceUpdateBestSolution>true</forceUpdateBestSolution>
</customPhase>

If the Solver or a Phase wants to terminate while a CustomPhaseCommand is still running, it will wait to terminate until the CustomPhaseCommand is done, however long that takes. The built-in solver phases don't suffer from this problem.
Chapter 7. Move and Neighborhood Selection
7.1. Move and Neighborhood Introduction
7.1.1. What is a Move?
A Move is a change (or set of changes) from a solution A to a solution B. For example, the move below changes queen C from row 0 to row 2:

The solutions reachable in a single Move are called the neighbors of the original solution. Although a single move can change multiple queens, the neighbors of a solution should always be a very small subset of all possible solutions. For example, on that original solution, these are all possible changeMoves:

If we ignore the 4 changeMoves that have no impact and are therefore not doable, we can see that the number of moves is n * (n - 1) = 12. This is far less than the number of possible solutions, which is n ^ n = 256. As the problem scales out, the number of possible moves increases far less than the number of possible solutions.
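The arithmetic can be checked in a few lines (a standalone illustration; the method names are hypothetical):

```java
public class MoveCountDemo {

    // Doable ChangeMoves in n queens: each of the n queens
    // can move to any of the n - 1 other rows.
    static long changeMoveCount(int n) {
        return (long) n * (n - 1);
    }

    // Candidate solutions: each of the n queens on any of the n rows.
    static double solutionCount(int n) {
        return Math.pow(n, n);
    }

    public static void main(String[] args) {
        System.out.println(changeMoveCount(4)); // 12
        System.out.println(solutionCount(4));   // 256.0
        // The gap widens dramatically as n grows:
        System.out.println(changeMoveCount(8)); // 56
        System.out.println(solutionCount(8));   // 1.6777216E7
    }
}
```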
Yet, in 4 changeMoves or less we can reach any solution. For example, we can reach a very different solution in 3 changeMoves:

An optimization algorithm is not limited to changeMoves. Many move types are included out-of-the-box, but you can also implement custom moves.
A Move can affect multiple entities or even create/delete entities. But it must not change the problem facts.
All optimization algorithms use Moves to transition from one solution to a neighbor solution. Therefore, all the optimization algorithms are confronted with Move selection: the craft of creating and iterating moves efficiently and the art of finding the most promising subset of random moves to evaluate first.
7.1.2. What is a MoveSelector?
A MoveSelector's main function is to create an Iterator<Move> when needed. An optimization algorithm will iterate through a subset of those moves.
This example configures a changeMoveSelector for the optimization algorithm Local Search:
<localSearch>
<changeMoveSelector/>
...
</localSearch>

Out of the box, this works: all properties of the changeMoveSelector are defaulted sensibly (unless that fails fast due to ambiguity). On the other hand, the configuration can be customized significantly for specific use cases. For example: you might want to configure a filter to discard pointless moves.
7.1.3. Subselecting of Entities, Values and Other Moves
To create a Move, a MoveSelector needs to select 1 or more planning entities and/or planning values to move. Just like MoveSelectors, EntitySelectors and ValueSelectors need to support a similar feature set (such as scalable just-in-time selection). Therefore, they all implement a common interface Selector and they are configured similarly.
A MoveSelector is often composed out of EntitySelectors, ValueSelectors or even other MoveSelectors, which can be configured individually if desired:
<unionMoveSelector>
<changeMoveSelector>
<entitySelector>
...
</entitySelector>
<valueSelector>
...
</valueSelector>
...
</changeMoveSelector>
<swapMoveSelector>
...
</swapMoveSelector>
</unionMoveSelector>

Together, this forms a Selector tree:

It's the top-level MoveSelector that is injected into the optimization algorithm implementation, to be (partially) iterated in every step.
7.2. Generic MoveSelectors
7.2.1. changeMoveSelector
ChangeMove selects 1 planning entity and 1 planning value and assigns the entity's variable to that value.

<changeMoveSelector/>
If there are multiple planning variables, this simple configuration automatically unfolds into a union of ChangeMove selectors for every planning variable.
<changeMoveSelector>
... <!-- Normal selector properties -->
<entitySelector>
<entityClass>...Lecture</entityClass>
...
</entitySelector>
<valueSelector>
<variableName>room</variableName>
...
<nearbySelection>...</nearbySelection>
</valueSelector>
</changeMoveSelector>

A ChangeMove is the finest grained move.
Almost every moveSelector configuration injected into a metaheuristic algorithm should include a changeMoveSelector or a custom implementation. This guarantees that every possible Solution can be reached through applying a number of moves in sequence (not taking score traps into account). Of course, normally it is unioned with other, more coarse grained move selectors.
7.2.2. swapMoveSelector
SwapMove selects 2 different planning entities and swaps the planning values of all their planning variables.

Although a SwapMove on a single variable is essentially just 2 ChangeMoves, it's often the winning step where the first of the 2 ChangeMoves would not be the winning step, because it would leave the solution in a state with broken hard constraints. For example: swapping the rooms of 2 lectures doesn't bring the solution into an intermediate state where both lectures are in the same room, which would break a hard constraint.
<swapMoveSelector/>
If there are multiple entity classes, this simple configuration automatically unfolds into a union of SwapMove selectors for every entity class.
<swapMoveSelector>
... <!-- Normal selector properties -->
<entitySelector>
<entityClass>...Lecture</entityClass>
...
</entitySelector>
<secondaryEntitySelector>
<entityClass>...Lecture</entityClass>
...
<nearbySelection>...</nearbySelection>
</secondaryEntitySelector>
<variableNameInclude>room</variableNameInclude>
<variableNameInclude>...</variableNameInclude>
</swapMoveSelector>

The secondaryEntitySelector is rarely needed: if it is not specified, entities from the same entitySelector are swapped.
If one or more variableNameInclude properties are specified, not all planning variables will be swapped, but only those specified. For example, for course scheduling, specifying only variableNameInclude room will make it swap only room, not period.
7.2.3. pillarChangeMoveSelector
PillarChangeMove selects 1 entity pillar (or subset of those) and changes the value of 1 variable (which is the same for all entities) to another value.

<pillarChangeMoveSelector/>
<pillarChangeMoveSelector>
... <!-- Normal selector properties -->
<pillarSelector>
<entitySelector>
<entityClass>...Lecture</entityClass>
...
</entitySelector>
<subPillarEnabled>true</subPillarEnabled>
<minimumSubPillarSize>1</minimumSubPillarSize>
<maximumSubPillarSize>1000</maximumSubPillarSize>
</pillarSelector>
<valueSelector>
<variableName>room</variableName>
...
</valueSelector>
</pillarChangeMoveSelector>

For example, [A, D] is one of the many sub pillars. If subPillarEnabled (defaults to true) is false, no sub pillars are selected. If sub pillars are enabled, the pillar itself is also included and the properties minimumSubPillarSize (defaults to 1) and maximumSubPillarSize (defaults to infinity) limit the size of the selected (sub) pillar.
The number of sub pillars is exponential in the pillar size; for example, a pillar of size 32 has (2^32 - 1) sub pillars. Therefore a pillarSelector only supports JIT random selection (which is the default).
7.2.4. pillarSwapMoveSelector
PillarSwapMove selects 2 different entity pillars and swaps the values of all their variables for all their entities.

<pillarSwapMoveSelector/>
<pillarSwapMoveSelector>
... <!-- Normal selector properties -->
<pillarSelector>
<entitySelector>
<entityClass>...Lecture</entityClass>
...
</entitySelector>
<subPillarEnabled>true</subPillarEnabled>
<minimumSubPillarSize>1</minimumSubPillarSize>
<maximumSubPillarSize>1000</maximumSubPillarSize>
</pillarSelector>
<secondaryPillarSelector>
<entitySelector>
...
</entitySelector>
...
</secondaryPillarSelector>
<variableNameInclude>room</variableNameInclude>
<variableNameInclude>...</variableNameInclude>
</pillarSwapMoveSelector>

The secondaryPillarSelector is rarely needed: if it is not specified, entities from the same pillarSelector are swapped.
7.2.5. tailChainSwapMoveSelector or 2-opt (chained variables only)
A tailChainSwapMove selects a tail chain and swaps it with the tail chain of another planning value (in a different or the same anchor chain). If the targeted planning value doesn't have a tail chain, it swaps with nothing (resulting in a change-like move). If it occurs within the same anchor chain, a partial chain reverse occurs. In academic papers, this is often called a 2-opt move.
<tailChainSwapMoveSelector/>
<tailChainSwapMoveSelector>
... <!-- Normal selector properties -->
<entitySelector>
<entityClass>...Customer</entityClass>
...
</entitySelector>
<valueSelector>
<variableName>previousStandstill</variableName>
...
<nearbySelection>...</nearbySelection>
</valueSelector>
</tailChainSwapMoveSelector>

The entitySelector selects the start of the tail chain that is being moved. The valueSelector selects to where that tail chain is moved. If it has a tail chain itself, that is moved to the location of the original tail chain. It uses a valueSelector instead of a secondaryEntitySelector to be able to include all possible 2-opt moves (such as moving to the end of a tail) and to work correctly with nearby selection (because of asymmetric distances, and also because the swapped entity distance gives an incorrect selection probability).
Although the subChainChangeMoveSelector and subChainSwapMoveSelector include almost every possible tailChainSwapMove, experiments have shown that focusing on tailChainSwapMoves increases efficiency.
7.2.6. subChainChangeMoveSelector (chained variables only)
subChainChangeMoveSelector selects a subChain and moves it to another place (in a different or the same anchor chain).
<subChainChangeMoveSelector/>
<subChainChangeMoveSelector>
... <!-- Normal selector properties -->
<entityClass>...Customer</entityClass>
<subChainSelector>
<valueSelector>
<variableName>previousStandstill</variableName>
...
</valueSelector>
<minimumSubChainSize>2</minimumSubChainSize>
<maximumSubChainSize>40</maximumSubChainSize>
</subChainSelector>
<valueSelector>
<variableName>previousStandstill</variableName>
...
</valueSelector>
<selectReversingMoveToo>true</selectReversingMoveToo>
</subChainChangeMoveSelector>subChainSelector selects a number of entities, no less than minimumSubChainSize (defaults to 1) and no more than maximumSubChainSize (defaults to infinity).
If the minimumSubChainSize is 1 (which is the default), this selector might select the same move as a ChangeMoveSelector, at a far lower selection probability (because each move type has the same selection chance by default (not every move instance) and there are far more SubChainChangeMove instances than ChangeMove instances). However, don't just remove the ChangeMoveSelector, because experiments show that it's good to focus on ChangeMoves.
In a SubChainSwapMoveSelector, setting minimumSubChainSize prevents swapping a subchain of size 1 with a subchain of at least size 2.
The selectReversingMoveToo property (defaults to true) enables selecting the reverse of every subchain too.
7.2.7. subChainSwapMoveSelector (chained variables only)
subChainSwapMoveSelector selects 2 different subChains and moves them to another place in a different or the same anchor chain.
<subChainSwapMoveSelector/>
<subChainSwapMoveSelector>
... <!-- Normal selector properties -->
<entityClass>...Customer</entityClass>
<subChainSelector>
<valueSelector>
<variableName>previousStandstill</variableName>
...
</valueSelector>
<minimumSubChainSize>2</minimumSubChainSize>
<maximumSubChainSize>40</maximumSubChainSize>
</subChainSelector>
<secondarySubChainSelector>
<valueSelector>
<variableName>previousStandstill</variableName>
...
</valueSelector>
<minimumSubChainSize>2</minimumSubChainSize>
<maximumSubChainSize>40</maximumSubChainSize>
</secondarySubChainSelector>
<selectReversingMoveToo>true</selectReversingMoveToo>
</subChainSwapMoveSelector>

The secondarySubChainSelector is rarely needed: if it is not specified, entities from the same subChainSelector are swapped.
7.3. Combining Multiple MoveSelectors
7.3.1. unionMoveSelector
unionMoveSelector selects a Move by selecting 1 of its MoveSelector children to supply the next Move.
<unionMoveSelector>
<...MoveSelector/>
<...MoveSelector/>
<...MoveSelector/>
...
</unionMoveSelector>

<unionMoveSelector>
... <!-- Normal selector properties -->
<selectorProbabilityWeightFactoryClass>...ProbabilityWeightFactory</selectorProbabilityWeightFactoryClass>
<changeMoveSelector>
<fixedProbabilityWeight>...</fixedProbabilityWeight>
...
</changeMoveSelector>
<swapMoveSelector>
<fixedProbabilityWeight>...</fixedProbabilityWeight>
...
</swapMoveSelector>
<...MoveSelector>
<fixedProbabilityWeight>...</fixedProbabilityWeight>
...
</...MoveSelector>
...
</unionMoveSelector>

The selectorProbabilityWeightFactory determines, in selectionOrder RANDOM, how often a MoveSelector child is selected to supply the next Move. By default, each MoveSelector child has the same chance of being selected.

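This random child selection amounts to roulette-wheel selection over the probability weights. A self-contained sketch (not OptaPlanner's internal code; the class and method names are hypothetical):

```java
public class SelectorProbabilityDemo {

    // Given probability weights per child selector, pick a child index for a
    // uniform random draw r in [0, 1): cumulative-weight (roulette wheel) selection.
    static int pickChild(double[] weights, double r) {
        double total = 0.0;
        for (double w : weights) {
            total += w;
        }
        double cumulative = 0.0;
        for (int i = 0; i < weights.length; i++) {
            cumulative += weights[i] / total;
            if (r < cumulative) {
                return i;
            }
        }
        return weights.length - 1; // guard against rounding when r is near 1.0
    }

    public static void main(String[] args) {
        // changeMoveSelector : swapMoveSelector weighted 1 : 2,
        // so draws below 1/3 go to child 0 and the rest to child 1.
        double[] weights = {1.0, 2.0};
        System.out.println(pickChild(weights, 0.20)); // 0 -> changeMoveSelector
        System.out.println(pickChild(weights, 0.50)); // 1 -> swapMoveSelector
    }
}
```

With equal weights (the default), every child is equally likely; a fixedProbabilityWeight simply skews this wheel.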
Configure a fixedProbabilityWeight on such a child to select it more often. For example, the unionMoveSelector can return a SwapMove twice as often as a ChangeMove:
<unionMoveSelector>
<changeMoveSelector>
<fixedProbabilityWeight>1.0</fixedProbabilityWeight>
...
</changeMoveSelector>
<swapMoveSelector>
<fixedProbabilityWeight>2.0</fixedProbabilityWeight>
...
</swapMoveSelector>
</unionMoveSelector>

The number of possible ChangeMoves is very different from the number of possible SwapMoves, and furthermore it's problem dependent. To give each individual Move the same selection chance (as opposed to each MoveSelector), use the FairSelectorProbabilityWeightFactory:
<unionMoveSelector>
<selectorProbabilityWeightFactoryClass>org.optaplanner.core.impl.heuristic.selector.common.decorator.FairSelectorProbabilityWeightFactory</selectorProbabilityWeightFactoryClass>
<changeMoveSelector/>
<swapMoveSelector/>
</unionMoveSelector>

7.3.2. cartesianProductMoveSelector
cartesianProductMoveSelector selects a new CompositeMove. It builds that CompositeMove by selecting 1 Move per MoveSelector child and adding it to the CompositeMove.
<cartesianProductMoveSelector>
<...MoveSelector/>
<...MoveSelector/>
<...MoveSelector/>
...
</cartesianProductMoveSelector>

<cartesianProductMoveSelector>
... <!-- Normal selector properties -->
<ignoreEmptyChildIterators>true</ignoreEmptyChildIterators>
<changeMoveSelector>
...
</changeMoveSelector>
<swapMoveSelector>
...
</swapMoveSelector>
<...MoveSelector>
...
</...MoveSelector>
...
</cartesianProductMoveSelector>

The ignoreEmptyChildIterators property (true by default) will ignore every empty childMoveSelector to avoid returning no moves. For example: a cartesian product of changeMoveSelectors A and B, for which B is empty (because all its entities are immovable), returns no move if ignoreEmptyChildIterators is false, and returns the moves of A if ignoreEmptyChildIterators is true.
7.4. EntitySelector
<entitySelector/>
<entitySelector>
... <!-- Normal selector properties -->
<entityClass>org.optaplanner.examples.curriculumcourse.domain.Lecture</entityClass>
</entitySelector>

The entityClass property is only required if it cannot be deduced automatically because there are multiple entity classes.
7.5. ValueSelector
<valueSelector/>
<valueSelector>
... <!-- Normal selector properties -->
<variableName>room</variableName>
</valueSelector>

The variableName property is only required if it cannot be deduced automatically because there are multiple variables (for the related entity class).
In some cases, the entityClass from the EntitySelector needs to be downcast, which can be done with the downcastEntityClass property:
<valueSelector>
<downcastEntityClass>...LeadingExam</downcastEntityClass>
<variableName>period</variableName>
</valueSelector>
If a selected entity cannot be downcast, the ValueSelector is empty for that entity.
7.6. General Selector Features
7.6.1. CacheType: Create Moves Ahead of Time or Just In Time
A Selector's cacheType determines when a selection (such as a Move, an entity, a value, ...) is created and how long it lives.
Almost every Selector supports setting a cacheType:
<changeMoveSelector>
<cacheType>PHASE</cacheType>
...
</changeMoveSelector>
The following cacheTypes are supported:
- JUST_IN_TIME (default): Not cached. Construct each selection (Move, ...) just before it's used. This scales up well in memory footprint.
- STEP: Cached. Create each selection (Move, ...) at the beginning of a step and cache them in a list for the remainder of the step. This scales up badly in memory footprint.
- PHASE: Cached. Create each selection (Move, ...) at the beginning of a solver phase and cache them in a list for the remainder of the phase. Some selections cannot be phase cached because the list changes every step. This scales up badly in memory footprint, but has a slight performance gain.
- SOLVER: Cached. Create each selection (Move, ...) at the beginning of a Solver and cache them in a list for the remainder of the Solver. Some selections cannot be solver cached because the list changes every step. This scales up badly in memory footprint, but has a slight performance gain.
A cacheType can be set on composite selectors too:
<unionMoveSelector>
<cacheType>PHASE</cacheType>
<changeMoveSelector/>
<swapMoveSelector/>
...
</unionMoveSelector>
A nested selector of a cached selector must not use the same or a shorter-lived cacheType than its parent. For example: a STEP cached unionMoveSelector can hold a PHASE cached changeMoveSelector, but not a STEP cached changeMoveSelector.
7.6.2. SelectionOrder: Original, Sorted, Random, Shuffled or Probabilistic
A Selector's selectionOrder determines the order in which the selections (such as Moves, entities, values, ...) are iterated. An optimization algorithm will usually only iterate through a subset of its MoveSelector's selections, starting from the beginning, so the selectionOrder is critical to decide which Moves are actually evaluated.
Almost every Selector supports setting a selectionOrder:
<changeMoveSelector>
...
<selectionOrder>RANDOM</selectionOrder>
...
</changeMoveSelector>
The following selectionOrders are supported:
- ORIGINAL: Select the selections (Moves, entities, values, ...) in default order. Each selection will be selected only once.
  - For example: A0, A1, A2, A3, ..., B0, B1, B2, B3, ..., C0, C1, C2, C3, ...
- SORTED: Select the selections (Moves, entities, values, ...) in sorted order. Each selection will be selected only once. Requires cacheType >= STEP. Mostly used on an entitySelector or valueSelector for construction heuristics. See sorted selection.
  - For example: A0, B0, C0, ..., A2, B2, C2, ..., A1, B1, C1, ...
- RANDOM (default): Select the selections (Moves, entities, values, ...) in non-shuffled random order. A selection might be selected multiple times. This scales up well in performance because it does not require caching.
  - For example: C2, A3, B1, C2, A0, C0, ...
- SHUFFLED: Select the selections (Moves, entities, values, ...) in shuffled random order. Each selection will be selected only once. Requires cacheType >= STEP. This scales up badly in performance, not just because it requires caching, but also because a random number is generated for each element, even if it's not selected (which is the grand majority when scaling up).
  - For example: C2, A3, B1, A0, C0, ...
- PROBABILISTIC: Select the selections (Moves, entities, values, ...) in random order, based on the selection probability of each element. A selection with a higher probability has a higher chance to be selected than elements with a lower probability. A selection might be selected multiple times. Requires cacheType >= STEP. Mostly used on an entitySelector or valueSelector. See probabilistic selection.
  - For example: B1, B1, A1, B2, B1, C2, B1, B1, ...
A selectionOrder can be set on composite selectors too.
When a Selector is cached, all of its nested Selectors will naturally default to selectionOrder ORIGINAL. Avoid overwriting the selectionOrder of those nested Selectors.
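The difference between RANDOM and SHUFFLED can be sketched in plain Java (a toy illustration, not Planner API): RANDOM samples with replacement and needs no cache, while SHUFFLED shuffles a cached list so each element is selected exactly once.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Random;

public class SelectionOrderDemo {
    public static void main(String[] args) {
        List<String> moves = Arrays.asList("A0", "A1", "B0", "B1", "C0", "C1");
        Random random = new Random(0);

        // RANDOM: just-in-time sampling with replacement; no caching needed,
        // and an element may be selected multiple times.
        List<String> randomOrder = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            randomOrder.add(moves.get(random.nextInt(moves.size())));
        }

        // SHUFFLED: requires caching the whole list up front (cacheType >= STEP);
        // each element is selected exactly once.
        List<String> shuffledOrder = new ArrayList<>(moves);
        Collections.shuffle(shuffledOrder, random);

        System.out.println("RANDOM:   " + randomOrder);
        System.out.println("SHUFFLED: " + shuffledOrder);
        // SHUFFLED is a permutation, so it contains no duplicates:
        System.out.println(new HashSet<>(shuffledOrder).size() == moves.size()); // true
    }
}
```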
7.6.3. Recommended Combinations of CacheType and SelectionOrder
7.6.3.1. Just in Time Random Selection (default)
This combination is the default, so explicitly configuring the cacheType and selectionOrder is actually obsolete:
<unionMoveSelector>
<cacheType>JUST_IN_TIME</cacheType>
<selectionOrder>RANDOM</selectionOrder>
<changeMoveSelector/>
<swapMoveSelector/>
</unionMoveSelector>
When Iterator<Move>.next() is called, a child MoveSelector is randomly selected (1), which creates a random Move (2, 3, 4) that is then returned (5):

Notice that it never creates a list of Moves and it generates random numbers only for Moves that are actually selected.
7.6.3.2. Cached Shuffled Selection
<unionMoveSelector>
<cacheType>PHASE</cacheType>
<selectionOrder>SHUFFLED</selectionOrder>
<changeMoveSelector/>
<swapMoveSelector/>
</unionMoveSelector>
At the beginning of the phase (because of the PHASE cacheType), all moves are created (1) and cached (2). When MoveSelector.iterator() is called, the moves are shuffled (3). When Iterator<Move>.next() is called, the next element in the shuffled list is returned (4):

Notice that each Move will only be selected once, even though they are selected in random order.
Use cacheType PHASE if none of the (possibly nested) Selectors require STEP. Otherwise, do something like this:
<unionMoveSelector>
<cacheType>STEP</cacheType>
<selectionOrder>SHUFFLED</selectionOrder>
<changeMoveSelector>
<cacheType>PHASE</cacheType>
</changeMoveSelector>
<swapMoveSelector>
<cacheType>PHASE</cacheType>
</swapMoveSelector>
<pillarSwapMoveSelector/><!-- Does not support cacheType PHASE -->
</unionMoveSelector>
7.6.3.3. Cached Random Selection
<unionMoveSelector>
<cacheType>PHASE</cacheType>
<selectionOrder>RANDOM</selectionOrder>
<changeMoveSelector/>
<swapMoveSelector/>
</unionMoveSelector>
7.6.4. Filtered Selection
There can be certain moves that you don't want to select, because:
- The move is pointless and would only waste CPU time. For example, swapping 2 lectures of the same course will result in the same score and the same schedule because all lectures of 1 course are interchangeable (same teacher, same students, same topic).
- Doing the move would break a built-in hard constraint, so the solution would be infeasible, but the score function doesn't check built-in hard constraints (for a performance gain). For example, don't change a gym lecture to a room which is not a gym room.
Note
Any built-in hard constraint must probably be filtered on every move type of every solver phase. For example, if it filters the change move of Local Search, it must also filter the swap move that swaps the room of a gym lecture with another lecture for which the other lecture's original room isn't a gym room. Furthermore, it must also filter the change moves of the Construction Heuristics (which requires an advanced configuration).
Filtered selection can happen on any kind of Selector, including MoveSelector, EntitySelector or ValueSelector. It works with any cacheType and selectionOrder.

Filtering uses the interface SelectionFilter:
public interface SelectionFilter<T> {
boolean accept(ScoreDirector scoreDirector, T selection);
}
Implement the accept method to return false on a discarded selection. Unaccepted moves will not be selected and will therefore never have their doMove method called.
public class DifferentCourseSwapMoveFilter implements SelectionFilter<SwapMove> {
public boolean accept(ScoreDirector scoreDirector, SwapMove move) {
Lecture leftLecture = (Lecture) move.getLeftEntity();
Lecture rightLecture = (Lecture) move.getRightEntity();
return !leftLecture.getCourse().equals(rightLecture.getCourse());
}
}
Configure the filterClass on the moveSelector:
<swapMoveSelector>
<filterClass>org.optaplanner.examples.curriculumcourse.solver.move.DifferentCourseSwapMoveFilter</filterClass>
</swapMoveSelector>
You can also configure a filterClass on the entitySelector or valueSelector:
<changeMoveSelector>
<entitySelector>
<filterClass>...EntityFilter</filterClass>
</entitySelector>
</changeMoveSelector>
You can configure multiple filterClass elements on a single selector.
7.6.5. Sorted Selection
Sorted selection can happen on any kind of Selector, including MoveSelector, EntitySelector or ValueSelector. It does not work with cacheType JUST_IN_TIME and it only works with selectionOrder SORTED.
Because a construction heuristic such as FIRST_FIT_DECREASING implies that the EntitySelector is sorted, there is no need to explicitly configure a Selector with sorting. If you do explicitly configure the Selector, it overwrites the default settings of that construction heuristic.
7.6.5.1. Sorted Selection by SorterManner
Some Selector types implement a SorterManner out of the box:
- EntitySelector supports:
  - DECREASING_DIFFICULTY: Sorts the planning entities according to decreasing planning entity difficulty. Requires that planning entity difficulty is annotated on the domain model.
<entitySelector>
  <cacheType>PHASE</cacheType>
  <selectionOrder>SORTED</selectionOrder>
  <sorterManner>DECREASING_DIFFICULTY</sorterManner>
</entitySelector>
- ValueSelector supports:
  - INCREASING_STRENGTH: Sorts the planning values according to increasing planning value strength. Requires that planning value strength is annotated on the domain model.
<valueSelector>
  <cacheType>PHASE</cacheType>
  <selectionOrder>SORTED</selectionOrder>
  <sorterManner>INCREASING_STRENGTH</sorterManner>
</valueSelector>
7.6.5.2. Sorted Selection by Comparator
An easy way to sort a Selector is with a plain old Comparator:
public class CloudProcessDifficultyComparator implements Comparator<CloudProcess> {
public int compare(CloudProcess a, CloudProcess b) {
return new CompareToBuilder()
.append(a.getRequiredMultiplicand(), b.getRequiredMultiplicand())
.append(a.getId(), b.getId())
.toComparison();
}
}
Configure the Comparator on the Selector:
<entitySelector>
<cacheType>PHASE</cacheType>
<selectionOrder>SORTED</selectionOrder>
<sorterComparatorClass>...CloudProcessDifficultyComparator</sorterComparatorClass>
<sorterOrder>DESCENDING</sorterOrder>
</entitySelector>
7.6.5.3. Sorted Selection by SelectionSorterWeightFactory
If you need the entire Solution to sort a Selector, use a SelectionSorterWeightFactory instead:
public interface SelectionSorterWeightFactory<Sol extends Solution, T> {
Comparable createSorterWeight(Sol solution, T selection);
}
For example:
public class QueenDifficultyWeightFactory implements SelectionSorterWeightFactory<NQueens, Queen> {
public Comparable createSorterWeight(NQueens nQueens, Queen queen) {
int distanceFromMiddle = calculateDistanceFromMiddle(nQueens.getN(), queen.getColumnIndex());
return new QueenDifficultyWeight(queen, distanceFromMiddle);
}
// ...
public static class QueenDifficultyWeight implements Comparable<QueenDifficultyWeight> {
private final Queen queen;
private final int distanceFromMiddle;
public QueenDifficultyWeight(Queen queen, int distanceFromMiddle) {
this.queen = queen;
this.distanceFromMiddle = distanceFromMiddle;
}
public int compareTo(QueenDifficultyWeight other) {
return new CompareToBuilder()
// The more difficult queens have a lower distance to the middle
.append(other.distanceFromMiddle, distanceFromMiddle) // Decreasing
// Tie breaker
.append(queen.getColumnIndex(), other.queen.getColumnIndex())
.toComparison();
}
}
}
Configure it on the Selector:
<entitySelector>
<cacheType>PHASE</cacheType>
<selectionOrder>SORTED</selectionOrder>
<sorterWeightFactoryClass>...QueenDifficultyWeightFactory</sorterWeightFactoryClass>
<sorterOrder>DESCENDING</sorterOrder>
</entitySelector>
7.6.5.4. Sorted Selection by SelectionSorter
Alternatively, you can use the interface SelectionSorter directly:
public interface SelectionSorter<T> {
void sort(ScoreDirector scoreDirector, List<T> selectionList);
}
Configure it on the Selector:
<entitySelector>
<cacheType>PHASE</cacheType>
<selectionOrder>SORTED</selectionOrder>
<sorterClass>...MyEntitySorter</sorterClass>
</entitySelector>
7.6.6. Probabilistic Selection
Probabilistic selection can happen on any kind of Selector, including MoveSelector, EntitySelector or ValueSelector. It does not work with cacheType JUST_IN_TIME and it only works with selectionOrder PROBABILISTIC.

Each selection has a probabilityWeight, which determines the chance that that selection will be selected:
public interface SelectionProbabilityWeightFactory<T> {
double createProbabilityWeight(ScoreDirector scoreDirector, T selection);
}
Configure it on the Selector:
<entitySelector>
<cacheType>PHASE</cacheType>
<selectionOrder>PROBABILISTIC</selectionOrder>
<probabilityWeightFactoryClass>...MyEntityProbabilityWeightFactoryClass</probabilityWeightFactoryClass>
</entitySelector>
7.6.7. Limited Selection
To limit the number of selections per step, apply a selectedCountLimit on the selector:
<changeMoveSelector>
<selectedCountLimit>100</selectedCountLimit>
</changeMoveSelector>
To scale Local Search, setting an acceptedCountLimit is usually better than using a selectedCountLimit.
7.6.8. Mimic Selection (Record/Replay)
During mimic selection, one normal selector records its selection and one or more other selectors replay that selection. A recording selector needs an id. A replaying selector must reference a recorder's id with a mimicSelectorRef:
<cartesianProductMoveSelector>
<changeMoveSelector>
<entitySelector id="entitySelector"/>
<valueSelector>
<variableName>period</variableName>
</valueSelector>
</changeMoveSelector>
<changeMoveSelector>
<entitySelector mimicSelectorRef="entitySelector"/>
<valueSelector>
<variableName>room</variableName>
</valueSelector>
</changeMoveSelector>
</cartesianProductMoveSelector>
7.6.9. Nearby Selection


To define which entities or values are near, implement the NearbyDistanceMeter interface:
public interface NearbyDistanceMeter<O, D> {
double getNearbyDistance(O origin, D destination);
}
It returns a double which represents the distance:
public class CustomerNearbyDistanceMeter implements NearbyDistanceMeter<Customer, Standstill> {
public double getNearbyDistance(Customer origin, Standstill destination) {
return origin.getDistanceTo(destination);
}
}
To use nearby selection in a Local Search phase, add a nearbySelection element in the entitySelector or valueSelector and use mimic selection to specify which entity the selection should be nearby.
<unionMoveSelector>
<changeMoveSelector>
<entitySelector id="entitySelector1"/>
<valueSelector>
<nearbySelection>
<originEntitySelector mimicSelectorRef="entitySelector1"/>
<nearbyDistanceMeterClass>...CustomerNearbyDistanceMeter</nearbyDistanceMeterClass>
<parabolicDistributionSizeMaximum>40</parabolicDistributionSizeMaximum>
</nearbySelection>
</valueSelector>
</changeMoveSelector>
<swapMoveSelector>
<entitySelector id="entitySelector2"/>
<secondaryEntitySelector>
<nearbySelection>
<originEntitySelector mimicSelectorRef="entitySelector2"/>
<nearbyDistanceMeterClass>...CustomerNearbyDistanceMeter</nearbyDistanceMeterClass>
<parabolicDistributionSizeMaximum>40</parabolicDistributionSizeMaximum>
</nearbySelection>
</secondaryEntitySelector>
</swapMoveSelector>
<tailChainSwapMoveSelector>
<entitySelector id="entitySelector3"/>
<valueSelector>
<nearbySelection>
<originEntitySelector mimicSelectorRef="entitySelector3"/>
<nearbyDistanceMeterClass>...CustomerNearbyDistanceMeter</nearbyDistanceMeterClass>
<parabolicDistributionSizeMaximum>40</parabolicDistributionSizeMaximum>
</nearbySelection>
</valueSelector>
</tailChainSwapMoveSelector>
</unionMoveSelector>
A distributionSizeMaximum parameter should not be 1, because if the nearest is already the planning value of the current entity, then the only selectable move is not doable.
To allow every element to be selected regardless of the number of entities, only set the distribution type (so without a distributionSizeMaximum parameter):
<nearbySelection>
<nearbySelectionDistributionType>PARABOLIC_DISTRIBUTION</nearbySelectionDistributionType>
</nearbySelection>
The following NearbySelectionDistributionTypes are supported:
- BLOCK_DISTRIBUTION: Only the n nearest are selected, with an equal probability. For example, select the 20 nearest:
<nearbySelection>
  <blockDistributionSizeMaximum>20</blockDistributionSizeMaximum>
</nearbySelection>
- LINEAR_DISTRIBUTION: Nearest elements are selected with a higher probability. The probability decreases linearly.
<nearbySelection>
  <linearDistributionSizeMaximum>40</linearDistributionSizeMaximum>
</nearbySelection>
- PARABOLIC_DISTRIBUTION (recommended): Nearest elements are selected with a higher probability.
<nearbySelection>
  <parabolicDistributionSizeMaximum>80</parabolicDistributionSizeMaximum>
</nearbySelection>
- BETA_DISTRIBUTION: Selection according to a beta distribution. Slows down the solver significantly.
<nearbySelection>
  <betaDistributionAlpha>1</betaDistributionAlpha>
  <betaDistributionBeta>5</betaDistributionBeta>
</nearbySelection>
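To see why nearer elements dominate under a parabolic distribution, here is a toy weighted sampler in plain Java (an illustration of the distribution shape only, not Planner's implementation): element i out of size gets weight (size - i)^2.

```java
import java.util.Random;

public class ParabolicNearbyDemo {
    // Pick an index in [0, size): nearer indexes (lower rank) get a
    // parabolically higher weight.
    static int sampleParabolic(int size, Random random) {
        long total = 0;
        for (int i = 0; i < size; i++) {
            total += (long) (size - i) * (size - i);
        }
        long r = (long) (random.nextDouble() * total);
        for (int i = 0; i < size; i++) {
            long w = (long) (size - i) * (size - i);
            if (r < w) {
                return i;
            }
            r -= w;
        }
        return size - 1;
    }

    public static void main(String[] args) {
        Random random = new Random(0);
        int[] counts = new int[10];
        for (int i = 0; i < 100_000; i++) {
            counts[sampleParabolic(10, random)]++;
        }
        // The nearest rank (weight 100) is selected far more often
        // than the farthest rank (weight 1).
        System.out.println(counts[0] > counts[9]); // true
    }
}
```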
7.7. Custom Moves
7.7.1. Which Move Types Might be Missing in my Implementation?
7.7.2. Custom Moves Introduction
Instead of reusing the generic Moves (such as ChangeMove), you can also implement your own Moves. Generic and custom MoveSelectors can be combined as desired.
A custom Move can be tailored to work to the advantage of your constraints. For example, in examination scheduling, changing the period of an exam A also changes the period of all the exams that need to coincide with exam A.
A custom Move is also slightly faster than a generic Move. However, it's far more work to implement and much harder to avoid bugs. After implementing a custom Move, turn on environmentMode FULL_ASSERT to check for score corruptions.
7.7.3. The Interface Move
All moves implement the Move interface:
public interface Move {
boolean isMoveDoable(ScoreDirector scoreDirector);
Move createUndoMove(ScoreDirector scoreDirector);
void doMove(ScoreDirector scoreDirector);
Collection<? extends Object> getPlanningEntities();
Collection<? extends Object> getPlanningValues();
}
Let's take a look at a Move implementation for 4 queens which moves a queen to a different row:
public class RowChangeMove extends AbstractMove {
private Queen queen;
private Row toRow;
public RowChangeMove(Queen queen, Row toRow) {
this.queen = queen;
this.toRow = toRow;
}
// ... see below
}
An instance of RowChangeMove moves a queen from its current row to a different row.
Planner calls the doMove(ScoreDirector) method to do a move, which calls doMoveOnGenuineVariables(ScoreDirector). The Move implementation must notify the ScoreDirector of any changes it makes to the planning entity's variables:
public void doMoveOnGenuineVariables(ScoreDirector scoreDirector) {
scoreDirector.beforeVariableChanged(queen, "row"); // before changes are made to the queen.row
queen.setRow(toRow);
scoreDirector.afterVariableChanged(queen, "row"); // after changes are made to the queen.row
}
You need to call the scoreDirector.beforeVariableChanged(Object, String) and scoreDirector.afterVariableChanged(Object, String) methods directly before and after modifying the entity.
A Move can only change/add/remove planning entities; it must not change any of the problem facts.
Planner automatically filters out non doable moves by calling the isMoveDoable(ScoreDirector) method on a move. A non doable move is:
- A move that changes nothing on the current solution. For example, moving queen B0 to row 0 is not doable, because it is already there.
- A move that is impossible to do on the current solution. For example, moving queen B0 to row 10 is not doable because it would move it outside the board limits.
public boolean isMoveDoable(ScoreDirector scoreDirector) {
return !ObjectUtils.equals(queen.getRow(), toRow);
}
Moves that are not doable in the current solution can become doable in the Solution of a later step.
Each move has an undo move: a move (usually of the same type) which does the exact opposite. An undo move is created from a Move, before the Move has been done on the current solution:
public Move createUndoMove(ScoreDirector scoreDirector) {
return new RowChangeMove(queen, queen.getRow());
}
The solver can do and undo a Move more than once. In fact, many solver phases will iteratively do and undo a number of moves to evaluate them, before selecting one of those and doing that move again (without undoing it this time).
A Move must implement the getPlanningEntities() and getPlanningValues() methods. They are used by entity tabu and value tabu respectively. When they are called, the Move has already been done.
public List<? extends Object> getPlanningEntities() {
return Collections.singletonList(queen);
}
public Collection<? extends Object> getPlanningValues() {
return Collections.singletonList(toRow);
}
If the Move changes multiple planning entities, return all of them in getPlanningEntities() and return all their values (to which they are changing) in getPlanningValues().
public Collection<? extends Object> getPlanningEntities() {
return Arrays.asList(leftCloudProcess, rightCloudProcess);
}
public Collection<? extends Object> getPlanningValues() {
return Arrays.asList(leftCloudProcess.getComputer(), rightCloudProcess.getComputer());
}
A Move must implement the equals() and hashCode() methods. Two moves which make the same change on a solution should be equal.
public boolean equals(Object o) {
if (this == o) {
return true;
} else if (o instanceof RowChangeMove) {
RowChangeMove other = (RowChangeMove) o;
return new EqualsBuilder()
.append(queen, other.queen)
.append(toRow, other.toRow)
.isEquals();
} else {
return false;
}
}
public int hashCode() {
return new HashCodeBuilder()
.append(queen)
.append(toRow)
.toHashCode();
}
The instanceof check is important because a move will be compared to moves of other move types if you use more than 1 move type.
Implement the toString() method to keep Planner's logs readable:
public String toString() {
return queen + " {" + queen.getRow() + " -> " + toRow + "}";
}
Now that we can implement a single custom Move, let's take a look at generating such custom moves.
7.7.4. MoveListFactory: the Easy Way to Generate Custom Moves
The easiest way to generate custom moves is by implementing the interface MoveListFactory:
public interface MoveListFactory<S extends Solution> {
List<Move> createMoveList(S solution);
}
For example:
public class RowChangeMoveFactory implements MoveListFactory<NQueens> {
public List<Move> createMoveList(NQueens nQueens) {
List<Move> moveList = new ArrayList<Move>();
for (Queen queen : nQueens.getQueenList()) {
for (Row toRow : nQueens.getRowList()) {
moveList.add(new RowChangeMove(queen, toRow));
}
}
return moveList;
}
}
Simple configuration (which can be nested in a unionMoveSelector just like any other MoveSelector):
<moveListFactory>
<moveListFactoryClass>org.optaplanner.examples.nqueens.solver.move.factory.RowChangeMoveFactory</moveListFactoryClass>
</moveListFactory>
Advanced configuration:
<moveListFactory>
... <!-- Normal moveSelector properties -->
<moveListFactoryClass>org.optaplanner.examples.nqueens.solver.move.factory.RowChangeMoveFactory</moveListFactoryClass>
</moveListFactory>
Because the MoveListFactory generates all moves at once in a List<Move>, it does not support cacheType JUST_IN_TIME. Therefore, moveListFactory uses cacheType STEP by default and it scales badly in memory footprint.
7.7.5. MoveIteratorFactory: Generate Custom Moves Just in Time
Use this advanced form to generate custom moves just in time, by implementing the MoveIteratorFactory interface:
public interface MoveIteratorFactory {
long getSize(ScoreDirector scoreDirector);
Iterator<Move> createOriginalMoveIterator(ScoreDirector scoreDirector);
Iterator<Move> createRandomMoveIterator(ScoreDirector scoreDirector, Random workingRandom);
}
The getSize() method must give an estimation of the size. It doesn't need to be correct. The createOriginalMoveIterator method is called if the selectionOrder is ORIGINAL or if it is cached. The createRandomMoveIterator method is called for selectionOrder RANDOM combined with cacheType JUST_IN_TIME.
Don't create the Moves in advance when creating the Iterator<Move>: the whole purpose of MoveIteratorFactory over MoveListFactory is to give you the ability to create a Move just in time in the Iterator's next() method.
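The just-in-time idea can be sketched in plain Java (a toy iterator over string "moves", not the Planner MoveIteratorFactory API): no move list is built up front; each move object is only constructed inside next().

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.Random;

public class JustInTimeMoveIterator implements Iterator<String> {
    // Hypothetical 4-queens data: queens A-D, rows 0-3.
    private final List<String> queens = Arrays.asList("A", "B", "C", "D");
    private final int rowCount = 4;
    private final Random workingRandom;

    JustInTimeMoveIterator(Random workingRandom) {
        this.workingRandom = workingRandom;
    }

    public boolean hasNext() {
        return true; // a random iterator never runs out of moves
    }

    public String next() {
        // The "move" is created just in time, only when it's actually needed,
        // instead of materializing all queens * rows moves in a list.
        String queen = queens.get(workingRandom.nextInt(queens.size()));
        int toRow = workingRandom.nextInt(rowCount);
        return queen + " -> row " + toRow;
    }

    public static void main(String[] args) {
        Iterator<String> it = new JustInTimeMoveIterator(new Random(0));
        for (int i = 0; i < 3; i++) {
            System.out.println(it.next());
        }
    }
}
```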
Simple configuration (which can be nested in a unionMoveSelector just like any other MoveSelector):
<moveIteratorFactory>
<moveIteratorFactoryClass>...</moveIteratorFactoryClass>
</moveIteratorFactory>
Advanced configuration:
<moveIteratorFactory>
... <!-- Normal moveSelector properties -->
<moveIteratorFactoryClass>...</moveIteratorFactoryClass>
</moveIteratorFactory>
Chapter 8. Exhaustive Search
8.1. Overview
8.2. Brute Force
8.2.1. Algorithm Description

8.2.2. Configuration
<solver>
...
<exhaustiveSearch>
<exhaustiveSearchType>BRUTE_FORCE</exhaustiveSearchType>
</exhaustiveSearch>
</solver>
8.3. Branch And Bound
8.3.1. Algorithm Description
-1.
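Branch And Bound can be sketched in plain Java. The following toy (not Planner's implementation; the 4-queens data and the conflict-counting score are assumed for illustration) explores assignments depth-first and prunes any branch whose optimistic bound cannot beat the best pessimistic bound found so far:

```java
import java.util.Arrays;

public class BranchAndBoundSketch {
    static final int N = 4;
    static int bestScore = Integer.MIN_VALUE; // pessimistic bound
    static int[] bestRows;

    public static void main(String[] args) {
        search(new int[N], 0, 0);
        // For 4 queens a conflict-free solution exists, so the best score is 0.
        System.out.println(bestScore + " " + Arrays.toString(bestRows));
        // prints: 0 [1, 3, 0, 2]
    }

    // score = minus the number of attacking queen pairs placed so far.
    // Placing more queens can only keep or lower it, so the current score is
    // a valid optimistic bound: prune when it cannot beat bestScore.
    static void search(int[] rows, int col, int score) {
        if (bestRows != null && score <= bestScore) {
            return; // prune: this branch cannot improve on the best solution
        }
        if (col == N) {
            bestScore = score; // new pessimistic bound
            bestRows = rows.clone();
            return;
        }
        for (int row = 0; row < N; row++) {
            int conflicts = 0;
            for (int c = 0; c < col; c++) {
                if (rows[c] == row || Math.abs(rows[c] - row) == col - c) {
                    conflicts++;
                }
            }
            rows[col] = row;
            search(rows, col + 1, score - conflicts);
        }
    }
}
```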

8.3.2. Configuration
<solver>
...
<exhaustiveSearch>
<exhaustiveSearchType>BRANCH_AND_BOUND</exhaustiveSearchType>
</exhaustiveSearch>
</solver>
For the pruning to work with the default ScoreBounder, the InitializingScoreTrend should be set. Especially an InitializingScoreTrend of ONLY_DOWN (or one that at least has ONLY_DOWN in the leading score levels) prunes a lot.
<exhaustiveSearch>
<exhaustiveSearchType>BRANCH_AND_BOUND</exhaustiveSearchType>
<nodeExplorationType>DEPTH_FIRST</nodeExplorationType>
<entitySorterManner>DECREASING_DIFFICULTY_IF_AVAILABLE</entitySorterManner>
<valueSorterManner>INCREASING_STRENGTH_IF_AVAILABLE</valueSorterManner>
</exhaustiveSearch>
The nodeExplorationType options are:
- DEPTH_FIRST (default): Explore deeper nodes first (and then a better score and then a better optimistic bound). Deeper nodes (especially leaf nodes) often improve the pessimistic bound. A better pessimistic bound allows pruning more nodes to reduce the search space.
<exhaustiveSearch>
  <exhaustiveSearchType>BRANCH_AND_BOUND</exhaustiveSearchType>
  <nodeExplorationType>DEPTH_FIRST</nodeExplorationType>
</exhaustiveSearch>
- BREADTH_FIRST (not recommended): Explore nodes layer by layer (and then a better score and then a better optimistic bound). Scales terribly in memory (and usually in performance too).
<exhaustiveSearch>
  <exhaustiveSearchType>BRANCH_AND_BOUND</exhaustiveSearchType>
  <nodeExplorationType>BREADTH_FIRST</nodeExplorationType>
</exhaustiveSearch>
- SCORE_FIRST: Explore nodes with a better score first (and then a better optimistic bound and then deeper nodes first). Might scale as terribly as BREADTH_FIRST in some cases.
<exhaustiveSearch>
  <exhaustiveSearchType>BRANCH_AND_BOUND</exhaustiveSearchType>
  <nodeExplorationType>SCORE_FIRST</nodeExplorationType>
</exhaustiveSearch>
- OPTIMISTIC_BOUND_FIRST: Explore nodes with a better optimistic bound first (and then a better score and then deeper nodes first). Might scale as terribly as BREADTH_FIRST in some cases.
<exhaustiveSearch>
  <exhaustiveSearchType>BRANCH_AND_BOUND</exhaustiveSearchType>
  <nodeExplorationType>OPTIMISTIC_BOUND_FIRST</nodeExplorationType>
</exhaustiveSearch>
The entitySorterManner options are:
- DECREASING_DIFFICULTY: Initialize the more difficult planning entities first. This usually increases pruning (and therefore improves scalability). Requires the model to support planning entity difficulty comparison.
- DECREASING_DIFFICULTY_IF_AVAILABLE (default): If the model supports planning entity difficulty comparison, behave like DECREASING_DIFFICULTY, else like NONE.
- NONE: Initialize the planning entities in original order.
The valueSorterManner options are:
- INCREASING_STRENGTH: Evaluate the planning values in increasing strength. Requires the model to support planning value strength comparison.
- INCREASING_STRENGTH_IF_AVAILABLE (default): If the model supports planning value strength comparison, behave like INCREASING_STRENGTH, else like NONE.
- DECREASING_STRENGTH: Evaluate the planning values in decreasing strength. Requires the model to support planning value strength comparison.
- DECREASING_STRENGTH_IF_AVAILABLE: If the model supports planning value strength comparison, behave like DECREASING_STRENGTH, else like NONE.
- NONE: Try the planning values in original order.
8.4. Scalability of Exhaustive Search
Exhaustive Search algorithms suffer from 2 big scalability issues:
- They scale terribly memory wise.
- They scale horribly performance wise.


Chapter 9. Construction Heuristics
9.1. Overview
A construction heuristic terminates automatically, so there is usually no need to configure a Termination on the construction heuristic phase specifically.
9.2. First Fit
9.2.1. Algorithm Description

Notice that it starts by putting Queen A into row 0 (and never moving it later), which makes it impossible to reach the optimal solution. Suffixing this construction heuristic with metaheuristics can remedy that.
9.2.2. Configuration
<constructionHeuristic>
<constructionHeuristicType>FIRST_FIT</constructionHeuristicType>
</constructionHeuristic>
If the InitializingScoreTrend is ONLY_DOWN, this algorithm is faster: for an entity, it picks the first move for which the score does not deteriorate the last step score, ignoring all subsequent moves.
9.3. First Fit Decreasing
9.3.1. Algorithm Description

9.3.2. Configuration
<constructionHeuristic>
<constructionHeuristicType>FIRST_FIT_DECREASING</constructionHeuristicType>
</constructionHeuristic>
If the InitializingScoreTrend is ONLY_DOWN, this algorithm is faster: for an entity, it picks the first move for which the score does not deteriorate the last step score, ignoring all subsequent moves.
9.4. Weakest Fit
9.4.1. Algorithm Description
9.4.2. Configuration
<constructionHeuristic>
<constructionHeuristicType>WEAKEST_FIT</constructionHeuristicType>
</constructionHeuristic>
If the InitializingScoreTrend is ONLY_DOWN, this algorithm is faster: for an entity, it picks the first move for which the score does not deteriorate the last step score, ignoring all subsequent moves.
9.5. Weakest Fit Decreasing
9.5.1. Algorithm Description
9.5.2. Configuration
<constructionHeuristic>
<constructionHeuristicType>WEAKEST_FIT_DECREASING</constructionHeuristicType>
</constructionHeuristic>
If the InitializingScoreTrend is ONLY_DOWN, this algorithm is faster: for an entity, it picks the first move for which the score does not deteriorate the last step score, ignoring all subsequent moves.
9.6. Strongest Fit
9.6.1. Algorithm Description
9.6.2. Configuration
<constructionHeuristic>
<constructionHeuristicType>STRONGEST_FIT</constructionHeuristicType>
</constructionHeuristic>
If the InitializingScoreTrend is ONLY_DOWN, this algorithm is faster: for an entity, it picks the first move for which the score does not deteriorate the last step score, ignoring all subsequent moves.
9.7. Strongest Fit Decreasing
9.7.1. Algorithm Description
9.7.2. Configuration
<constructionHeuristic>
<constructionHeuristicType>STRONGEST_FIT_DECREASING</constructionHeuristicType>
</constructionHeuristic>
If the InitializingScoreTrend is ONLY_DOWN, this algorithm is faster: for an entity, it picks the first move for which the score does not deteriorate the last step score, ignoring all subsequent moves.
9.8. Allocate Entity From Queue
9.8.1. Algorithm Description
- Put all entities in a queue.
- Assign the first entity (from that queue) to the best value.
- Repeat until all entities are assigned.
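The steps above can be sketched in plain Java (a toy illustration; the exam entities, period values, and clash-penalizing score function are assumptions, not Planner API):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class QueuedPlacerSketch {
    public static void main(String[] args) {
        // Step 1: put all entities in a queue.
        Deque<String> entityQueue = new ArrayDeque<>(Arrays.asList("examA", "examB", "examC"));
        List<Integer> values = Arrays.asList(1, 2, 3); // hypothetical periods
        Map<String, Integer> assignment = new LinkedHashMap<>();

        // Steps 2-3: assign the first entity to the best value, repeat.
        while (!entityQueue.isEmpty()) {
            String entity = entityQueue.poll();
            int bestValue = values.get(0);
            int bestScore = Integer.MIN_VALUE;
            for (int value : values) {
                int score = score(assignment, value);
                if (score > bestScore) { // the first fit wins ties
                    bestScore = score;
                    bestValue = value;
                }
            }
            assignment.put(entity, bestValue);
        }
        System.out.println(assignment); // prints: {examA=1, examB=2, examC=3}
    }

    // Hypothetical score: penalize assigning the same period twice.
    static int score(Map<String, Integer> assignment, int value) {
        int clashes = 0;
        for (int assigned : assignment.values()) {
            if (assigned == value) {
                clashes--;
            }
        }
        return clashes;
    }
}
```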
9.8.2. Configuration
<constructionHeuristic>
<constructionHeuristicType>ALLOCATE_ENTITY_FROM_QUEUE</constructionHeuristicType>
</constructionHeuristic>
Advanced configuration:
<constructionHeuristic>
<constructionHeuristicType>ALLOCATE_ENTITY_FROM_QUEUE</constructionHeuristicType>
<entitySorterManner>DECREASING_DIFFICULTY_IF_AVAILABLE</entitySorterManner>
<valueSorterManner>INCREASING_STRENGTH_IF_AVAILABLE</valueSorterManner>
</constructionHeuristic>
The entitySorterManner options are:
- DECREASING_DIFFICULTY: Initialize the more difficult planning entities first. This usually increases pruning (and therefore improves scalability). Requires the model to support planning entity difficulty comparison.
- DECREASING_DIFFICULTY_IF_AVAILABLE (default): If the model supports planning entity difficulty comparison, behave like DECREASING_DIFFICULTY, else like NONE.
- NONE: Initialize the planning entities in original order.
The valueSorterManner options are:
- INCREASING_STRENGTH: Evaluate the planning values in increasing strength. Requires the model to support planning value strength comparison.
- INCREASING_STRENGTH_IF_AVAILABLE (default): If the model supports planning value strength comparison, behave like INCREASING_STRENGTH, else like NONE.
- DECREASING_STRENGTH: Evaluate the planning values in decreasing strength. Requires the model to support planning value strength comparison.
- DECREASING_STRENGTH_IF_AVAILABLE: If the model supports planning value strength comparison, behave like DECREASING_STRENGTH, else like NONE.
- NONE: Try the planning values in original order.
<constructionHeuristic>
<queuedEntityPlacer>
<entitySelector id="placerEntitySelector">
<cacheType>PHASE</cacheType>
<selectionOrder>SORTED</selectionOrder>
<sorterManner>DECREASING_DIFFICULTY</sorterManner>
</entitySelector>
<changeMoveSelector>
<entitySelector mimicSelectorRef="placerEntitySelector"/>
<valueSelector>
<cacheType>PHASE</cacheType>
<selectionOrder>SORTED</selectionOrder>
<sorterManner>INCREASING_STRENGTH</sorterManner>
</valueSelector>
</changeMoveSelector>
</queuedEntityPlacer>
</constructionHeuristic>
Per step, the QueuedEntityPlacer selects 1 uninitialized entity from the EntitySelector and applies the winning Move (out of all the moves for that entity generated by the MoveSelector). The mimic selection ensures that the winning Move changes (only) the selected entity.
Selector customization (such as filtering and limiting) is supported too.
9.8.3. Multiple Variables
There are 2 ways to deal with multiple variables, depending on how their ChangeMoves are combined:
- Cartesian product of the ChangeMoves (default): All variables of the selected entity are assigned together. This usually gives far better results (especially for timetabling use cases).
- Sequential ChangeMoves: One variable is assigned at a time. Scales much better, especially for 3 or more variables.
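The difference in move counts follows directly from the arithmetic. For a hypothetical domain with 200 rooms and 40 periods per lecture, the two strategies compare as follows:

```java
public class MoveCountComparison {
    public static void main(String[] args) {
        // Hypothetical domain sizes: 200 rooms and 40 periods per lecture.
        int rooms = 200;
        int periods = 40;

        // Cartesian product: every (room, period) pair is one composite move.
        int cartesianProductMoves = rooms * periods;
        // Sequential: one variable is assigned at a time.
        int sequentialMoves = rooms + periods;

        System.out.println(cartesianProductMoves); // 8000
        System.out.println(sequentialMoves);       // 240
    }
}
```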
For example, with 200 rooms and 40 periods, a cartesian product of the ChangeMoves will select 8000 moves per entity:
<constructionHeuristic>
<queuedEntityPlacer>
<entitySelector id="placerEntitySelector">
<cacheType>PHASE</cacheType>
</entitySelector>
<cartesianProductMoveSelector>
<changeMoveSelector>
<entitySelector mimicSelectorRef="placerEntitySelector"/>
<valueSelector>
<variableName>room</variableName>
</valueSelector>
</changeMoveSelector>
<changeMoveSelector>
<entitySelector mimicSelectorRef="placerEntitySelector"/>
<valueSelector>
<variableName>period</variableName>
</valueSelector>
</changeMoveSelector>
</cartesianProductMoveSelector>
</queuedEntityPlacer>
...
</constructionHeuristic>
The following sequential ChangeMoves will select 240 moves per entity:
<constructionHeuristic>
<queuedEntityPlacer>
<entitySelector id="placerEntitySelector">
<cacheType>PHASE</cacheType>
</entitySelector>
<changeMoveSelector>
<entitySelector mimicSelectorRef="placerEntitySelector"/>
<valueSelector>
<variableName>period</variableName>
</valueSelector>
</changeMoveSelector>
<changeMoveSelector>
<entitySelector mimicSelectorRef="placerEntitySelector"/>
<valueSelector>
<variableName>room</variableName>
</valueSelector>
</changeMoveSelector>
</queuedEntityPlacer>
...
</constructionHeuristic>
With sequential ChangeMoves, the order of the variables is important. In the example above, it's better to select the period first (instead of the other way around), because there are more hard constraints that do not involve the room (for example: no teacher should teach 2 lectures at the same time). Let the Benchmarker guide you.
With 3 or more variables, you can even combine the Cartesian product and sequential techniques:
<constructionHeuristic>
<queuedEntityPlacer>
...
<cartesianProductMoveSelector>
<changeMoveSelector>...</changeMoveSelector>
<changeMoveSelector>...</changeMoveSelector>
</cartesianProductMoveSelector>
<changeMoveSelector>...</changeMoveSelector>
</queuedEntityPlacer>
...
</constructionHeuristic>
9.8.4. Multiple Entity Classes
The easiest way to deal with multiple entity classes is to run a separate Construction Heuristic for each entity class:
<constructionHeuristic>
<queuedEntityPlacer>
<entitySelector id="placerEntitySelector">
<cacheType>PHASE</cacheType>
<entityClass>...DogEntity</entityClass>
</entitySelector>
<changeMoveSelector>
<entitySelector mimicSelectorRef="placerEntitySelector"/>
</changeMoveSelector>
</queuedEntityPlacer>
...
</constructionHeuristic>
<constructionHeuristic>
<queuedEntityPlacer>
<entitySelector id="placerEntitySelector">
<cacheType>PHASE</cacheType>
<entityClass>...CatEntity</entityClass>
</entitySelector>
<changeMoveSelector>
<entitySelector mimicSelectorRef="placerEntitySelector"/>
</changeMoveSelector>
</queuedEntityPlacer>
...
</constructionHeuristic>
9.8.5. Pick Early Type
There are several pick early types for Construction Heuristics:
- NEVER: Evaluate all the selected moves to initialize the variable(s). This is the default if the InitializingScoreTrend is not ONLY_DOWN.
  <constructionHeuristic>
    ...
    <forager>
      <pickEarlyType>NEVER</pickEarlyType>
    </forager>
  </constructionHeuristic>
- FIRST_NON_DETERIORATING_SCORE: Initialize the variable(s) with the first move that doesn't deteriorate the score, ignore the remaining selected moves. This is the default if the InitializingScoreTrend is ONLY_DOWN.
  <constructionHeuristic>
    ...
    <forager>
      <pickEarlyType>FIRST_NON_DETERIORATING_SCORE</pickEarlyType>
    </forager>
  </constructionHeuristic>
  Note: If there are only negative constraints, but the InitializingScoreTrend is strictly not ONLY_DOWN, it can sometimes make sense to apply FIRST_NON_DETERIORATING_SCORE. Use the Benchmarker to decide if the score quality loss is worth the time gain.
- FIRST_FEASIBLE_SCORE: Initialize the variable(s) with the first move that has a feasible score.
  <constructionHeuristic>
    ...
    <forager>
      <pickEarlyType>FIRST_FEASIBLE_SCORE</pickEarlyType>
    </forager>
  </constructionHeuristic>
  If the InitializingScoreTrend is ONLY_DOWN, use FIRST_FEASIBLE_SCORE_OR_NON_DETERIORATING_HARD instead, because that's faster without any disadvantages.
- FIRST_FEASIBLE_SCORE_OR_NON_DETERIORATING_HARD: Initialize the variable(s) with the first move that doesn't deteriorate the feasibility of the score any further.
  <constructionHeuristic>
    ...
    <forager>
      <pickEarlyType>FIRST_FEASIBLE_SCORE_OR_NON_DETERIORATING_HARD</pickEarlyType>
    </forager>
  </constructionHeuristic>
9.9. Allocate To Value From Queue
9.9.1. Algorithm Description
- Put all values in a round-robin queue.
- Assign the best entity to the first value (from that queue).
- Repeat until all entities are assigned.
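The three steps above can be sketched in plain Java (illustrative names only, not the real Planner implementation; move scoring is ignored and entities are simply taken in input order):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy sketch of "allocate to value from queue": values rotate through a
// round-robin queue and the next entity is assigned to the value at the head.
public class RoundRobinAllocation {
    public static List<String> allocate(List<String> entities, List<String> values) {
        Deque<String> valueQueue = new ArrayDeque<>(values); // round-robin queue of values
        List<String> assignments = new ArrayList<>();
        for (String entity : entities) {
            String value = valueQueue.poll(); // take the first value from the queue
            assignments.add(entity + "->" + value);
            valueQueue.add(value);            // rotate it to the back
        }
        return assignments;
    }
}
```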
9.9.2. Configuration
9.10. Cheapest Insertion
9.10.1. Algorithm Description

9.10.2. Configuration
<constructionHeuristic>
<constructionHeuristicType>CHEAPEST_INSERTION</constructionHeuristicType>
</constructionHeuristic>
If the InitializingScoreTrend is ONLY_DOWN, this algorithm is faster: for an entity, it picks the first move for which the score does not deteriorate the last step score, ignoring all subsequent moves.
9.11. Regret Insertion
9.11.1. Algorithm Description
9.11.2. Configuration
9.12. Allocate From Pool
9.12.1. Algorithm Description
- Put all entity-value combinations in a pool.
- Assign the best entity to best value.
- Repeat until all entities are assigned.
9.12.2. Configuration
<constructionHeuristic>
<constructionHeuristicType>ALLOCATE_FROM_POOL</constructionHeuristicType>
</constructionHeuristic>
To customize the entity or value sorting, configure the sorter manners:
<constructionHeuristic>
<constructionHeuristicType>ALLOCATE_FROM_POOL</constructionHeuristicType>
<entitySorterManner>DECREASING_DIFFICULTY_IF_AVAILABLE</entitySorterManner>
<valueSorterManner>INCREASING_STRENGTH_IF_AVAILABLE</valueSorterManner>
</constructionHeuristic>
Advanced detailed configuration:
<constructionHeuristic>
<pooledEntityPlacer>
<changeMoveSelector>
<entitySelector id="placerEntitySelector">
<cacheType>PHASE</cacheType>
<selectionOrder>SORTED</selectionOrder>
<sorterManner>DECREASING_DIFFICULTY</sorterManner>
</entitySelector>
<valueSelector>
<cacheType>PHASE</cacheType>
<selectionOrder>SORTED</selectionOrder>
<sorterManner>INCREASING_STRENGTH</sorterManner>
</valueSelector>
</changeMoveSelector>
</pooledEntityPlacer>
</constructionHeuristic>
A PooledEntityPlacer applies the winning Move (out of all the moves generated by the MoveSelector).
Selector customization (such as filtering and limiting) is supported too.
Chapter 10. Local Search
10.1. Overview
10.2. Local Search Concepts
10.2.1. Step by Step
A step is the winning Move. Local Search tries a number of moves on the current solution and picks the best accepted move as the step:
Figure 10.1. Decide the next step at step 0 (4 queens example)

Because the move B0 to B3 has the highest score (-3), it is picked as the next step. If multiple moves have the same highest score, one is picked randomly, in this case B0 to B3. Note that C0 to C3 (not shown) could also have been picked because it also has the score -3.
Figure 10.2. All steps (4 queens example)

- B0 to B3
- D0 to B2
- A0 to B1
Turn on debug logging for the category org.optaplanner to show those steps in the log:
INFO Solving started: time spent (0), best score (-6), random (JDK with seed 0).
DEBUG LS step (0), time spent (20), score (-3), new best score (-3), accepted/selected move count (12/12), picked move (Queen-1 {Row-0 -> Row-3}).
DEBUG LS step (1), time spent (31), score (-1), new best score (-1), accepted/selected move count (12/12), picked move (Queen-3 {Row-0 -> Row-2}).
DEBUG LS step (2), time spent (40), score (0), new best score (0), accepted/selected move count (12/12), picked move (Queen-0 {Row-0 -> Row-1}).
INFO Local Search phase (0) ended: step total (3), time spent (41), best score (0).
INFO Solving ended: time spent (41), best score (0), average calculate count per second (1780).
Notice that the log uses the toString() method of the Move implementation, which returns for example "Queen-1 {Row-0 -> Row-3}".
10.2.2. Decide the Next Step
The solver decides the next step with the aid of 3 configurable components:
- A MoveSelector which selects the possible moves of the current solution. See the chapter on move and neighborhood selection.
- An Acceptor which filters out unacceptable moves.
- A Forager which gathers accepted moves and picks the next step from them.
<localSearch>
<unionMoveSelector>
...
</unionMoveSelector>
<acceptor>
...
</acceptor>
<forager>
...
</forager>
</localSearch>
In the example above, the MoveSelector generated the moves shown with the blue lines, the Acceptor accepted all of them and the Forager picked the move B0 to B3.

Turn on trace logging to show the decision making in the log:
INFO Solver started: time spent (0), score (-6), new best score (-6), random (JDK with seed 0).
TRACE Move index (0) not doable, ignoring move (Queen-0 {Row-0 -> Row-0}).
TRACE Move index (1), score (-4), accepted (true), move (Queen-0 {Row-0 -> Row-1}).
TRACE Move index (2), score (-4), accepted (true), move (Queen-0 {Row-0 -> Row-2}).
TRACE Move index (3), score (-4), accepted (true), move (Queen-0 {Row-0 -> Row-3}).
...
TRACE Move index (6), score (-3), accepted (true), move (Queen-1 {Row-0 -> Row-3}).
...
TRACE Move index (9), score (-3), accepted (true), move (Queen-2 {Row-0 -> Row-3}).
...
TRACE Move index (12), score (-4), accepted (true), move (Queen-3 {Row-0 -> Row-3}).
DEBUG LS step (0), time spent (6), score (-3), new best score (-3), accepted/selected move count (12/12), picked move (Queen-1 {Row-0 -> Row-3}).
...
The Solver remembers the best solution it has encountered through the entire search path. Each time the current solution is better than the last best solution, the current solution is cloned and referenced as the new best solution.

10.2.3. Acceptor
An Acceptor is used (together with a Forager) to activate Tabu Search, Simulated Annealing, Late Acceptance, and other algorithms. For each move, it checks whether the move is accepted or not.
You can implement a custom Acceptor, but the built-in acceptors should suffice for most needs. You can also combine multiple acceptors.
10.2.4. Forager
A Forager gathers all accepted moves and picks the move which is the next step. Normally it picks the accepted move with the highest score. If several accepted moves have the highest score, one is picked randomly to break the tie. Breaking ties randomly leads to better results.
It is possible to disable random tie breaking by setting breakTieRandomly to false, but that's almost never a good idea:
- If an earlier move is better than a later move with the same score, the score calculator should add an extra softer score level to score the first move as slightly better. Don't rely on move selection order to enforce that.
- Random tie breaking does not affect reproducibility.
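The forager's tie-breaking behavior can be sketched as follows (illustrative names, not the real Forager code): keep all accepted moves that share the highest score, then pick one of them at random.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of how a forager might pick the next step among accepted moves.
public class TieBreakingForager {
    public static int pickBestIndex(int[] scores, Random random) {
        int bestScore = Integer.MIN_VALUE;
        List<Integer> bestIndexes = new ArrayList<>();
        for (int i = 0; i < scores.length; i++) {
            if (scores[i] > bestScore) {         // new single best: reset the tie list
                bestScore = scores[i];
                bestIndexes.clear();
                bestIndexes.add(i);
            } else if (scores[i] == bestScore) { // tie: remember this candidate too
                bestIndexes.add(i);
            }
        }
        // breakTieRandomly: pick one of the equally scored moves at random
        return bestIndexes.get(random.nextInt(bestIndexes.size()));
    }
}
```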
10.2.4.1. Accepted Count Limit
When there are many possible moves, it becomes inefficient to evaluate all of them at every step. To evaluate only a subset of all the moves, configure:
- An acceptedCountLimit integer, which specifies how many accepted moves should be evaluated during each step. By default, all accepted moves are evaluated at every step.
  <forager>
    <acceptedCountLimit>1000</acceptedCountLimit>
  </forager>
Unlike the n queens problem, real world problems require the use of an acceptedCountLimit. Start from an acceptedCountLimit that takes a step in less than 2 seconds. Turn on INFO logging to see the step times. Use the Benchmarker to tweak the value.
With a low acceptedCountLimit (so a fast stepping algorithm), it is recommended to avoid using selectionOrder SHUFFLED, because the shuffling generates a random number for every element in the selector, taking up a lot of time, while only a few elements are actually selected.
10.2.4.2. Pick Early Type
A forager can pick a move early during a step, ignoring subsequent selected moves. These pickEarlyTypes are supported:
- NEVER: A move is never picked early: all accepted moves are evaluated that the selection allows. This is the default.
  <forager>
    <pickEarlyType>NEVER</pickEarlyType>
  </forager>
- FIRST_BEST_SCORE_IMPROVING: Pick the first accepted move that improves the best score. If none improve the best score, it behaves exactly like the pickEarlyType NEVER.
  <forager>
    <pickEarlyType>FIRST_BEST_SCORE_IMPROVING</pickEarlyType>
  </forager>
- FIRST_LAST_STEP_SCORE_IMPROVING: Pick the first accepted move that improves the last step score. If none improve the last step score, it behaves exactly like the pickEarlyType NEVER.
  <forager>
    <pickEarlyType>FIRST_LAST_STEP_SCORE_IMPROVING</pickEarlyType>
  </forager>
10.3. Hill Climbing (Simple Local Search)
10.3.1. Algorithm Description

10.3.2. Stuck in Local Optima

10.3.3. Configuration
<localSearch>
<localSearchType>HILL_CLIMBING</localSearchType>
</localSearch>
Advanced configuration:
<localSearch>
...
<acceptor>
<acceptorType>HILL_CLIMBING</acceptorType>
</acceptor>
<forager>
<acceptedCountLimit>1</acceptedCountLimit>
</forager>
</localSearch>
10.4. Tabu Search
10.4.1. Algorithm Description

10.4.2. Configuration
<localSearch>
<localSearchType>TABU_SEARCH</localSearchType>
</localSearch>
Advanced configuration:
<localSearch>
...
<acceptor>
<entityTabuSize>7</entityTabuSize>
</acceptor>
<forager>
<acceptedCountLimit>1000</acceptedCountLimit>
</forager>
</localSearch>
A Tabu Search acceptor should be combined with a high acceptedCountLimit, such as 1000.
Planner implements several tabu types:
- Planning entity tabu (recommended) makes the planning entities of recent steps tabu. For example, for N queens it makes the recently moved queens tabu. It's recommended to start with this tabu type.
  <acceptor>
    <entityTabuSize>7</entityTabuSize>
  </acceptor>
  To avoid hard coding the tabu size, configure a tabu ratio, relative to the number of entities, for example 2%:
  <acceptor>
    <entityTabuRatio>0.02</entityTabuRatio>
  </acceptor>
- Planning value tabu makes the planning values of recent steps tabu. For example, for N queens it makes the recently moved to rows tabu.
  <acceptor>
    <valueTabuSize>7</valueTabuSize>
  </acceptor>
  To avoid hard coding the tabu size, configure a tabu ratio, relative to the number of values, for example 2%:
  <acceptor>
    <valueTabuRatio>0.02</valueTabuRatio>
  </acceptor>
- Move tabu makes recent steps tabu. It does not accept a move equal to one of those steps.
  <acceptor>
    <moveTabuSize>7</moveTabuSize>
  </acceptor>
- Undo move tabu makes the undo moves of recent steps tabu.
  <acceptor>
    <undoMoveTabuSize>7</undoMoveTabuSize>
  </acceptor>
- Solution tabu makes recently visited solutions tabu. It does not accept a move that leads to one of those solutions. It requires that the Solution implements equals() and hashCode() properly. If you can spare the memory, don't be cheap on the tabu size.
  <acceptor>
    <solutionTabuSize>1000</solutionTabuSize>
  </acceptor>
  For non-trivial cases, solution tabu is usually useless because the search space size makes it statistically highly unlikely to reach the same solution twice. Therefore its use is not recommended, except for small datasets.
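As a sketch of how an entity tabu list behaves (illustrative, not Planner's implementation): the entities changed in the last entityTabuSize steps may not be changed again until they drop out of the list.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of an entity tabu list as a bounded FIFO of recently changed entities.
public class EntityTabuList {
    private final int tabuSize;
    private final Deque<String> recentEntities = new ArrayDeque<>();

    public EntityTabuList(int tabuSize) {
        this.tabuSize = tabuSize;
    }

    /** Returns true if the entity was changed in one of the last tabuSize steps. */
    public boolean isTabu(String entity) {
        return recentEntities.contains(entity);
    }

    /** Record that a step changed this entity, evicting the oldest entry if full. */
    public void stepTaken(String entity) {
        recentEntities.addLast(entity);
        if (recentEntities.size() > tabuSize) {
            recentEntities.removeFirst();
        }
    }
}
```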
You can even combine tabu types:
<acceptor>
<entityTabuSize>7</entityTabuSize>
<valueTabuSize>3</valueTabuSize>
</acceptor>
10.5. Simulated Annealing
10.5.1. Algorithm Description

The temperature cools down over time, relative to the time gradient of the Termination. In the end, it gradually turns into Hill Climbing, only accepting improving moves.
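The classic acceptance rule can be sketched as follows (illustrative, not Planner's exact implementation): an improving move is always accepted, while a worsening move is accepted with probability e^(-delta / temperature), so the acceptance chance shrinks as the temperature cools towards 0.

```java
import java.util.Random;

// Sketch of the classic Simulated Annealing acceptance rule.
public class SimulatedAnnealingAcceptor {
    public static boolean isAccepted(int moveScore, int lastStepScore,
            double temperature, Random random) {
        int delta = lastStepScore - moveScore; // > 0 means the move worsens the score
        if (delta <= 0) {
            return true; // improving (or equal) moves are always accepted
        }
        // worsening moves are accepted with probability e^(-delta / temperature)
        return random.nextDouble() < Math.exp(-delta / temperature);
    }
}
```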
10.5.2. Configuration
Start with a simulatedAnnealingStartingTemperature set to the maximum score delta a single move can cause. Use the Benchmarker to tweak the value. Advanced configuration:
<localSearch>
...
<acceptor>
<simulatedAnnealingStartingTemperature>2hard/100soft</simulatedAnnealingStartingTemperature>
</acceptor>
<forager>
<acceptedCountLimit>1</acceptedCountLimit>
</forager>
</localSearch>
Simulated Annealing should be combined with a low acceptedCountLimit. The classic algorithm uses an acceptedCountLimit of 1, but often 4 performs better.
Simulated Annealing can be combined with a tabu acceptor at the same time; use a lower tabu size than in a pure Tabu Search configuration:
<localSearch>
...
<acceptor>
<simulatedAnnealingStartingTemperature>2hard/100soft</simulatedAnnealingStartingTemperature>
<entityTabuSize>5</entityTabuSize>
</acceptor>
<forager>
<acceptedCountLimit>1</acceptedCountLimit>
</forager>
</localSearch>
10.6. Late Acceptance
10.6.1. Algorithm Description

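The core acceptance rule can be sketched as follows (a simplified illustration, not Planner's exact implementation; the real acceptor also compares against the last step score): a move is accepted if its score is at least the step score of lateAcceptanceSize steps ago, tracked in a circular buffer.

```java
import java.util.Arrays;

// Sketch of the Late Acceptance rule with a circular buffer of late step scores.
public class LateAcceptanceAcceptor {
    private final int[] previousScores; // circular buffer of late step scores
    private int stepIndex = 0;

    public LateAcceptanceAcceptor(int lateAcceptanceSize, int initialScore) {
        previousScores = new int[lateAcceptanceSize];
        Arrays.fill(previousScores, initialScore);
    }

    /** A move is accepted if it is no worse than the score lateAcceptanceSize steps ago. */
    public boolean isAccepted(int moveScore) {
        return moveScore >= previousScores[stepIndex % previousScores.length];
    }

    /** After a step is taken, its score replaces the oldest buffered score. */
    public void stepTaken(int stepScore) {
        previousScores[stepIndex % previousScores.length] = stepScore;
        stepIndex++;
    }
}
```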
10.6.2. Configuration
<localSearch>
<localSearchType>LATE_ACCEPTANCE</localSearchType>
</localSearch>
Late Acceptance accepts any move that has a score which is higher than the best score of a number of steps ago. That number of steps is the lateAcceptanceSize. Advanced configuration:
<localSearch>
...
<acceptor>
<lateAcceptanceSize>400</lateAcceptanceSize>
</acceptor>
<forager>
<acceptedCountLimit>1</acceptedCountLimit>
</forager>
</localSearch>
Late Acceptance should be combined with a low acceptedCountLimit.
Late Acceptance can be combined with a tabu acceptor at the same time; use a lower tabu size than in a pure Tabu Search configuration:
<localSearch>
...
<acceptor>
<lateAcceptanceSize>400</lateAcceptanceSize>
<entityTabuSize>5</entityTabuSize>
</acceptor>
<forager>
<acceptedCountLimit>1</acceptedCountLimit>
</forager>
</localSearch>
10.7. Step Counting Hill Climbing
10.7.1. Algorithm Description
10.7.2. Configuration
Step Counting Hill Climbing accepts any move that has a score which is higher than a threshold score. Every number of steps (specified by stepCountingHillClimbingSize), the threshold score is set to the step score.
<localSearch>
...
<acceptor>
<stepCountingHillClimbingSize>400</stepCountingHillClimbingSize>
</acceptor>
<forager>
<acceptedCountLimit>1</acceptedCountLimit>
</forager>
</localSearch>
Step Counting Hill Climbing should be combined with a low acceptedCountLimit.
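The threshold behavior can be sketched as follows (illustrative, not Planner's exact implementation): any move scoring at least the threshold is accepted, and the threshold is refreshed every stepCountingHillClimbingSize steps.

```java
// Sketch of Step Counting Hill Climbing acceptance.
public class StepCountingHillClimbingAcceptor {
    private final int size;
    private int threshold;
    private int stepCount = 0;

    public StepCountingHillClimbingAcceptor(int stepCountingHillClimbingSize, int initialScore) {
        this.size = stepCountingHillClimbingSize;
        this.threshold = initialScore;
    }

    /** A move is accepted if it is no worse than the current threshold score. */
    public boolean isAccepted(int moveScore) {
        return moveScore >= threshold;
    }

    /** Every `size` steps, the threshold is reset to the current step score. */
    public void stepTaken(int stepScore) {
        stepCount++;
        if (stepCount % size == 0) {
            threshold = stepScore;
        }
    }
}
```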
10.8. Strategic Oscillation
10.8.1. Algorithm Description
10.8.2. Configuration
Strategic Oscillation is configured with a finalistPodiumType, for example in a Tabu Search configuration:
<localSearch>
...
<acceptor>
<entityTabuSize>7</entityTabuSize>
</acceptor>
<forager>
<acceptedCountLimit>1000</acceptedCountLimit>
<finalistPodiumType>STRATEGIC_OSCILLATION</finalistPodiumType>
</forager>
</localSearch>
These finalistPodiumTypes are supported:
- HIGHEST_SCORE (default): Pick the accepted move with the highest score.
- STRATEGIC_OSCILLATION: Alias for the default strategic oscillation variant.
- STRATEGIC_OSCILLATION_BY_LEVEL: If there is an accepted improving move, pick it. If no such move exists, prefer an accepted move which improves a softer score level over one that doesn't (even if it has a better harder score level). A move is improving if it's better than the last completed step score.
- STRATEGIC_OSCILLATION_BY_LEVEL_ON_BEST_SCORE: Like STRATEGIC_OSCILLATION_BY_LEVEL, but define improving as better than the best score (instead of the last completed step score).
10.9. Using a Custom Termination, MoveSelector, EntitySelector, ValueSelector or Acceptor
You can plug in a custom Termination, MoveSelector, EntitySelector, ValueSelector or Acceptor by extending the abstract class and also the related *Config class.
For example, to use a custom MoveSelector, extend the AbstractMoveSelector class, extend the MoveSelectorConfig class and configure it in the solver configuration.
It's not possible to inject a Termination, ... instance directly (to avoid extending a Config class too) because:
- A
SolverFactorycan build multipleSolverinstances, which each require a distinctTermination, ... instance. - A solver configuration needs to be serializable to and from XML. This makes benchmarking with
PlannerBenchmarkparticularly easy because you can configure differentSolvervariants in XML. - A
Configclass is often easier and clearer to configure. For example:TerminationConfigtranslatesminutesSpentLimitandsecondsSpentLimitintotimeMillisSpentLimit.
Chapter 11. Evolutionary Algorithms
11.1. Overview
11.2. Evolutionary Strategies
11.3. Genetic Algorithms
Chapter 12. Hyperheuristics
12.1. Overview
Chapter 13. Partitioned Search
13.1. Overview

Partition the dataset into multiple pieces and run a separate Solver to solve each piece.
Chapter 14. Benchmarking And Tweaking
14.1. Find The Best Solver Configuration

14.2. Benchmark Configuration
14.2.1. Add Dependency On optaplanner-benchmark
The benchmarker is in a separate artifact called optaplanner-benchmark.
If you use Maven, add a dependency in your pom.xml file:
<dependency>
<groupId>org.optaplanner</groupId>
<artifactId>optaplanner-benchmark</artifactId>
</dependency>
The version must be exactly the same as the optaplanner-core version used (which is automatically the case if you import optaplanner-bom).
If you use ANT, you've probably already copied the required jars from the download zip's binaries directory.
14.2.2. Build And Run A PlannerBenchmark
Build a PlannerBenchmark instance with a PlannerBenchmarkFactory. Configure it with a benchmark configuration XML file, provided as a classpath resource:
PlannerBenchmarkFactory plannerBenchmarkFactory = PlannerBenchmarkFactory.createFromXmlResource(
"org/optaplanner/examples/nqueens/benchmark/nqueensBenchmarkConfig.xml");
PlannerBenchmark plannerBenchmark = plannerBenchmarkFactory.buildPlannerBenchmark();
plannerBenchmark.benchmark();
A benchmark configuration file looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<plannerBenchmark>
<benchmarkDirectory>local/data/nqueens</benchmarkDirectory>
<inheritedSolverBenchmark>
<problemBenchmarks>
...
<inputSolutionFile>data/nqueens/unsolved/32queens.xml</inputSolutionFile>
<inputSolutionFile>data/nqueens/unsolved/64queens.xml</inputSolutionFile>
</problemBenchmarks>
<solver>
...<!-- Common solver configuration -->
</solver>
</inheritedSolverBenchmark>
<solverBenchmark>
<name>Tabu Search</name>
<solver>
...<!-- Tabu Search specific solver configuration -->
</solver>
</solverBenchmark>
<solverBenchmark>
<name>Simulated Annealing</name>
<solver>
...<!-- Simulated Annealing specific solver configuration -->
</solver>
</solverBenchmark>
<solverBenchmark>
<name>Late Acceptance</name>
<solver>
...<!-- Late Acceptance specific solver configuration -->
</solver>
</solverBenchmark>
</plannerBenchmark>
This PlannerBenchmark will try 3 configurations (Tabu Search, Simulated Annealing and Late Acceptance) on 2 data sets (32queens and 64queens), so it will run 6 solvers.
Every <solverBenchmark> element contains a solver configuration and one or more <inputSolutionFile> elements. It will run the solver configuration on each of those unsolved solution files. The element name is optional, because it is generated if absent. The inputSolutionFile is read by a SolutionFileIO (relative to the working directory).
Use a forward slash (/) as the file separator (for example in the element <inputSolutionFile>). That will work on any platform (including Windows).
Do not use a backslash (\) as the file separator: that breaks portability because it does not work on Linux and Mac.
The benchmark report will be written in the directory specified by the <benchmarkDirectory> element (relative to the working directory).
It's recommended that the benchmarkDirectory is a directory ignored by source control and not cleaned by your build system. This way the generated files do not bloat your source control and they aren't lost when doing a build. Usually that directory is called local.
If an Exception or Error occurs in a single benchmark, the entire Benchmarker will not fail fast (unlike everything else in Planner). Instead, the Benchmarker will continue to run all other benchmarks, write the benchmark report and then fail (if there is at least 1 failing single benchmark). The failing benchmarks will be clearly marked as such in the benchmark report.
14.2.2.1. Inherited solver benchmark
To lower verbosity, the common parts of multiple <solverBenchmark> elements are extracted to the <inheritedSolverBenchmark> element. Every property can still be overwritten per <solverBenchmark> element. Note that inherited solver phases such as <constructionHeuristic> or <localSearch> are not overwritten but instead are added to the tail of the solver phases list.
14.2.3. SolutionFileIO: Input And Output Of Solution Files
14.2.3.1. SolutionFileIO Interface
The benchmarker needs to read the input solution of each benchmark and convert it into a Solution. Also, it might need to write the best Solution of each benchmark to an output file. For that it uses a class that implements the SolutionFileIO interface:
public interface SolutionFileIO {
String getInputFileExtension();
String getOutputFileExtension();
Solution read(File inputSolutionFile);
void write(Solution solution, File outputSolutionFile);
}
The SolutionFileIO interface is in the optaplanner-persistence-common jar (which is a dependency of the optaplanner-benchmark jar).
14.2.3.2. XStreamSolutionFileIO: The Default SolutionFileIO
By default, a benchmarker uses an XStreamSolutionFileIO instance to read and write solutions.
Configure your Solution class, which is annotated with XStream annotations:
<problemBenchmarks>
<xStreamAnnotatedClass>org.optaplanner.examples.nqueens.domain.NQueens</xStreamAnnotatedClass>
<inputSolutionFile>data/nqueens/unsolved/32queens.xml</inputSolutionFile>
...
</problemBenchmarks>
Those input files need to have been written with an XStreamSolutionFileIO instance, not just any XStream instance, because the XStreamSolutionFileIO uses a customized XStream instance.
XStream (and XML in general) is a verbose format. Reading or writing very large datasets in this format can cause an OutOfMemoryError and performance degradation.
14.2.3.3. Custom SolutionFileIO
Alternatively, write your own SolutionFileIO implementation and configure it with the solutionFileIOClass element:
<problemBenchmarks>
<solutionFileIOClass>org.optaplanner.examples.machinereassignment.persistence.MachineReassignmentFileIO</solutionFileIOClass>
<inputSolutionFile>data/machinereassignment/import/model_a1_1.txt</inputSolutionFile>
...
</problemBenchmarks>
It's recommended that getInputFileExtension() and getOutputFileExtension() return the same value.
A SolutionFileIO implementation must be thread-safe.
14.2.3.4. Reading An Input Solution From A Database (Or Other Repository)
The benchmark configuration needs an <inputSolutionFile> element for each dataset. There are 2 ways to deal with this if your dataset is in a database or another type of repository:
- Extract the datasets from the database and serialize them to a local file (for example as XML with XStreamSolutionFileIO). Then use those files as <inputSolutionFile> elements.
- For each dataset, create a txt file that holds the unique id of the dataset. Write a custom SolutionFileIO that reads that identifier, connects to the database and extracts the problem identified by that id. Configure those txt files as <inputSolutionFile> elements.
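Only the id-reading half of the second approach is sketched below (DatasetIdReader is a hypothetical helper, not a Planner API; the database lookup itself is omitted):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

// Sketch: the <inputSolutionFile> only holds a dataset id. A custom
// SolutionFileIO would read that id and then load the real dataset
// from the database.
public class DatasetIdReader {
    /** Reads the unique dataset id stored in the txt file. */
    public static String readDatasetId(File inputSolutionFile) {
        try {
            return Files.readString(inputSolutionFile.toPath()).trim();
        } catch (IOException e) {
            throw new IllegalStateException(
                    "Failed to read dataset id from " + inputSolutionFile, e);
        }
    }
}
```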
14.2.4. Warming Up The HotSpot Compiler
Without a warm up, the results of the first (or first few) benchmarks are not reliable, because they lose CPU time on HotSpot JIT compilation (and possibly DRL compilation). To avoid that, add a warm up:
<plannerBenchmark>
  ...
  <warmUpSecondsSpentLimit>30</warmUpSecondsSpentLimit>
  ...
</plannerBenchmark>
14.2.5. Benchmark Blueprint: A Predefined Configuration
To quickly configure and run a benchmark for typical solver configurations, use a solverBenchmarkBluePrint instead of solverBenchmarks:
<?xml version="1.0" encoding="UTF-8"?>
<plannerBenchmark>
<benchmarkDirectory>local/data/nqueens</benchmarkDirectory>
<warmUpSecondsSpentLimit>30</warmUpSecondsSpentLimit>
<inheritedSolverBenchmark>
<problemBenchmarks>
<xStreamAnnotatedClass>org.optaplanner.examples.nqueens.domain.NQueens</xStreamAnnotatedClass>
<inputSolutionFile>data/nqueens/unsolved/32queens.xml</inputSolutionFile>
<inputSolutionFile>data/nqueens/unsolved/64queens.xml</inputSolutionFile>
<problemStatisticType>BEST_SCORE</problemStatisticType>
</problemBenchmarks>
<solver>
<scanAnnotatedClasses/>
<scoreDirectorFactory>
<scoreDefinitionType>SIMPLE</scoreDefinitionType>
<scoreDrl>org/optaplanner/examples/nqueens/solver/nQueensScoreRules.drl</scoreDrl>
<initializingScoreTrend>ONLY_DOWN</initializingScoreTrend>
</scoreDirectorFactory>
<termination>
<minutesSpentLimit>1</minutesSpentLimit>
</termination>
</solver>
</inheritedSolverBenchmark>
<solverBenchmarkBluePrint>
<solverBenchmarkBluePrintType>EVERY_CONSTRUCTION_HEURISTIC_TYPE_WITH_EVERY_LOCAL_SEARCH_TYPE</solverBenchmarkBluePrintType>
</solverBenchmarkBluePrint>
</plannerBenchmark>
The following SolverBenchmarkBluePrintTypes are supported:
- EVERY_CONSTRUCTION_HEURISTIC_TYPE: Run every Construction Heuristic type (First Fit, First Fit Decreasing, Cheapest Insertion, ...).
- EVERY_LOCAL_SEARCH_TYPE: Run every Local Search type (Tabu Search, Late Acceptance, ...) with the default Construction Heuristic.
- EVERY_CONSTRUCTION_HEURISTIC_TYPE_WITH_EVERY_LOCAL_SEARCH_TYPE: Run every Construction Heuristic type with every Local Search type.
14.2.6. Write The Output Solution Of Benchmark Runs
The best solution of each benchmark run can be written to the benchmarkDirectory. By default, this is disabled, because the files are rarely used and considered bloat. Also, on large datasets, writing the best solution of each single benchmark can take quite some time and memory (causing an OutOfMemoryError), especially in a verbose format like XStream XML.
To write the best solutions in the benchmarkDirectory, enable writeOutputSolutionEnabled:
<problemBenchmarks>
...
<writeOutputSolutionEnabled>true</writeOutputSolutionEnabled>
...
</problemBenchmarks>
14.2.7. Benchmark Logging
Benchmark logging is configured like Solver logging.
To separate the log messages of each single benchmark run into a separate file, use the MDC key singleBenchmark.name in a sifting appender. For example with Logback in logback.xml:
<appender name="fileAppender" class="ch.qos.logback.classic.sift.SiftingAppender">
<discriminator>
<key>singleBenchmark.name</key>
<defaultValue>app</defaultValue>
</discriminator>
<sift>
<appender name="fileAppender.${singleBenchmark.name}" class="...FileAppender">
<file>local/log/optaplannerBenchmark-${singleBenchmark.name}.log</file>
...
</appender>
</sift>
</appender>
14.3. Benchmark Report
14.3.1. HTML Report
After running a benchmark, an HTML report will be written in the benchmarkDirectory with the index.html filename. Open it in your browser. It has a nice overview of your benchmark including:
- Summary statistics: graphs and tables
- Problem statistics per inputSolutionFile: graphs and CSV
- Each solver configuration (ranked): handy to copy and paste
- Benchmark information: settings, hardware, ...
The HTML report uses your default locale to format numbers. If you share the benchmark report with someone from another country, consider overwriting the locale accordingly:
<plannerBenchmark>
...
<benchmarkReport>
<locale>en_US</locale>
</benchmarkReport>
...
</plannerBenchmark>
14.3.2. Ranking The Solvers
The benchmark report automatically ranks the solvers. The Solver with rank 0 is called the favorite Solver: it performs best overall, but it might not be the best on every problem. It's recommended to use that favorite Solver in production. To rank the solvers, configure a solverRankingType:
<plannerBenchmark>
...
<benchmarkReport>
<solverRankingType>TOTAL_SCORE</solverRankingType>
</benchmarkReport>
...
</plannerBenchmark>
The following solverRankingTypes are supported:
- TOTAL_SCORE (default): Maximize the overall score, so minimize the overall cost if all solutions would be executed.
- WORST_SCORE: Minimize the worst case scenario.
- TOTAL_RANKING: Maximize the overall ranking. Use this if your datasets differ greatly in size or difficulty, producing a difference in Score magnitude.
Solvers with at least one failed single benchmark do not get a ranking. Solvers with not fully initialized solutions are ranked worse.
You can also use a custom ranking, for example with a Comparator:
<benchmarkReport>
<solverRankingComparatorClass>...TotalScoreSolverRankingComparator</solverRankingComparatorClass>
</benchmarkReport>
Or by implementing a weight factory:
<benchmarkReport>
<solverRankingWeightFactoryClass>...TotalRankSolverRankingWeightFactory</solverRankingWeightFactoryClass>
</benchmarkReport>
14.4. Summary Statistics
14.4.1. Best Score Summary (Graph And Table)
Shows the best score per inputSolutionFile for each solver configuration.
Figure 14.1. Best Score Summary Statistic

14.4.2. Best Score Scalability Summary (Graph)
Shows the best score per problem scale for each solver configuration. The problem scale will report 0 if any @ValueRangeProvider method signature returns ValueRange (instead of CountableValueRange or Collection).
14.4.3. Winning Score Difference Summary (Graph And Table)
Shows the winning score difference per inputSolutionFile for each solver configuration. The winning score difference is the score difference with the score of the winning solver configuration for that particular inputSolutionFile.
14.4.4. Worst Score Difference Percentage (ROI) Summary (Graph and Table)
Shows the return on investment (ROI) per inputSolutionFile for each solver configuration if you'd upgrade from the worst solver configuration for that particular inputSolutionFile.
14.4.5. Average Calculation Count Summary (Graph and Table)
14.4.6. Time Spent Summary (Graph And Table)
Shows the time spent per inputSolutionFile for each solver configuration. This is pointless if it's benchmarking against a fixed time limit.
14.4.7. Time Spent Scalability Summary (Graph)
14.4.8. Best Score Per Time Spent Summary (Graph)
14.5. Statistic Per Dataset (Graph And CSV)
14.5.1. Enable A Problem Statistic
The benchmarker supports outputting problem statistics as graphs and CSV files to the benchmarkDirectory. To configure one, add a problemStatisticType line:
<plannerBenchmark>
<benchmarkDirectory>local/data/nqueens/solved</benchmarkDirectory>
<inheritedSolverBenchmark>
<problemBenchmarks>
...
<problemStatisticType>BEST_SCORE</problemStatisticType>
<problemStatisticType>CALCULATE_COUNT_PER_SECOND</problemStatisticType>
</problemBenchmarks>
...
</inheritedSolverBenchmark>
...
</plannerBenchmark>
Multiple problemStatisticType elements are allowed.
14.5.2. Best Score Over Time Statistic (Graph And CSV)
<problemBenchmarks>
...
<problemStatisticType>BEST_SCORE</problemStatisticType>
</problemBenchmarks>
Figure 14.2. Best Score Over Time Statistic

14.5.3. Step Score Over Time Statistic (Graph And CSV)
<problemBenchmarks>
...
<problemStatisticType>STEP_SCORE</problemStatisticType>
</problemBenchmarks>
Figure 14.3. Step Score Over Time Statistic

14.5.4. Calculate Count Per Second Statistic (Graph And CSV)
<problemBenchmarks>
...
<problemStatisticType>CALCULATE_COUNT_PER_SECOND</problemStatisticType>
</problemBenchmarks>
Figure 14.4. Calculate Count Per Second Statistic

14.5.5. Best Solution Mutation Over Time Statistic (Graph And CSV)
<problemBenchmarks>
...
<problemStatisticType>BEST_SOLUTION_MUTATION</problemStatisticType>
</problemBenchmarks>
Figure 14.5. Best Solution Mutation Over Time Statistic

14.5.6. Move Count Per Step Statistic (Graph And CSV)
<problemBenchmarks>
...
<problemStatisticType>MOVE_COUNT_PER_STEP</problemStatisticType>
</problemBenchmarks>
Figure 14.6. Move Count Per Step Statistic

14.5.7. Memory Use Statistic (Graph And CSV)
<problemBenchmarks>
...
<problemStatisticType>MEMORY_USE</problemStatisticType>
</problemBenchmarks>
Figure 14.7. Memory Use Statistic

14.6. Statistic Per Single Benchmark (Graph And CSV)
14.6.1. Enable A Single Statistic
The benchmarker can also output statistics per single benchmark run as graphs and CSV files to the benchmarkDirectory. To configure one, add a singleStatisticType line:
<plannerBenchmark>
<benchmarkDirectory>local/data/nqueens/solved</benchmarkDirectory>
<inheritedSolverBenchmark>
<problemBenchmarks>
...
<problemStatisticType>...</problemStatisticType>
<singleStatisticType>PICKED_MOVE_TYPE_BEST_SCORE_DIFF</singleStatisticType>
</problemBenchmarks>
...
</inheritedSolverBenchmark>
...
</plannerBenchmark>
Multiple singleStatisticType elements are allowed.
14.6.2. Constraint Match Total Best Score Over Time Statistic (Graph And CSV)
<problemBenchmarks>
...
<singleStatisticType>CONSTRAINT_MATCH_TOTAL_BEST_SCORE</singleStatisticType>
</problemBenchmarks>
Figure 14.8. Constraint Match Total Best Score Diff Over Time Statistic

14.6.3. Constraint Match Total Step Score Over Time Statistic (Graph And CSV)
<problemBenchmarks>
...
<singleStatisticType>CONSTRAINT_MATCH_TOTAL_STEP_SCORE</singleStatisticType>
</problemBenchmarks>
Figure 14.9. Constraint Match Total Step Score Diff Over Time Statistic

14.6.4. Picked Move Type Best Score Diff Over Time Statistic (Graph And CSV)
<problemBenchmarks>
...
<singleStatisticType>PICKED_MOVE_TYPE_BEST_SCORE_DIFF</singleStatisticType>
</problemBenchmarks>
Figure 14.10. Picked Move Type Best Score Diff Over Time Statistic

14.6.5. Picked Move Type Step Score Diff Over Time Statistic (Graph And CSV)
<problemBenchmarks>
...
<singleStatisticType>PICKED_MOVE_TYPE_STEP_SCORE_DIFF</singleStatisticType>
</problemBenchmarks>
Figure 14.11. Picked Move Type Step Score Diff Over Time Statistic

14.7. Advanced Benchmarking
14.7.1. Benchmarking Performance Tricks
14.7.1.1. Parallel Benchmarking On Multiple Threads
<plannerBenchmark> ... <parallelBenchmarkCount>AUTO</parallelBenchmarkCount> ... </plannerBenchmark>
Set parallelBenchmarkCount to AUTO to maximize the reliability and efficiency of the benchmark results.
These parallelBenchmarkCount values are supported:
- 1 (default): Run all benchmarks sequentially.
- AUTO: Let Planner decide how many benchmarks to run in parallel. This formula is based on experience. It's recommended to prefer this over the other parallel enabling options.
- Static number: The number of benchmarks to run in parallel.
<parallelBenchmarkCount>2</parallelBenchmarkCount>
- JavaScript formula: Formula for the number of benchmarks to run in parallel. It can use the variable availableProcessorCount. For example:
<parallelBenchmarkCount>(availableProcessorCount / 2) + 1</parallelBenchmarkCount>
The parallelBenchmarkCount is always limited to the number of available processors. If it's higher, it is automatically decreased.
A parallelBenchmarkCount above 1 (even on AUTO) may overheat your CPU.
The sensors command can help you detect if this is the case. It is available in the package lm_sensors or lm-sensors in most Linux distributions. There are several freeware tools available for Windows too.
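To illustrate how these settings interact, the sketch below resolves a requested parallelBenchmarkCount against the number of available processors. The helper method is hypothetical (it is not part of the Planner API); it only mirrors the capping behaviour and the JavaScript-style formula described above.

```java
public class ParallelBenchmarkCountDemo {

    // Hypothetical helper (not part of the Planner API): caps a requested
    // parallelBenchmarkCount at the number of available processors, as the
    // documentation above describes.
    static int resolveParallelBenchmarkCount(int requested, int availableProcessorCount) {
        if (requested < 1) {
            throw new IllegalArgumentException(
                    "The parallelBenchmarkCount (" + requested + ") must be at least 1.");
        }
        return Math.min(requested, availableProcessorCount);
    }

    public static void main(String[] args) {
        int availableProcessorCount = 8;
        // The formula from the example above: (availableProcessorCount / 2) + 1
        int requested = (availableProcessorCount / 2) + 1;
        System.out.println(resolveParallelBenchmarkCount(requested, availableProcessorCount)); // 5
        // A value above the processor count is automatically decreased:
        System.out.println(resolveParallelBenchmarkCount(16, availableProcessorCount)); // 8
    }
}
```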
14.7.2. Statistical Benchmarking
Figure 14.12. Sub Single Benchmark Summary Statistic

To run a statistical benchmark, add a <subSingleCount> element to an <inheritedSolverBenchmark> element:
<?xml version="1.0" encoding="UTF-8"?>
<plannerBenchmark>
...
<inheritedSolverBenchmark>
...
<solver>
...
</solver>
<subSingleCount>10</subSingleCount>
</inheritedSolverBenchmark>
...
</plannerBenchmark>
It is also possible to set subSingleCount in the individual <solverBenchmark> elements. This will override the configuration in the <inheritedSolverBenchmark> element. subSingleCount is by default set to 1.
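To see why multiple subSingle runs matter, the hedged sketch below computes the average and standard deviation over a set of hypothetical best-score results — the kind of aggregation a statistical benchmark reports to show how noisy a solver configuration is.

```java
import java.util.List;

public class SubSingleStatsDemo {

    // Average of the best scores from several subSingle runs.
    static double mean(List<Integer> scores) {
        double sum = 0;
        for (int score : scores) {
            sum += score;
        }
        return sum / scores.size();
    }

    // Population standard deviation: lower means a more reliable solver config.
    static double standardDeviation(List<Integer> scores) {
        double mean = mean(scores);
        double sumOfSquares = 0;
        for (int score : scores) {
            sumOfSquares += (score - mean) * (score - mean);
        }
        return Math.sqrt(sumOfSquares / scores.size());
    }

    public static void main(String[] args) {
        // Hypothetical best scores of 10 subSingle runs of one benchmark:
        List<Integer> scores = List.of(-120, -115, -130, -118, -122, -119, -125, -117, -121, -123);
        System.out.println(mean(scores));              // -121.0
        System.out.println(standardDeviation(scores)); // roughly 4.1
    }
}
```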
14.7.3. Template Based Benchmarking And Matrix Benchmarking
For example, benchmark 4 entityTabuSize values (5, 7, 11 and 13) combined with 3 acceptedCountLimit values (500, 1000 and 2000), resulting in 12 solver configurations:
<plannerBenchmark>
...
<inheritedSolverBenchmark>
...
</inheritedSolverBenchmark>
<#list [5, 7, 11, 13] as entityTabuSize>
<#list [500, 1000, 2000] as acceptedCountLimit>
<solverBenchmark>
<name>entityTabuSize ${entityTabuSize} acceptedCountLimit ${acceptedCountLimit}</name>
<solver>
<localSearch>
<unionMoveSelector>
<changeMoveSelector/>
<swapMoveSelector/>
</unionMoveSelector>
<acceptor>
<entityTabuSize>${entityTabuSize}</entityTabuSize>
</acceptor>
<forager>
<acceptedCountLimit>${acceptedCountLimit}</acceptedCountLimit>
</forager>
</localSearch>
</solver>
</solverBenchmark>
</#list>
</#list>
</plannerBenchmark>
To build it from such a Freemarker template, use PlannerBenchmarkFactory:
PlannerBenchmarkFactory plannerBenchmarkFactory = PlannerBenchmarkFactory.createFromFreemarkerXmlResource(
"org/optaplanner/examples/cloudbalancing/benchmark/cloudBalancingBenchmarkConfigTemplate.xml.ftl");
PlannerBenchmark plannerBenchmark = plannerBenchmarkFactory.buildPlannerBenchmark();
14.7.4. Benchmark Report Aggregation
The BenchmarkAggregator takes one or more existing benchmarks and merges them into a new benchmark report, without actually running the benchmarks again.

- Report on the impact of code changes: Run the same benchmark configuration before and after the code changes, then aggregate a report.
- Report on the impact of dependency upgrades: Run the same benchmark configuration before and after upgrading the dependency, then aggregate a report.
- Condense a too verbose report: Select only the interesting solver benchmarks from the existing report. This is especially useful on template reports to make the graphs readable.
- Partially rerun a benchmark: Rerun part of an existing report (for example only the failed or invalid solvers), then recreate the original intended report with the new values.
Provide a PlannerBenchmarkFactory to the BenchmarkAggregatorFrame to display the GUI:
public static void main(String[] args) {
PlannerBenchmarkFactory plannerBenchmarkFactory = PlannerBenchmarkFactory.createFromXmlResource(
"org/optaplanner/examples/nqueens/benchmark/nqueensBenchmarkConfig.xml");
BenchmarkAggregatorFrame.createAndDisplay(plannerBenchmarkFactory);
}
Despite that it uses a benchmark configuration as input, it ignores all elements of that configuration except for <benchmarkDirectory> and <benchmarkReport>.
The reports being merged should have been generated with the same Planner version as the BenchmarkAggregator. Using reports from different Planner major or minor versions is not guaranteed to succeed and deliver correct information, because the benchmark report data structure often changes.
Chapter 15. Repeated Planning
15.1. Introduction to Repeated Planning
- Unforeseen fact changes: For example: an employee assigned to a shift calls in sick, an airplane scheduled to take off has a technical delay, or one of the machines or vehicles breaks down. Use backup planning.
- Impossible to assign all entities now: Leave some unassigned. For example: there are 10 shifts at the same time to assign but only 9 employees to handle shifts. Use overconstrained planning.
- Unknown long term future facts: For example: the hospital admissions for the next 2 weeks are reliable, but those for weeks 3 and 4 are less reliable, and those for week 5 and beyond are not worth planning yet. Use continuous planning.
- Constantly changing problem facts: Use real-time planning.
15.2. Backup Planning
15.3. Overconstrained Planning

- Add an additional score level (usually a medium level between the hard and soft level) by switching ScoreDefinition.
- Make the planning variable nullable.
- Add a score constraint on the new level (so usually a medium constraint) to penalize the number of unassigned entities (or a weighted sum of them).
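The steps above can be sketched in plain Java. This is a hedged illustration only — the shift/employee model below is hypothetical, and a real implementation would use a nullable @PlanningVariable plus a HardMediumSoftScore — but it shows the core idea: each entity whose planning variable stays null costs one medium-level point.

```java
import java.util.Arrays;
import java.util.List;

public class OverconstrainedDemo {

    // Medium-level score: -1 per unassigned entity (null planning variable).
    // In real Planner code this would be a score rule, not a plain method.
    static int mediumScore(List<String> shiftAssignments) {
        int unassigned = 0;
        for (String employee : shiftAssignments) {
            if (employee == null) {
                unassigned++;
            }
        }
        return -unassigned;
    }

    public static void main(String[] args) {
        // 3 shifts, but only 2 employees available: 1 shift stays unassigned.
        List<String> assignments = Arrays.asList("Ann", "Beth", null);
        System.out.println(mediumScore(assignments)); // -1
    }
}
```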
15.4. Continuous Planning (Windowed Planning)

15.4.1. Immovable Planning Entities
To make some planning entities immovable, add an entity SelectionFilter that returns true if an entity is movable and false if it is immovable:
public class MovableShiftAssignmentSelectionFilter implements SelectionFilter<ShiftAssignment> {
public boolean accept(ScoreDirector scoreDirector, ShiftAssignment shiftAssignment) {
ShiftDate shiftDate = shiftAssignment.getShift().getShiftDate();
NurseRoster nurseRoster = (NurseRoster) scoreDirector.getWorkingSolution();
return nurseRoster.getNurseRosterInfo().isInPlanningWindow(shiftDate);
}
}
Configure it on the planning entity:
@PlanningEntity(movableEntitySelectionFilter = MovableShiftAssignmentSelectionFilter.class)
public class ShiftAssignment {
...
}
Custom MoveListFactory and MoveIteratorFactory implementations must make sure that they don't move immovable entities.
15.4.2. Nonvolatile Replanning to minimize disruption (Semi-movable Planning Entities)

The entity has both the planning variable machine and its original value originalMachine:
@PlanningEntity(...)
public class ProcessAssignment {
private MrProcess process;
private Machine originalMachine;
private Machine machine;
public Machine getOriginalMachine() {...}
@PlanningVariable(...)
public Machine getMachine() {...}
public boolean isMoved() {
return originalMachine != null && originalMachine != machine;
}
...
}
During planning, the planning variable machine changes. By comparing it with the originalMachine, a change in plan can be penalized:
rule "processMoved"
when
ProcessAssignment(moved == true)
then
scoreHolder.addSoftConstraintMatch(kcontext, -1000);
end
The soft penalty of -1000 means that a better solution is only accepted if it improves the soft score by at least 1000 points per variable changed (or if it improves the hard score).
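The isMoved() logic can be illustrated in isolation. The sketch below is a simplification (it uses equals() on plain Strings instead of references to Machine entities), but the three cases match the rule's behaviour:

```java
public class NonvolatileReplanningDemo {

    // Simplified version of the isMoved() logic above, using Strings
    // instead of Machine references.
    static boolean isMoved(String originalMachine, String machine) {
        return originalMachine != null && !originalMachine.equals(machine);
    }

    public static void main(String[] args) {
        System.out.println(isMoved(null, "m1"));  // false: a new process had no original assignment
        System.out.println(isMoved("m1", "m1"));  // false: unchanged, no penalty
        System.out.println(isMoved("m1", "m2"));  // true: moved, triggers the -1000 soft penalty
    }
}
```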
15.5. Real-time Planning

For example, in a vehicle routing problem, new customers can be added at different times (07:56, 08:02 and 08:45), after the original customer set finished solving at 07:55 and in some cases after the vehicles have already left. Planner can handle such scenarios with ProblemFactChange (in combination with immovable planning entities).
15.5.1. ProblemFactChange
While the Solver is solving, an outside event might want to change one of the problem facts; for example, an airplane is delayed and needs the runway at a later time. Do not change the problem fact instances used by the Solver while it is solving (from another thread or even the same thread), as that will corrupt it. Instead, add a ProblemFactChange to the Solver, which it will execute in the solver thread as soon as possible.
public interface Solver {
...
boolean addProblemFactChange(ProblemFactChange problemFactChange);
boolean isEveryProblemFactChangeProcessed();
...
}
public interface ProblemFactChange {
void doChange(ScoreDirector scoreDirector);
}
For example, to delete a computer:
public void deleteComputer(final CloudComputer computer) {
solver.addProblemFactChange(new ProblemFactChange() {
public void doChange(ScoreDirector scoreDirector) {
CloudBalance cloudBalance = (CloudBalance) scoreDirector.getWorkingSolution();
// First remove the problem fact from all planning entities that use it
for (CloudProcess process : cloudBalance.getProcessList()) {
if (ObjectUtils.equals(process.getComputer(), computer)) {
scoreDirector.beforeVariableChanged(process, "computer");
process.setComputer(null);
scoreDirector.afterVariableChanged(process, "computer");
}
}
// A SolutionCloner does not clone problem fact lists (such as computerList)
// Shallow clone the computerList so only workingSolution is affected, not bestSolution or guiSolution
cloudBalance.setComputerList(new ArrayList<CloudComputer>(cloudBalance.getComputerList()));
// Next remove the problem fact itself
for (Iterator<CloudComputer> it = cloudBalance.getComputerList().iterator(); it.hasNext(); ) {
CloudComputer workingComputer = it.next();
if (ObjectUtils.equals(workingComputer, computer)) {
scoreDirector.beforeProblemFactRemoved(workingComputer);
it.remove(); // remove from list
scoreDirector.afterProblemFactRemoved(workingComputer);
break;
}
}
}
});
}
Any change of a problem fact or planning entity in a ProblemFactChange must be told to the ScoreDirector.
To write a ProblemFactChange correctly, it's important to understand the behaviour of a planning clone:
- Any change in a ProblemFactChange must be done on the Solution instance of scoreDirector.getWorkingSolution(). The workingSolution is a planning clone of the BestSolutionChangedEvent's bestSolution. So the workingSolution in the Solver is never the same instance as the Solution in the rest of your application.
- A planning clone also clones the planning entities and planning entity collections. So any change on the planning entities must happen on the instances held by scoreDirector.getWorkingSolution().
- A planning clone does not clone the problem facts, nor the problem fact collections. Therefore the workingSolution and the bestSolution share the same problem fact instances and the same problem fact list instances. Any problem fact or problem fact list changed by a ProblemFactChange must be problem cloned first (which can imply rerouting references in other problem facts and planning entities). Otherwise, if the workingSolution and bestSolution are used in different threads (for example a solver thread and a GUI event thread), a race condition can occur.
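The third point is the subtle one. A minimal sketch of the shallow-clone discipline, with plain Java lists standing in for the shared problem fact list (the method and names are illustrative, not the Planner API):

```java
import java.util.ArrayList;
import java.util.List;

public class PlanningCloneDemo {

    // Problem-clones the shared list before mutating it, so the caller's
    // (bestSolution's) list instance stays untouched.
    static List<String> removeComputer(List<String> sharedComputerList, String computer) {
        List<String> workingList = new ArrayList<>(sharedComputerList); // shallow clone first
        workingList.remove(computer);
        return workingList;
    }

    public static void main(String[] args) {
        List<String> bestSolutionList = new ArrayList<>(List.of("c1", "c2"));
        List<String> workingSolutionList = removeComputer(bestSolutionList, "c1");
        System.out.println(bestSolutionList);    // [c1, c2] -- unaffected, no race condition
        System.out.println(workingSolutionList); // [c2]
    }
}
```

Mutating the shared list directly instead of cloning it first is exactly the race condition the warning above describes.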
In essence, the Solver stops, runs the ProblemFactChange and restarts. This is a warm start because its initial solution is the adjusted best solution of the previous run. Each solver phase runs again. This implies the construction heuristic runs again, but because few or no planning variables are uninitialized (unless you have a nullable planning variable), it finishes much quicker than in a cold start.
Each Termination resets (both in the solver and phase configuration), but a previous call to terminateEarly() is not undone. Normally however, you won't configure any Termination (except in daemon mode); just call Solver.terminateEarly() when the results are needed. Alternatively, configure a Termination and use daemon mode in combination with BestSolutionChangedEvent as described below.
15.5.2. Daemon: solve() Does Not Return
Putting the Solver in daemon mode has these effects:
- If the Solver's Termination terminates, it does not return from solve() but blocks its thread instead (which frees up CPU power).
  - Except for terminateEarly(), which does make it return from solve(), freeing up system resources and allowing an application to shut down gracefully.
- If a Solver starts with an empty planning entity collection, it waits in the blocked state immediately.
- If a ProblemFactChange is added, it goes into the running state, applies the ProblemFactChange and runs the Solver again.
To use the Solver in daemon mode:
<solver> <daemon>true</daemon> ... </solver>
Call Solver.terminateEarly() when your application needs to shut down, to avoid killing the solver thread unnaturally.
Subscribe to the BestSolutionChangedEvent to process new best solutions found by the solver thread. A BestSolutionChangedEvent doesn't guarantee that every ProblemFactChange has been processed already, nor that the solution is initialized and feasible. To ignore BestSolutionChangedEvents with such invalid solutions, do this:
public void bestSolutionChanged(BestSolutionChangedEvent<CloudBalance> event) {
// Ignore invalid solutions
if (event.isEveryProblemFactChangeProcessed()
&& event.isNewBestSolutionInitialized()
&& event.getNewBestSolution().getScore().isFeasible()) {
...
}
}
Chapter 16. Integration
16.1. Overview
- To read a planning problem from a database (and store the best solution in it), annotate the domain POJOs with JPA annotations.
- To read a planning problem from an XML file (and store the best solution in it), annotate the domain POJOs with XStream or JAXB annotations.
- To expose the Solver as a REST Service that reads the planning problem and responds with the best solution, annotate the domain POJOs with XStream or JAXB annotations and hook the Solver in Camel or RESTEasy.

16.2. Persistent Storage
16.2.1. Database: JPA and Hibernate
Do not confuse JPA's @Entity annotation with Planner's @PlanningEntity annotation. They can both appear on the same class:
@PlanningEntity // OptaPlanner annotation
@Entity // JPA annotation
public class Talk {...}
Add a dependency to the optaplanner-persistence-jpa jar to take advantage of these extra integration features:
16.2.1.1. JPA and Hibernate: Persisting a Score
When a Score is persisted into a relational database, JPA and Hibernate will default to Java-serializing it to a BLOB column. This has several disadvantages:
- The Java serialization format of Score classes is currently not backwards compatible. Upgrading to a newer Planner version can break reading an existing database.
- The score is not easily readable for a query executed in the database console. This is annoying during development.
- The score cannot be used in a SQL or JPA-QL query to filter the results: for example, to query all infeasible schedules.
To avoid these issues, configure it to use INTEGER columns instead, by using the appropriate *ScoreHibernateType for your Score type, for example for a HardSoftScore:
@PlanningSolution
@Entity
@TypeDef(defaultForType = HardSoftScore.class, typeClass = HardSoftScoreHibernateType.class)
public class CloudBalance implements Solution<HardSoftScore> {
@Columns(columns = {@Column(name = "hardScore"), @Column(name = "softScore")})
protected HardSoftScore score;
...
}
Configure the same number of @Column annotations as the number of score levels in the score, otherwise Hibernate will fail fast because a property mapping has the wrong number of columns.
CREATE TABLE CloudBalance(
...
hardScore INTEGER,
softScore INTEGER
);
When using a BigDecimal based Score, specify the precision and scale of the columns to avoid silent rounding:
@PlanningSolution
@Entity
@TypeDef(defaultForType = HardSoftBigDecimalScore.class, typeClass = HardSoftBigDecimalScoreHibernateType.class)
public class CloudBalance implements Solution<HardSoftBigDecimalScore> {
@Columns(columns = {
@Column(name = "hardScore", precision = 10, scale = 5),
@Column(name = "softScore", precision = 10, scale = 5)})
protected HardSoftBigDecimalScore score;
...
}
When using a bendable Score, specify the hard and soft level sizes as parameters:
@PlanningSolution
@Entity
@TypeDef(defaultForType = BendableScore.class, typeClass = BendableScoreHibernateType.class, parameters = {
@Parameter(name = "hardLevelsSize", value = "3"),
@Parameter(name = "softLevelsSize", value = "2")})
public class Schedule implements Solution<BendableScore> {
@Columns(columns = {
@Column(name = "hard0Score"),
@Column(name = "hard1Score"),
@Column(name = "hard2Score"),
@Column(name = "soft0Score"),
@Column(name = "soft1Score")})
protected BendableScore score;
...
}
16.2.1.2. JPA and Hibernate: Planning Cloning
In JPA and Hibernate, there is usually a @ManyToOne relationship from most problem fact classes to the planning solution class. Therefore, the problem fact classes reference the planning solution class, which implies that when the solution is planning cloned, they need to be cloned too. Use a @DeepPlanningClone on each such problem fact class to enforce that:
@PlanningSolution // OptaPlanner annotation
@Entity // JPA annotation
public class Conference {
@OneToMany(mappedBy="conference")
private List<Room> roomList;
...
}
@DeepPlanningClone // OptaPlanner annotation: Force the default planning cloner to planning clone this class too
@Entity // JPA annotation
public class Room {
@ManyToOne
private Conference conference; // Because of this reference, this problem fact needs to be planning cloned too
}
16.2.2. XML or JSON: XStream
Add a dependency to the optaplanner-persistence-xstream jar to take advantage of these extra integration features:
16.2.2.1. XStream: Marshalling a Score
When a Score is marshalled to XML or JSON by the default XStream configuration, it's verbose and ugly. To fix that, configure the XStreamScoreConverter and provide the ScoreDefinition as a parameter:
@PlanningSolution
@XStreamAlias("CloudBalance")
public class CloudBalance implements Solution<HardSoftScore> {
@XStreamConverter(value = XStreamScoreConverter.class, types = {HardSoftScoreDefinition.class})
private HardSoftScore score;
...
}
This generates, for example:
<CloudBalance> ... <score>0hard/-200soft</score> </CloudBalance>
When using a BendableScore, also provide two int parameters to define hardLevelsSize and softLevelsSize:
@PlanningSolution
@XStreamAlias("Schedule")
public class Schedule implements Solution<BendableScore> {
@XStreamConverter(value = XStreamScoreConverter.class, types = {BendableScoreDefinition.class}, ints = {2, 3})
private BendableScore score;
...
}
This generates, for example:
<Schedule> ... <score>0/0/-100/-20/-3</score> </Schedule>
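The slash-separated notation lists the hard levels first, then the soft levels. A hedged sketch of that formatting (this is not the actual XStream converter, just an illustration of the notation):

```java
public class BendableScoreFormatDemo {

    // Formats a bendable score the way the XML output above shows it:
    // hard levels first, then soft levels, slash-separated.
    static String format(int[] hardScores, int[] softScores) {
        StringBuilder result = new StringBuilder();
        for (int hard : hardScores) {
            result.append(hard).append('/');
        }
        for (int soft : softScores) {
            result.append(soft).append('/');
        }
        result.setLength(result.length() - 1); // drop the trailing slash
        return result.toString();
    }

    public static void main(String[] args) {
        // hardLevelsSize = 2 and softLevelsSize = 3, as in the ints = {2, 3} example:
        System.out.println(format(new int[]{0, 0}, new int[]{-100, -20, -3})); // 0/0/-100/-20/-3
    }
}
```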
16.2.3. XML or JSON: JAXB
16.3. SOA and ESB
16.3.1. Camel and Karaf
16.4. Other Environments
16.4.1. JBoss Modules, WildFly and JBoss EAP
Planner's jars can be included in the war file's WEB-INF/lib directory (just like any other dependency), as shown in the optaplanner-webexamples-*.war. However, in this approach the war file can easily grow to several MB in size, which is fine for a one-time deployment, but too heavyweight for frequent redeployments (especially over a slow network connection).
- Navigate to the directory ${WILDFLY_HOME}/modules/system/layers/base/. This directory contains the JBoss modules of WildFly. Create the directory structure org/optaplanner/main for our new module.
- Copy optaplanner-core-${version}.jar and all its direct and transitive dependency jars into that new directory. Use "mvn dependency:tree" on each optaplanner artifact to discover all dependencies.
- Create the file module.xml in that new directory. Give it this content:
<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="org.optaplanner">
  <resources>
    ...
    <resource-root path="kie-api-${version}.jar"/>
    ...
    <resource-root path="optaplanner-core-${version}.jar"/>
    ...
    <resource-root path="."/>
  </resources>
  <dependencies>
    <module name="javaee.api"/>
  </dependencies>
</module>
- Navigate to the deployed war file.
- Remove optaplanner-core-${version}.jar and all its direct and transitive dependency jars from the WEB-INF/lib directory in the war file.
- Create the file jboss-deployment-structure.xml in the WEB-INF/lib directory. Give it this content:
<?xml version="1.0" encoding="UTF-8" ?>
<jboss-deployment-structure>
  <deployment>
    <dependencies>
      <module name="org.optaplanner" export="true"/>
    </dependencies>
  </deployment>
</jboss-deployment-structure>
Because of JBoss Modules' ClassLoader magic, you'll likely need to provide the ClassLoader of your classes during the SolverFactory creation, so it can find the classpath resources (such as the solver config, score DRLs and domain classes) in your jars.
16.4.2. OSGi
The optaplanner-core jar includes OSGi metadata in its MANIFEST.MF file to function properly in an OSGi environment too. Furthermore, the Maven artifact drools-karaf-features (which will be renamed to kie-karaf-features) contains a features.xml file that supports the OSGi feature optaplanner-engine.
Because of OSGi's ClassLoader magic, you'll likely need to provide the ClassLoader of your classes during the SolverFactory creation, so it can find the classpath resources (such as the solver config, score DRLs and domain classes) in your jars.
16.4.3. Android
- Add a dependency to the build.gradle file in your Android project to exclude the org.drools and xmlpull dependencies:
dependencies {
    ...
    compile('org.optaplanner:optaplanner-core:...') {
        exclude group: 'xmlpull'
        exclude group: 'org.drools'
    }
    ...
}
16.5. Integration with Human Planners (Politics)
- The human planner defines and validates the score function.
- Some examples expose a *Parametrization object, which defines the weight for each score constraint. The human planner can then tweak those weights at runtime.
- When the business changes, the score function often needs to change too. The human planner can notify the developers to add, change or remove score constraints.
- The human planner is always in control of Planner.
- As shown in the course scheduling example, the human planner can lock 1 or more planning variables to a specific planning value and make those immovable. Because they are immovable, Planner does not change them: it optimizes the planning around the enforcements made by the human. If the human planner locks all planning variables, he/she sidelines Planner completely.
- In a prototype implementation, the human planner might use this occasionally. But as the implementation matures, it must become obsolete. Do keep the feature alive, though: as a reassurance for the humans, or in case the business changes dramatically one day before the score constraints can be adjusted.
Chapter 17. Development
17.1. Methodology Overview

- Reuse: The examples are reused as integration tests, stress tests and demos. The documentation images are reused as slides.
- Consistent terminology: Each example has a class
App(executable class),Dao(Data Access Object) andPanel(swing UI). - Consistent structure: Each example has the same packages:
domain,persistence,app,solverandswingui. - Real world usefulness: Every feature is used in an example. Most examples are real world use cases with real world constraints, often with real world data.
- Automated testing: There are unit tests, integration tests and stress tests. The test coverage is high.
- Fail fast with an understandable error message: Invalid states are checked as early as possible.
17.2. Development guidelines
- Fail fast. There are several levels of fail fast, from better to worse:
  - Fail fast at compile time. For example: don't accept an Object as a parameter if it needs to be a String or an Integer.
  - Fail fast at startup time. For example: if the configuration parameter needs to be a positive int and it's negative, fail fast.
  - Fail fast at runtime. For example: if the request needs to contain a double between 0.0 and 1.0 and it's bigger than 1.0, fail fast.
  - Fail fast at runtime in assertion mode if the detection performance cost is high. For example: if, after every low level iteration, the variable A needs to be equal to the square root of B, check it if and only if an assert flag is set to true (usually controlled by the EnvironmentMode).
- Exception messages:
  - The Exception message must include the name and state of each relevant variable. For example:
if (fooSize < 0) {
    throw new IllegalArgumentException("The fooSize (" + fooSize + ") of bar (" + this + ") must be positive.");
}
Notice that the output clearly explains what's wrong:
Exception in thread "main" java.lang.IllegalArgumentException: The fooSize (-5) of bar (myBar) must be positive.
    at ...
  - Whenever possible, the Exception message must include context.
  - Whenever the fix is not obvious, the Exception message must include advice, for example:
Exception in thread "main" java.lang.IllegalArgumentException: UndoMove corruption: ...
  1) Check your custom createUndoMove() of ...
  2) Check your custom Variable listeners if ...
    at ...
Chapter 18. Migration Guide
18.1. About the Business Resource Planner Migration
- Simulated Annealing now uses the time gradient of the current step instead of the time gradient of the last step. The impact of this change is negligible.
- On AbstractScore, the methods parseLevelStrings(...) and buildScorePattern(...) have been changed from public to protected. It is highly unlikely that this affects your code.
- The *Descriptor classes have been moved into a descriptor package. SolutionDescriptor.isInitialized(Solution) now requires a ScoreDirector parameter.
- There is now a better alternative to Brute Force: Branch And Bound, see the docs for more information.
- InverseRelationShadowVariableListener has been renamed to SingletonInverseVariableListener. It and InverseRelationShadowVariableDescriptor have moved to the package ...impl.domain.variable.inverserelation.
- The ConstraintOccurrence classes (which were deprecated) have been removed; switch to the ConstraintMatch system.
- The interface Solution has been promoted to the public API. It has also moved package from impl.solution to api.domain.solution.
Previously in *.java:
import org.optaplanner.core.impl.solution.Solution;
Now in *.java:
import org.optaplanner.core.api.domain.solution.Solution;
- All classes in the package impl.move have been moved to impl.heuristic.move. None of them are future-proof enough at this time to be added to the public API. Prefer generic moves whenever possible.
Previously in *.java:
import org.optaplanner.core.impl.move.Move;
import org.optaplanner.core.impl.move.CompositeMove;
import org.optaplanner.core.impl.move.NoChangeMove;
Now in *.java:
import org.optaplanner.core.impl.heuristic.move.Move;
import org.optaplanner.core.impl.heuristic.move.CompositeMove;
import org.optaplanner.core.impl.heuristic.move.NoChangeMove;
- All classpath resources must lose their leading slash, because Business Resource Planner now expects them to adhere to ClassLoader.getResource(String) instead of Class.getResource(String).
- The SolverFactory.createFromXmlResource(String) parameter must lose its leading slash.
Previously in *.java:
... = SolverFactory.createFromXmlResource(
        "/org/optaplanner/examples/cloudbalancing/solver/cloudBalancingSolverConfig.xml");
Now in *.java:
... = SolverFactory.createFromXmlResource(
        "org/optaplanner/examples/cloudbalancing/solver/cloudBalancingSolverConfig.xml");
- All <scoreDrl> elements must lose their leading slash.
Previously in *SolverConfig.xml and *BenchmarkConfig.xml:
<scoreDrl>/org/optaplanner/examples/cloudbalancing/solver/cloudBalancingScoreRules.drl</scoreDrl>
Now in *SolverConfig.xml and *BenchmarkConfig.xml:
<scoreDrl>org/optaplanner/examples/cloudbalancing/solver/cloudBalancingScoreRules.drl</scoreDrl>
- The PlannerBenchmarkFactory.createFromXmlResource(String) parameter must lose its leading slash.
Previously in *.java:
... = PlannerBenchmarkFactory.createFromXmlResource(
        "/org/optaplanner/examples/cloudbalancing/benchmark/cloudBalancingBenchmarkConfig.xml");
Now in *.java:
... = PlannerBenchmarkFactory.createFromXmlResource(
        "org/optaplanner/examples/cloudbalancing/benchmark/cloudBalancingBenchmarkConfig.xml");
- The PlannerBenchmarkFactory.createFromFreemarkerXmlResource(String) parameter must lose its leading slash.
Previously in *.java:
... = PlannerBenchmarkFactory.createFromFreemarkerXmlResource(
        "/org/optaplanner/examples/cloudbalancing/benchmark/cloudBalancingBenchmarkConfigTemplate.xml.ftl");
Now in *.java:
... = PlannerBenchmarkFactory.createFromFreemarkerXmlResource(
        "org/optaplanner/examples/cloudbalancing/benchmark/cloudBalancingBenchmarkConfigTemplate.xml.ftl");
- The @PlanningVariable property chained has been refactored to graphType. This is to allow support for other graph types (such as TREE) in the future.
Previously in *.java:
@PlanningVariable(chained = true, ...)
public Standstill getPreviousStandstill() {
    return previousStandstill;
}
Now in *.java:
@PlanningVariable(graphType = PlanningVariableGraphType.CHAINED, ...)
public Standstill getPreviousStandstill() {
    return previousStandstill;
}
- The constructionHeuristicType BEST_FIT has been renamed to WEAKEST_FIT. The terminology "Best Fit" was not correct and did not allow for STRONGEST_FIT.
Previously in *SolverConfig.xml and *BenchmarkConfig.xml:
<constructionHeuristic>
  <constructionHeuristicType>BEST_FIT</constructionHeuristicType>
</constructionHeuristic>
Now in *SolverConfig.xml and *BenchmarkConfig.xml:
<constructionHeuristic>
  <constructionHeuristicType>WEAKEST_FIT</constructionHeuristicType>
</constructionHeuristic>
- The constructionHeuristicType BEST_FIT_DECREASING has been renamed to WEAKEST_FIT_DECREASING. The terminology "Best Fit" was not correct and did not allow for STRONGEST_FIT_DECREASING.
Previously in *SolverConfig.xml and *BenchmarkConfig.xml:
<constructionHeuristic>
  <constructionHeuristicType>BEST_FIT_DECREASING</constructionHeuristicType>
</constructionHeuristic>
Now in *SolverConfig.xml and *BenchmarkConfig.xml:
<constructionHeuristic>
  <constructionHeuristicType>WEAKEST_FIT_DECREASING</constructionHeuristicType>
</constructionHeuristic>
- For the shadow variable of a bi-directional relationship, the declaration has changed from @PlanningVariable(mappedBy) to @InverseRelationShadowVariable(sourceVariableName).
Previously in *.java:
@PlanningVariable(mappedBy = "previousStandstill")
Customer getNextCustomer();
void setNextCustomer(Customer nextCustomer);
Now in *.java:
@InverseRelationShadowVariable(sourceVariableName = "previousStandstill")
Customer getNextCustomer();
void setNextCustomer(Customer nextCustomer);
- Multiple <planningEntityClass> elements now need to be ordered with superclasses (and superinterfaces) first, instead of superclasses (and superinterfaces) last.
Previously in *SolverConfig.xml and *BenchmarkConfig.xml:
<planningEntityClass>...TimeWindowedCustomer</planningEntityClass>
<planningEntityClass>...Customer</planningEntityClass>
<planningEntityClass>...Standstill</planningEntityClass>
Now in *SolverConfig.xml and *BenchmarkConfig.xml:
<planningEntityClass>...Standstill</planningEntityClass>
<planningEntityClass>...Customer</planningEntityClass>
<planningEntityClass>...TimeWindowedCustomer</planningEntityClass>
- The element <planningEntityClass> has been renamed to <entityClass>.
Previously in *SolverConfig.xml and *BenchmarkConfig.xml:
<planningEntityClass>org.optaplanner.examples.cloudbalancing.domain.CloudProcess</planningEntityClass>
Now in *SolverConfig.xml and *BenchmarkConfig.xml:
<entityClass>org.optaplanner.examples.cloudbalancing.domain.CloudProcess</entityClass>
- XStreamScoreConverter and XStreamBendableScoreConverter have moved package.
Previously in *.java:
import org.optaplanner.persistence.xstream.XStreamScoreConverter;
Now in *.java:
import org.optaplanner.persistence.xstream.impl.score.XStreamScoreConverter;
Previously in *.java:
import org.optaplanner.persistence.xstream.XStreamBendableScoreConverter;
Now in *.java:
import org.optaplanner.persistence.xstream.impl.score.XStreamBendableScoreConverter;
- If you have a custom Move implementation, now extend AbstractMove.
Previously in *.java:
public class CloudComputerChangeMove implements Move {...}
Now in *.java:
public class CloudComputerChangeMove extends AbstractMove {...}
18.2. Planning Values and Value Ranges
18.2.1. ValueRangeProvider
If you have a @ValueRangeProvider that returns a collection of numbers (for example List<Integer> or List<BigDecimal>), then you should switch to a ValueRange, which uses less memory and offers additional opportunities.
Previously in *.java:
@ValueRangeProvider(id = "delayRange")
public List<Integer> getDelayRange() {
    List<Integer> delayRange = new ArrayList<Integer>(5000);
    for (int i = 0; i < 5000; i++) {
        delayRange.add(i);
    }
    return delayRange;
}
Now in *.java:
@ValueRangeProvider(id = "delayRange")
public CountableValueRange<Integer> getDelayRange() {
    return ValueRangeFactory.createIntValueRange(0, 5000);
}
The annotation @ValueRangeProvider has been moved into another package, from:
import org.optaplanner.core.api.domain.value.ValueRangeProvider;
to:
import org.optaplanner.core.api.domain.valuerange.ValueRangeProvider;
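Why the switch saves memory: a materialized List<Integer> stores 5000 boxed integers, while an int value range only needs its two bounds. A hedged sketch of such a range (the IntRange class below is hypothetical, not the real CountableValueRange implementation):

```java
public class IntValueRangeDemo {

    // Hypothetical value range: stores only the bounds, not the values.
    static final class IntRange {
        private final int from;          // inclusive
        private final int toExclusive;   // exclusive

        IntRange(int from, int toExclusive) {
            this.from = from;
            this.toExclusive = toExclusive;
        }

        long size() {
            return (long) toExclusive - from;
        }

        boolean contains(int value) {
            return value >= from && value < toExclusive;
        }
    }

    public static void main(String[] args) {
        // Equivalent of ValueRangeFactory.createIntValueRange(0, 5000):
        IntRange delayRange = new IntRange(0, 5000);
        System.out.println(delayRange.size());          // 5000
        System.out.println(delayRange.contains(4999));  // true
        System.out.println(delayRange.contains(5000));  // false
    }
}
```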
18.2.2. Planning Variables
- The interface PlanningVariableListener has been renamed to VariableListener.
Previously in *.java:
public class VehicleUpdatingVariableListener implements PlanningVariableListener<Customer> {...}
Now in *.java:
public class VehicleUpdatingVariableListener implements VariableListener<Customer> {...}
Previously in *.java:
public class VehicleUpdatingVariableListener extends AbstractPlanningVariableListener<Customer> {...}
Now in *.java:
public class VehicleUpdatingVariableListener implements VariableListener<Customer> {...}
- A shadow variable is now declared on the shadow variable's getter with @CustomShadowVariable, instead of with variableListenerClasses on the genuine @PlanningVariable.
Previously in *.java:
@PlanningVariable(valueRangeProviderRefs = {"vehicleRange", "customerRange"},
        graphType = PlanningVariableGraphType.CHAINED,
        variableListenerClasses = {VehicleUpdatingVariableListener.class, ArrivalTimeUpdatingVariableListener.class})
public Standstill getPreviousStandstill() {
    return previousStandstill;
}

public Vehicle getVehicle() {
    return vehicle;
}

public Integer getArrivalTime() {
    return arrivalTime;
}
Now in *.java:
@PlanningVariable(...)
public Standstill getPreviousStandstill() {
    return previousStandstill;
}

@CustomShadowVariable(variableListenerClass = VehicleUpdatingVariableListener.class,
        sources = {@CustomShadowVariable.Source(variableName = "previousStandstill")})
public Vehicle getVehicle() {
    return vehicle;
}

@CustomShadowVariable(variableListenerClass = ArrivalTimeUpdatingVariableListener.class,
        sources = {@CustomShadowVariable.Source(variableName = "previousStandstill")})
public Integer getArrivalTime() {
    return arrivalTime;
}
- A custom VariableListener that only updates the anchor shadow variable of a chained variable (for example, the vehicle) can be replaced by the built-in @AnchorShadowVariable annotation.
Previously in *.java:
@PlanningEntity
public class Customer implements Standstill {
    @PlanningVariable(...)
    public Standstill getPreviousStandstill() {...}
    @CustomShadowVariable(variableListenerClass = VehicleUpdatingVariableListener.class,
            sources = {@CustomShadowVariable.Source(variableName = "previousStandstill")})
    public Vehicle getVehicle() {...}
}
public class VehicleUpdatingVariableListener implements VariableListener<Customer> {
    ...
}
Now in *.java:
@PlanningEntity
public class Customer implements Standstill {
    @PlanningVariable(...)
    public Standstill getPreviousStandstill() {...}
    @AnchorShadowVariable(sourceVariableName = "previousStandstill")
    public Vehicle getVehicle() {...}
}
18.3. Benchmark
- The phrase "time spend" has been corrected to "time spent". This includes the log output and the benchmark report.
- The <warmUp*> elements have been renamed:
- The element <warmUpTimeMillisSpend> has been renamed to <warmUpMillisecondsSpentLimit>
- The element <warmUpSecondsSpend> has been renamed to <warmUpSecondsSpentLimit>
- The element <warmUpMinutesSpend> has been renamed to <warmUpMinutesSpentLimit>
- The element <warmUpHoursSpend> has been renamed to <warmUpHoursSpentLimit>
Previously in *BenchmarkConfig.xml:
<plannerBenchmark>
<warmUpTimeMillisSpend>...</warmUpTimeMillisSpend>
<warmUpSecondsSpend>...</warmUpSecondsSpend>
<warmUpMinutesSpend>...</warmUpMinutesSpend>
<warmUpHoursSpend>...</warmUpHoursSpend>
...
</plannerBenchmark>
Now in *BenchmarkConfig.xml:
<plannerBenchmark>
<warmUpMillisecondsSpentLimit>...</warmUpMillisecondsSpentLimit>
<warmUpSecondsSpentLimit>...</warmUpSecondsSpentLimit>
<warmUpMinutesSpentLimit>...</warmUpMinutesSpentLimit>
<warmUpHoursSpentLimit>...</warmUpHoursSpentLimit>
...
</plannerBenchmark>
- The class XmlPlannerBenchmarkFactory has been removed and replaced by static methods on PlannerBenchmarkFactory.
Previously in *.java:
PlannerBenchmarkFactory plannerBenchmarkFactory = new XmlPlannerBenchmarkFactory(...);
Now in *.java:
PlannerBenchmarkFactory plannerBenchmarkFactory = PlannerBenchmarkFactory.createFromXmlResource(...);
- If you use addXstreamAnnotations(), take a look at the non-public API class XStreamXmlPlannerBenchmarkFactory.
- The element <xstreamAnnotatedClass> has been renamed to <xStreamAnnotatedClass>.
Previously in *BenchmarkConfig.xml:
<problemBenchmarks>
  <xstreamAnnotatedClass>org.optaplanner.examples.nqueens.domain.NQueens</xstreamAnnotatedClass>
  ...
</problemBenchmarks>
Now in *BenchmarkConfig.xml:
<problemBenchmarks>
  <xStreamAnnotatedClass>org.optaplanner.examples.nqueens.domain.NQueens</xStreamAnnotatedClass>
  ...
</problemBenchmarks>
18.3.1. SolutionFileIO
- The interface ProblemIO has been renamed to SolutionFileIO and has moved package.
Previously in *.java:
import org.optaplanner.core.impl.solution.ProblemIO;
public class MachineReassignmentFileIO implements ProblemIO {
...
}
Now in *.java:
import org.optaplanner.persistence.common.api.domain.solution.SolutionFileIO;
public class MachineReassignmentFileIO implements SolutionFileIO {
...
}
Previously in *SolverConfig.xml and *BenchmarkConfig.xml:
<problemBenchmarks>
  <problemIOClass>...MachineReassignmentProblemIO</problemIOClass>
  ...
</problemBenchmarks>
Now in *SolverConfig.xml and *BenchmarkConfig.xml:
<problemBenchmarks>
  <solutionFileIOClass>...MachineReassignmentFileIO</solutionFileIOClass>
  ...
</problemBenchmarks>
- The method SolutionFileIO.getFileExtension() has been split up into getInputFileExtension() and getOutputFileExtension(). It is still highly recommended to use the same input and output file extension.
Previously in *.java:
public String getFileExtension() {
return FILE_EXTENSION;
}
Now in *.java:
public String getInputFileExtension() {
return FILE_EXTENSION;
}
public String getOutputFileExtension() {
return FILE_EXTENSION;
}
18.4. Solver Configuration
- The method getTimeMillisSpend() has been renamed to getTimeMillisSpent().
Previously in *.java:
... = solver.getTimeMillisSpend();
Now in *.java:
... = solver.getTimeMillisSpent();
Previously in *.java:
public void bestSolutionChanged(BestSolutionChangedEvent event) {
    ... = event.getTimeMillisSpend();
}
Now in *.java:
public void bestSolutionChanged(BestSolutionChangedEvent event) {
    ... = event.getTimeMillisSpent();
}
- The element <bruteForce> has been replaced by <exhaustiveSearch>'s BRUTE_FORCE type.
Previously in *SolverConfig.xml and *BenchmarkConfig.xml:
<bruteForce/>
Now in *SolverConfig.xml and *BenchmarkConfig.xml:
<exhaustiveSearch>
<exhaustiveSearchType>BRUTE_FORCE</exhaustiveSearchType>
</exhaustiveSearch>
- The methods setPlanningProblem(Solution) and solve() have been merged into the single method solve(Solution).
Previously in *.java:
solver.setPlanningProblem(planningProblem);
solver.solve();
Now in *.java:
solver.solve(planningProblem);
- The method solve(Solution) does not return the best solution; call solver.getBestSolution() to retrieve it instead. That is intentional, due to real-time planning and to support Pareto optimization in the future.
- The class XmlSolverFactory (which was not part of the public API) has been removed and replaced by static methods on SolverFactory (which are part of the public API).
Previously in *.java:
SolverFactory solverFactory = new XmlSolverFactory("...solverConfig.xml");
Now in *.java:
SolverFactory solverFactory = SolverFactory.createFromXmlResource("...solverConfig.xml");
Previously in *.java:
SolverFactory solverFactory = new XmlSolverFactory().configure(inputStream);
Now in *.java:
SolverFactory solverFactory = SolverFactory.createFromXmlInputStream(inputStream);
Previously in *.java:
SolverFactory solverFactory = new XmlSolverFactory().configure(reader);
Now in *.java:
SolverFactory solverFactory = SolverFactory.createFromXmlReader(reader);
- If you use addXstreamAnnotations(), take a look at the non-public API class XStreamXmlSolverFactory.
- The interface CustomSolverPhaseCommand has been renamed to CustomPhaseCommand.
- The element <customSolverPhase> has been renamed to <customPhase>.
- The element <customSolverPhaseCommandClass> has been renamed to <customPhaseCommandClass>.
Previously in *.java:
public class ToOriginalMachineSolutionInitializer implements CustomSolverPhaseCommand {
    ...
}
Now in *.java:
public class ToOriginalMachineSolutionInitializer implements CustomPhaseCommand {
    ...
}
Previously in *SolverConfig.xml and *BenchmarkConfig.xml:
<customSolverPhase>
  <customSolverPhaseCommandClass>...ToOriginalMachineSolutionInitializer</customSolverPhaseCommandClass>
</customSolverPhase>
Now in *SolverConfig.xml and *BenchmarkConfig.xml:
<customPhase>
  <customPhaseCommandClass>...ToOriginalMachineSolutionInitializer</customPhaseCommandClass>
</customPhase>
- The method ScoreDefinition.getLevelCount() has been renamed to ScoreDefinition.getLevelsSize().
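A minimal sketch of what the renamed accessor reports, using a stub (illustrative only, not the real ScoreDefinition interface): for a bendable score with 2 hard and 3 soft levels, the levels size is the total number of score levels.

```java
// Illustrative stub, not the real OptaPlanner ScoreDefinition.
class StubBendableScoreDefinition {
    private final int hardLevelsSize;
    private final int softLevelsSize;

    StubBendableScoreDefinition(int hardLevelsSize, int softLevelsSize) {
        this.hardLevelsSize = hardLevelsSize;
        this.softLevelsSize = softLevelsSize;
    }

    // Previously named getLevelCount()
    public int getLevelsSize() {
        return hardLevelsSize + softLevelsSize;
    }
}
```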
18.5. Optimization
18.5.1. Termination
- Several elements of <termination> have been renamed:
  - The element <maximumTimeMillisSpend> has been renamed to <millisecondsSpentLimit>
  - The element <maximumSecondsSpend> has been renamed to <secondsSpentLimit>
  - The element <maximumMinutesSpend> has been renamed to <minutesSpentLimit>
  - The element <maximumHoursSpend> has been renamed to <hoursSpentLimit>
  - The element <scoreAttained> has been renamed to <bestScoreLimit>
  - The element <maximumStepCount> has been renamed to <stepCountLimit>
  - The element <maximumUnimprovedStepCount> has been renamed to <unimprovedStepCountLimit>
Previously in *SolverConfig.xml and *BenchmarkConfig.xml:
<termination>
<maximumTimeMillisSpend>...</maximumTimeMillisSpend>
<maximumSecondsSpend>...</maximumSecondsSpend>
<maximumMinutesSpend>...</maximumMinutesSpend>
<maximumHoursSpend>...</maximumHoursSpend>
<scoreAttained>...</scoreAttained>
<maximumStepCount>...</maximumStepCount>
<maximumUnimprovedStepCount>...</maximumUnimprovedStepCount>
</termination>
Now in *SolverConfig.xml and *BenchmarkConfig.xml:
<termination>
<millisecondsSpentLimit>...</millisecondsSpentLimit>
<secondsSpentLimit>...</secondsSpentLimit>
<minutesSpentLimit>...</minutesSpentLimit>
<hoursSpentLimit>...</hoursSpentLimit>
<bestScoreLimit>...</bestScoreLimit>
<stepCountLimit>...</stepCountLimit>
<unimprovedStepCountLimit>...</unimprovedStepCountLimit>
</termination>
18.5.2. Events
- The classes BestSolutionChangedEvent and SolverEventListener have moved from the package impl.event to the package api.solver.event. They are now part of the public API.
Previously in *.java:
import org.optaplanner.core.impl.event.BestSolutionChangedEvent;
import org.optaplanner.core.impl.event.SolverEventListener;
Now in *.java:
import org.optaplanner.core.api.solver.event.BestSolutionChangedEvent;
import org.optaplanner.core.api.solver.event.SolverEventListener;
18.5.3. Score Trends
- Specify the new element <initializingScoreTrend> in the <scoreDirectorFactory> to increase the performance of some algorithms (Construction Heuristics and Exhaustive Search). See the documentation on InitializingScoreTrend to learn when to use ANY, ONLY_UP, or ONLY_DOWN.
Previously in *SolverConfig.xml and *BenchmarkConfig.xml:
<scoreDirectorFactory>
<scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
<scoreDrl>.../cloudBalancingScoreRules.drl</scoreDrl>
</scoreDirectorFactory>
Now in *SolverConfig.xml and *BenchmarkConfig.xml:
<scoreDirectorFactory>
<scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
<scoreDrl>.../cloudBalancingScoreRules.drl</scoreDrl>
<initializingScoreTrend>ONLY_DOWN</initializingScoreTrend>
</scoreDirectorFactory>
- There is no need to explicitly configure <pickEarlyType> FIRST_NON_DETERIORATING_SCORE together with <initializingScoreTrend> ONLY_DOWN. If the <initializingScoreTrend> is specified, the <constructionHeuristic> will automatically use the most appropriate <pickEarlyType>.
Previously in *SolverConfig.xml and *BenchmarkConfig.xml:
<scoreDirectorFactory>
...
</scoreDirectorFactory>
...
<constructionHeuristic>
<constructionHeuristicType>FIRST_FIT_DECREASING</constructionHeuristicType>
<forager>
<pickEarlyType>FIRST_NON_DETERIORATING_SCORE</pickEarlyType>
</forager>
</constructionHeuristic>
Now in *SolverConfig.xml and *BenchmarkConfig.xml:
<scoreDirectorFactory>
...
<initializingScoreTrend>ONLY_DOWN</initializingScoreTrend>
</scoreDirectorFactory>
...
<constructionHeuristic>
<constructionHeuristicType>FIRST_FIT_DECREASING</constructionHeuristicType>
</constructionHeuristic>
18.5.4. Score Calculator
- The class SimpleScoreCalculator has been renamed to EasyScoreCalculator to avoid confusion with SimpleScore: it can return other Score types too. The package name has also changed.
Previously in *.java:
import org.optaplanner.core.impl.score.director.simple.SimpleScoreCalculator;

public class CloudBalancingEasyScoreCalculator implements SimpleScoreCalculator<CloudBalance> {
    ...
}
Now in *.java:
import org.optaplanner.core.impl.score.director.easy.EasyScoreCalculator;

public class CloudBalancingEasyScoreCalculator implements EasyScoreCalculator<CloudBalance> {
    ...
}
Previously in *SolverConfig.xml and *BenchmarkConfig.xml:
<simpleScoreCalculatorClass>org.optaplanner.examples.cloudbalancing.solver.score.CloudBalancingEasyScoreCalculator</simpleScoreCalculatorClass>
Now in *SolverConfig.xml and *BenchmarkConfig.xml:
<easyScoreCalculatorClass>org.optaplanner.examples.cloudbalancing.solver.score.CloudBalancingEasyScoreCalculator</easyScoreCalculatorClass>
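The "easy" in EasyScoreCalculator refers to the calculation style: the whole score is recomputed from scratch on every call, with no incremental deltas. A plain-Java sketch of that style (the class and method names below are illustrative, not the real OptaPlanner API):

```java
import java.util.Map;

// Illustrative sketch of the non-incremental ("easy") style: the hard score
// is recomputed in full from the current assignment every time.
class EasyScoreSketch {
    /** Negative hard score: sum of CPU capacity excess over all computers. */
    static int calculateHardScore(Map<String, Integer> usedCpuPerComputer,
                                  Map<String, Integer> cpuCapacityPerComputer) {
        int hardScore = 0;
        for (Map.Entry<String, Integer> entry : usedCpuPerComputer.entrySet()) {
            int capacity = cpuCapacityPerComputer.getOrDefault(entry.getKey(), 0);
            if (entry.getValue() > capacity) {
                hardScore -= entry.getValue() - capacity; // penalize overuse
            }
        }
        return hardScore;
    }
}
```

Recomputing from scratch is simple to write but slower than incremental calculation, which is why this style suits prototyping.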
BendableScore configuration has changed: ...LevelCount has been renamed to ...LevelsSize.
Previously in *SolverConfig.xml and *BenchmarkConfig.xml:
<scoreDirectorFactory>
  <scoreDefinitionType>BENDABLE</scoreDefinitionType>
  <bendableHardLevelCount>2</bendableHardLevelCount>
  <bendableSoftLevelCount>3</bendableSoftLevelCount>
  ...
</scoreDirectorFactory>
Now in *SolverConfig.xml and *BenchmarkConfig.xml:
<scoreDirectorFactory>
  <scoreDefinitionType>BENDABLE</scoreDefinitionType>
  <bendableHardLevelsSize>2</bendableHardLevelsSize>
  <bendableSoftLevelsSize>3</bendableSoftLevelsSize>
  ...
</scoreDirectorFactory>
Appendix A. Revision History
Revision 6.2.0-5    Thu Apr 28 2016
Revision 6.2.0-4    Thu Apr 28 2016
Revision 6.2.0-3    Tue Mar 29 2016
Revision 6.2.0-2    Mon Nov 30 2015
Revision 6.2.0-1    Mon Nov 30 2015