Chapter 5. Score Calculation

5.1. Score Terminology

5.1.1. What is a Score?

Every initialized Solution has a score. The score is an objective way to compare two solutions. The solution with the higher score is better. The Solver aims to find the Solution with the highest Score of all possible solutions. The best solution is the Solution with the highest Score that Solver has encountered during solving, which might be the optimal solution.

Planner cannot automatically know which Solution is best for your business, so you need to tell it how to calculate the score of a given Solution according to your business needs. If you forget or are unable to implement an important business constraint, the solution is probably useless:

[figure: optimalWithIncompleteConstraints]

Defining constraints in Planner is very flexible through the following score techniques:

  • Score signum (positive or negative): maximize or minimize a constraint type
  • Score weight: put a cost/profit on a constraint type
  • Score level (hard, soft, …​): prioritize a group of constraint types
  • Pareto scoring

5.1.2. Score Constraint Signum (Positive or Negative)

All score techniques are based on constraints. A constraint can be a simple pattern (such as Maximize the apple harvest in the solution) or a more complex pattern. A positive constraint is a constraint you want to maximize. A negative constraint is a constraint you want to minimize.

[figure: positiveAndNegativeConstraints]

The image above illustrates that the optimal solution always has the highest score, regardless if the constraints are positive or negative.

Most planning problems have only negative constraints and therefore have a negative score. In that case, the score is the sum of the weight of the negative constraints being broken, with a perfect score of 0. This explains why the score of a solution of four queens is the negative of the number of queen pairs which can attack each other.

Negative and positive constraints can be combined, even in the same score level.

Note

Do not presume that your business knows all its score constraints in advance. Expect score constraints to be added or changed after the first releases.

When a constraint activates (because the negative constraint is broken or the positive constraint is fulfilled) on a certain planning entity set, it is called a constraint match.

5.1.3. Score Constraint Weight

Not all score constraints are equally important. If breaking one constraint is equally bad as breaking another constraint x times, then those two constraints have a different weight (but they are in the same score level). For example in vehicle routing, you can make one “unhappy driver” constraint match count as much as two “fuel tank usage” constraint matches:

[figure: scoreWeighting]

Score weighting is often used in use cases where you can put a price tag on everything. In that case, the positive constraints maximize revenue and the negative constraints minimize expenses, so together they maximize profit. Alternatively, score weighting is also often used to create social fairness. For example, a nurse who requests a free day pays a higher weight on New Year's Eve than on a normal day.

Putting a good weight on a constraint can be a difficult analytical decision, because it is about making choices and tradeoffs with other constraints. However, an inaccurate weight is less damaging than a mediocre algorithm:

[figure: scoreTradeoffInPerspective]

The weight of a constraint match can be dynamically based on the planning entities involved. For example in cloud balance, the weight of the soft constraint match for an active Computer is the cost of that Computer (which differs per computer).
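
For example, a minimal sketch of such a dynamic weight (assuming the CloudComputer and CloudProcess classes and their getters from the cloud balancing example):

import java.util.List;

public class CloudUsedComputerCostCalculator {

    // Minimal sketch: the soft penalty per used computer equals that computer's cost,
    // so the weight of each constraint match differs per computer.
    public static int calculateUsedComputerCost(List<CloudComputer> computerList,
            List<CloudProcess> processList) {
        int softScore = 0;
        for (CloudComputer computer : computerList) {
            boolean used = false;
            for (CloudProcess process : processList) {
                if (computer.equals(process.getComputer())) {
                    used = true;
                    break;
                }
            }
            if (used) {
                softScore -= computer.getCost(); // dynamic weight: differs per computer
            }
        }
        return softScore;
    }

}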

Note

Furthermore, it is often useful to allow the end-user to recalibrate constraint weights in the user interface, as demonstrated in the exam timetabling example with the InstitutionParametrization class.

5.1.4. Score Constraint Level (hard, soft, …​)

Sometimes a score constraint outranks another score constraint, no matter how many times the other is broken. In that case, those score constraints are in different levels. For example, a nurse cannot do two shifts at the same time (due to the constraints of physical reality); this constraint outranks all nurse happiness constraints.

Most use cases have only two score levels, hard and soft. Two scores are compared lexicographically: the first score level gets compared first and, if those differ, the other score levels are ignored. For example, a score that breaks 0 hard constraints and 1 000 000 soft constraints is better than a score that breaks 1 hard constraint and 0 soft constraints.

[figure: scoreLevels]

If there are two (or more) score levels, for example a hard and soft level, then a score is feasible if no hard constraints are broken.
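
For example, a minimal sketch with the built-in HardSoftScore (assuming the valueOf, compareTo and isFeasible methods of that class):

import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;

public class ScoreLevelDemo {

    public static void main(String[] args) {
        HardSoftScore manySoftBroken = HardSoftScore.valueOf(0, -1000000);
        HardSoftScore oneHardBroken = HardSoftScore.valueOf(-1, 0);
        // The hard level is compared first, so breaking a million soft constraints
        // still beats breaking a single hard constraint.
        System.out.println(manySoftBroken.compareTo(oneHardBroken) > 0); // true
        // A score is feasible if no hard constraints are broken.
        System.out.println(manySoftBroken.isFeasible()); // true
        System.out.println(oneHardBroken.isFeasible()); // false
    }

}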

Note

By default, Planner will always assign all planning variables a planning value. If there is no feasible solution, this means the best solution will be infeasible. To instead leave some of the planning entities unassigned, apply overconstrained planning.

For each constraint, you need to pick a score level, a score weight and a score signum. For example, -1soft has a score level of soft, a weight of 1 and a negative signum. Do not use a big constraint weight when your business actually wants different score levels. That hack, known as score folding, is broken:

[figure: scoreFoldingIsBroken]

Note

Your business might tell you that your hard constraints all have the same weight, because they cannot be broken (so the weight does not matter). This is not true because if no feasible solution exists for a specific dataset, the least infeasible solution allows the business to estimate how many business resources they are lacking. For example in cloud balancing, how many new computers to buy.

Furthermore, it will likely create a score trap. For example in cloud balance if a Computer has seven CPU too little for its Processes, then it must be weighted seven times as much as if it had only one CPU too little.

Three or more score levels are supported. For example: a company might decide that profit outranks employee satisfaction (or vice versa), while both are outranked by the constraints of physical reality.

Note

To model fairness or load balancing, there is no need to use lots of score levels (even though Planner can handle many score levels).

5.1.5. Pareto Scoring (AKA Multi-objective Optimization Scoring)

Far less common is the use case of pareto optimization, which is also known under the more confusing term multi-objective optimization. In pareto scoring, score constraints are in the same score level, yet they are not weighted against each other. When two scores are compared, each of the score constraints are compared individually and the score with the most dominating score constraints wins. Pareto scoring can even be combined with score levels and score constraint weighting.

Consider this example with positive constraints, where we want to get the most apples and oranges. Since it is impossible to compare apples and oranges, we cannot weigh them against each other. Yet, despite that we cannot compare them, we can state that two apples are better than one apple. Similarly, we can state that two apples and one orange are better than just one orange. So despite our inability to compare some Scores conclusively (at which point we declare them equal), we can find a set of optimal scores. Those are called pareto optimal.
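
To make the dominance idea concrete, here is a small standalone sketch (plain Java, not Planner API) that compares two apple/orange tallies the pareto way:

public class ParetoComparisonDemo {

    // Returns 1 if a dominates b, -1 if b dominates a, 0 if neither dominates (considered equal).
    static int paretoCompare(int[] a, int[] b) {
        boolean aBetterSomewhere = false;
        boolean bBetterSomewhere = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) {
                aBetterSomewhere = true;
            }
            if (a[i] < b[i]) {
                bBetterSomewhere = true;
            }
        }
        if (aBetterSomewhere && !bBetterSomewhere) {
            return 1;
        }
        if (bBetterSomewhere && !aBetterSomewhere) {
            return -1;
        }
        return 0; // neither solution dominates the other
    }

    public static void main(String[] args) {
        System.out.println(paretoCompare(new int[]{2, 0}, new int[]{1, 0})); // 1: two apples beat one apple
        System.out.println(paretoCompare(new int[]{2, 1}, new int[]{0, 1})); // 1: two apples and an orange beat one orange
        System.out.println(paretoCompare(new int[]{3, 1}, new int[]{1, 6})); // 0: solutions A and B are incomparable
    }

}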

[figure: paretoOptimizationScoring]

With pareto scoring, scores are considered equal far more often. It is left up to a human to choose the better out of a set of best solutions (with equal scores) found by Planner. In the example above, the user must choose between solution A (three apples and one orange) and solution B (one apple and six oranges). It is guaranteed that Planner has not found another solution which has more apples or more oranges or even a better combination of both (such as two apples and three oranges).

To implement pareto scoring in Planner, implement a custom ScoreDefinition and Score (and replace the BestSolutionRecaller). Future versions will provide out-of-the-box support.

Note

A pareto Score's compareTo method is not transitive because it does a pareto comparison. For example: having two apples is greater than one apple. One apple is equal to one orange. Yet, two apples are not greater than one orange (but actually equal). Pareto comparison violates the contract of the interface java.lang.Comparable's compareTo method, but Planner’s systems are pareto comparison safe, unless explicitly stated otherwise in this documentation.

5.1.6. Combining Score Techniques

All the score techniques mentioned above, can be combined seamlessly:

[figure: scoreComposition]

5.1.7. Score interface

A score is represented by the Score interface, which naturally extends Comparable:

public interface Score<...> extends Comparable<...> {
    ...
}

The Score implementation to use depends on your use case. Your score might not efficiently fit in a single long value. Planner has several built-in Score implementations, but you can implement a custom Score too. Most use cases tend to use the built-in HardSoftScore.

[figure: scoreClassDiagram]

The Score implementation (for example HardSoftScore) must be the same throughout a Solver runtime. The Score implementation is configured in the solver configuration as a ScoreDefinition:

  <scoreDirectorFactory>
    <scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
    ...
  </scoreDirectorFactory>

5.1.8. Avoid Floating Point Numbers in Score Calculation

Avoid the use of float and double for score calculation. Use BigDecimal instead.

Floating point numbers (float and double) cannot represent a decimal number correctly. For example: a double cannot hold the value 0.05 correctly. Instead, it holds the nearest representable value. Arithmetic (including addition and subtraction) with floating point numbers, especially for planning problems, leads to incorrect decisions:

[figure: scoreWeightType]

Additionally, floating point number addition is not associative:

System.out.println( ((0.01 + 0.02) + 0.03) == (0.01 + (0.02 + 0.03)) ); // prints false

This leads to score corruption.

Decimal numbers (BigDecimal) have none of these problems.
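
For example, the same addition as above is exact with BigDecimal (a minimal sketch using java.math.BigDecimal):

import java.math.BigDecimal;

public class DecimalAdditionDemo {

    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("0.01").add(new BigDecimal("0.02")).add(new BigDecimal("0.03"));
        BigDecimal b = new BigDecimal("0.01").add(new BigDecimal("0.02").add(new BigDecimal("0.03")));
        System.out.println(a.equals(b)); // prints true: both are exactly 0.06
    }

}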

Note

BigDecimal arithmetic is considerably slower than int, long or double arithmetic. In experiments we have seen the average calculation count get divided by 5.

Therefore, in some cases, it can be worthwhile to multiply all numbers for a single score weight by a power of ten (for example 1000), so the score weight fits in an int or long.
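
A minimal sketch of that trick, keeping monetary weights as long values in a smaller unit so the score stays integral:

public class ScaledWeightDemo {

    public static void main(String[] args) {
        // Instead of summing 0.07 and 0.13 euro as doubles, multiply every weight by 1000
        // up front and keep the score in a long (here: milli-euro).
        long costA = 70; // 0.07 euro
        long costB = 130; // 0.13 euro
        long softScore = -(costA + costB);
        System.out.println(softScore); // -200 milli-euro, exact integer arithmetic
    }

}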

5.2. Choose a Score Definition

Each Score implementation also has a ScoreDefinition implementation. For example: SimpleScore is defined by SimpleScoreDefinition.

Note

To properly write a Score to database (with JPA/Hibernate) or to XML/JSON (with XStream/JAXB), see the integration chapter.

5.2.1. SimpleScore

A SimpleScore has a single int value, for example -123. It has a single score level.

  <scoreDirectorFactory>
    <scoreDefinitionType>SIMPLE</scoreDefinitionType>
    ...
  </scoreDirectorFactory>

Variants of this scoreDefinitionType:

  • SIMPLE_LONG: Uses SimpleLongScore which has a long value instead of an int value.
  • SIMPLE_DOUBLE: Uses SimpleDoubleScore which has a double value instead of an int value. Not recommended to use.
  • SIMPLE_BIG_DECIMAL: Uses SimpleBigDecimalScore which has a BigDecimal value instead of an int value.

5.2.2. HardSoftScore (Recommended)

A HardSoftScore has a hard int value and a soft int value, for example -123hard/-456soft. It has 2 score levels (hard and soft).

  <scoreDirectorFactory>
    <scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
    ...
  </scoreDirectorFactory>

Variants of this scoreDefinitionType:

  • HARD_SOFT_LONG: Uses HardSoftLongScore which has long values instead of int values.
  • HARD_SOFT_DOUBLE: Uses HardSoftDoubleScore which has double values instead of int values. Not recommended to use.
  • HARD_SOFT_BIG_DECIMAL: Uses HardSoftBigDecimalScore which has BigDecimal values instead of int values.
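
For example, the -123hard/-456soft score mentioned above can be constructed or parsed as follows (a minimal sketch, assuming the valueOf and parseScore factory methods of HardSoftScore):

import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;

public class HardSoftScoreDemo {

    public static void main(String[] args) {
        HardSoftScore constructed = HardSoftScore.valueOf(-123, -456);
        HardSoftScore parsed = HardSoftScore.parseScore("-123hard/-456soft");
        System.out.println(constructed.equals(parsed)); // true
        System.out.println(constructed.getHardScore()); // -123
        System.out.println(constructed.getSoftScore()); // -456
    }

}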

5.2.3. HardMediumSoftScore

A HardMediumSoftScore has a hard int value, a medium int value and a soft int value, for example -123hard/-456medium/-789soft. It has 3 score levels (hard, medium and soft).

  <scoreDirectorFactory>
    <scoreDefinitionType>HARD_MEDIUM_SOFT</scoreDefinitionType>
    ...
  </scoreDirectorFactory>

Variants of this scoreDefinitionType:

  • HARD_MEDIUM_SOFT_LONG: Uses HardMediumSoftLongScore which has long values instead of int values.

5.2.4. BendableScore

A BendableScore has a configurable number of score levels. It has an array of hard int values and an array of soft int values. For example, with 2 hard levels and 3 soft levels, the score can be -123/-456/-789/-12/-345.

  <scoreDirectorFactory>
    <scoreDefinitionType>BENDABLE</scoreDefinitionType>
    <bendableHardLevelsSize>2</bendableHardLevelsSize>
    <bendableSoftLevelsSize>3</bendableSoftLevelsSize>
    ...
  </scoreDirectorFactory>

The number of hard and soft score levels must be set at configuration time; it cannot change during solving.
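
Accordingly, a BendableScore is built from one array per group, and the array lengths must match the configured level sizes (a minimal sketch, assuming the BendableScore.valueOf(int[], int[]) factory method):

import org.optaplanner.core.api.score.buildin.bendable.BendableScore;

public class BendableScoreDemo {

    public static void main(String[] args) {
        // 2 hard levels and 3 soft levels, matching the configuration above
        BendableScore score = BendableScore.valueOf(
                new int[]{-123, -456},
                new int[]{-789, -12, -345});
        System.out.println(score.getHardLevelsSize()); // 2
        System.out.println(score.getSoftLevelsSize()); // 3
    }

}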

Variants of this scoreDefinitionType:

  • BENDABLE_LONG: Uses BendableLongScore which has long values instead of int values.
  • BENDABLE_BIG_DECIMAL: Uses BendableBigDecimalScore which has BigDecimal values instead of int values.

5.2.5. Implementing a Custom Score

The ScoreDefinition interface defines the score representation.

To implement a custom Score, you will also need to implement a custom ScoreDefinition. Extend AbstractScoreDefinition (preferably by copy pasting HardSoftScoreDefinition) and start from there.

Then hook your custom ScoreDefinition in your SolverConfig.xml :

  <scoreDirectorFactory>
    <scoreDefinitionClass>...MyScoreDefinition</scoreDefinitionClass>
    ...
  </scoreDirectorFactory>

To have it integrate seamlessly with JPA/Hibernate, XStream, …​ you might need to write some glue code.

5.3. Calculate the Score

5.3.1. Score Calculation Types

There are several ways to calculate the Score of a Solution:

  • Easy Java score calculation: implement a single Java method
  • Incremental Java score calculation: implement multiple Java methods
  • Drools score calculation (recommended): implement score rules

Every score calculation type can use any Score definition. For example, easy Java score calculation can output a HardSoftScore.

All score calculation types are Object Oriented and can reuse existing Java code.

Important

The score calculation must be read-only. It must not change the planning entities or the problem facts in any way. For example, it must not call a setter method on a planning entity in a Drools score rule’s RHS. This does not apply to logically inserted objects, which can be changed by the score rules that logically inserted them in the first place.

Planner will not recalculate the score of a Solution if it can predict it (unless an environmentMode assertion is enabled). For example, after a winning step is done, there is no need to calculate the score because that move was done and undone earlier. As a result, there is no guarantee that such changes applied during score calculation are actually done.

5.3.2. Easy Java Score Calculation

An easy way to implement your score calculation in Java.

  • Advantages:

    • Plain old Java: no learning curve
    • Opportunity to delegate score calculation to an existing code base or legacy system
  • Disadvantages:

    • Slower and less scalable, because the whole score is recalculated for every move (no incremental score calculation)

Just implement one method of the interface EasyScoreCalculator:

public interface EasyScoreCalculator<Sol extends Solution> {

    Score calculateScore(Sol solution);

}

For example in n queens:

public class NQueensEasyScoreCalculator implements EasyScoreCalculator<NQueens> {

    public SimpleScore calculateScore(NQueens nQueens) {
        int n = nQueens.getN();
        List<Queen> queenList = nQueens.getQueenList();

        int score = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                Queen leftQueen = queenList.get(i);
                Queen rightQueen = queenList.get(j);
                if (leftQueen.getRow() != null && rightQueen.getRow() != null) {
                    if (leftQueen.getRowIndex() == rightQueen.getRowIndex()) {
                        score--;
                    }
                    if (leftQueen.getAscendingDiagonalIndex() == rightQueen.getAscendingDiagonalIndex()) {
                        score--;
                    }
                    if (leftQueen.getDescendingDiagonalIndex() == rightQueen.getDescendingDiagonalIndex()) {
                        score--;
                    }
                }
            }
        }
        return SimpleScore.valueOf(score);
    }

}

Configure it in your solver configuration:

  <scoreDirectorFactory>
    <scoreDefinitionType>...</scoreDefinitionType>
    <easyScoreCalculatorClass>org.optaplanner.examples.nqueens.solver.score.NQueensEasyScoreCalculator</easyScoreCalculatorClass>
  </scoreDirectorFactory>

Alternatively, build an EasyScoreCalculator instance at runtime and set it with the programmatic API:

    solverFactory.getSolverConfig().getScoreDirectorFactoryConfig().setEasyScoreCalculator(easyScoreCalculator);

5.3.3. Incremental Java Score Calculation

A way to implement your score calculation incrementally in Java.

  • Advantages:

    • Very fast and scalable

      • Currently the fastest if implemented correctly
  • Disadvantages:

    • Hard to write

      • A scalable implementation heavily uses maps, indexes, …​ (things the Drools rule engine can do for you)
      • You have to learn, design, write and improve all these performance optimizations yourself
    • Hard to read

      • Regular score constraint changes can lead to a high maintenance cost

Implement all the methods of the interface IncrementalScoreCalculator and extend the class AbstractIncrementalScoreCalculator:

public interface IncrementalScoreCalculator<Sol extends Solution> {

    void resetWorkingSolution(Sol workingSolution);

    void beforeEntityAdded(Object entity);

    void afterEntityAdded(Object entity);

    void beforeVariableChanged(Object entity, String variableName);

    void afterVariableChanged(Object entity, String variableName);

    void beforeEntityRemoved(Object entity);

    void afterEntityRemoved(Object entity);

    Score calculateScore();

}

[figure: incrementalScoreCalculatorSequenceDiagram]

For example in n queens:

public class NQueensAdvancedIncrementalScoreCalculator extends AbstractIncrementalScoreCalculator<NQueens> {

    private Map<Integer, List<Queen>> rowIndexMap;
    private Map<Integer, List<Queen>> ascendingDiagonalIndexMap;
    private Map<Integer, List<Queen>> descendingDiagonalIndexMap;

    private int score;

    public void resetWorkingSolution(NQueens nQueens) {
        int n = nQueens.getN();
        rowIndexMap = new HashMap<Integer, List<Queen>>(n);
        ascendingDiagonalIndexMap = new HashMap<Integer, List<Queen>>(n * 2);
        descendingDiagonalIndexMap = new HashMap<Integer, List<Queen>>(n * 2);
        for (int i = 0; i < n; i++) {
            rowIndexMap.put(i, new ArrayList<Queen>(n));
            ascendingDiagonalIndexMap.put(i, new ArrayList<Queen>(n));
            descendingDiagonalIndexMap.put(i, new ArrayList<Queen>(n));
            if (i != 0) {
                ascendingDiagonalIndexMap.put(n - 1 + i, new ArrayList<Queen>(n));
                descendingDiagonalIndexMap.put((-i), new ArrayList<Queen>(n));
            }
        }
        score = 0;
        for (Queen queen : nQueens.getQueenList()) {
            insert(queen);
        }
    }

    public void beforeEntityAdded(Object entity) {
        // Do nothing
    }

    public void afterEntityAdded(Object entity) {
        insert((Queen) entity);
    }

    public void beforeVariableChanged(Object entity, String variableName) {
        retract((Queen) entity);
    }

    public void afterVariableChanged(Object entity, String variableName) {
        insert((Queen) entity);
    }

    public void beforeEntityRemoved(Object entity) {
        retract((Queen) entity);
    }

    public void afterEntityRemoved(Object entity) {
        // Do nothing
    }

    private void insert(Queen queen) {
        Row row = queen.getRow();
        if (row != null) {
            int rowIndex = queen.getRowIndex();
            List<Queen> rowIndexList = rowIndexMap.get(rowIndex);
            score -= rowIndexList.size();
            rowIndexList.add(queen);
            List<Queen> ascendingDiagonalIndexList = ascendingDiagonalIndexMap.get(queen.getAscendingDiagonalIndex());
            score -= ascendingDiagonalIndexList.size();
            ascendingDiagonalIndexList.add(queen);
            List<Queen> descendingDiagonalIndexList = descendingDiagonalIndexMap.get(queen.getDescendingDiagonalIndex());
            score -= descendingDiagonalIndexList.size();
            descendingDiagonalIndexList.add(queen);
        }
    }

    private void retract(Queen queen) {
        Row row = queen.getRow();
        if (row != null) {
            List<Queen> rowIndexList = rowIndexMap.get(queen.getRowIndex());
            rowIndexList.remove(queen);
            score += rowIndexList.size();
            List<Queen> ascendingDiagonalIndexList = ascendingDiagonalIndexMap.get(queen.getAscendingDiagonalIndex());
            ascendingDiagonalIndexList.remove(queen);
            score += ascendingDiagonalIndexList.size();
            List<Queen> descendingDiagonalIndexList = descendingDiagonalIndexMap.get(queen.getDescendingDiagonalIndex());
            descendingDiagonalIndexList.remove(queen);
            score += descendingDiagonalIndexList.size();
        }
    }

    public SimpleScore calculateScore() {
        return SimpleScore.valueOf(score);
    }

}

Configure it in your solver configuration:

  <scoreDirectorFactory>
    <scoreDefinitionType>...</scoreDefinitionType>
    <incrementalScoreCalculatorClass>org.optaplanner.examples.nqueens.solver.score.NQueensAdvancedIncrementalScoreCalculator</incrementalScoreCalculatorClass>
  </scoreDirectorFactory>

Optionally, to explain a score with ScoreDirector.getConstraintMatchTotals() or to get better output when the IncrementalScoreCalculator is corrupted in the FAST_ASSERT or FULL_ASSERT environmentMode, also implement the ConstraintMatchAwareIncrementalScoreCalculator interface:

public interface ConstraintMatchAwareIncrementalScoreCalculator<Sol extends Solution> {

    void resetWorkingSolution(Sol workingSolution, boolean constraintMatchEnabled);

    Collection<ConstraintMatchTotal> getConstraintMatchTotals();

}

5.3.4. Drools Score Calculation

5.3.4.1. Overview

Implement your score calculation using the Drools rule engine. Every score constraint is written as one or more score rules.

  • Advantages:

    • Incremental score calculation for free

      • Because most DRL syntax uses forward chaining, it does incremental calculation without any extra code
    • Score constraints are isolated as separate rules

      • Easy to add or edit existing score rules
    • Flexibility to augment your score constraints by

      • Defining them in decision tables

        • Excel (XLS) spreadsheet
        • KIE Workbench WebUI
      • Translate them into natural language with DSL
      • Store and release in the KIE Workbench repository
    • Performance optimizations in future versions for free

      • In every release, the Drools rule engine tends to become faster
  • Disadvantages:

    • DRL learning curve
    • Usage of DRL

      • Polyglot fear can prohibit the use of a new language such as DRL in some organizations

5.3.4.2. Drools Score Rules Configuration

There are several ways to define where your score rules live.

5.3.4.2.1. A scoreDrl Resource on the Classpath

This is the easy way. The score rules live in a DRL file which is provided as a classpath resource. Just add the score rules DRL file in the solver configuration as a <scoreDrl> element:

  <scoreDirectorFactory>
    <scoreDefinitionType>...</scoreDefinitionType>
    <scoreDrl>org/optaplanner/examples/nqueens/solver/nQueensScoreRules.drl</scoreDrl>
  </scoreDirectorFactory>

In a typical project (following the Maven directory structure), that DRL file would be located at $PROJECT_DIR/src/main/resources/org/optaplanner/examples/nqueens/solver/nQueensScoreRules.drl (even for a WAR project).

Note

The <scoreDrl> element expects a classpath resource, as defined by ClassLoader.getResource(String); it does not accept a File, a URL, nor a webapp resource. See below to use a File instead.

Add multiple <scoreDrl> elements if the score rules are split across multiple DRL files.

Optionally, you can also set Drools configuration properties (but be careful of backwards compatibility issues):

  <scoreDirectorFactory>
    ...
    <scoreDrl>org/optaplanner/examples/nqueens/solver/nQueensScoreRules.drl</scoreDrl>
    <kieBaseConfigurationProperties>
      <drools.equalityBehavior>...</drools.equalityBehavior>
    </kieBaseConfigurationProperties>
  </scoreDirectorFactory>

5.3.4.2.2. A scoreDrlFile

To use File on the local file system, instead of a classpath resource, add the score rules DRL file in the solver configuration as a <scoreDrlFile> element:

  <scoreDirectorFactory>
    <scoreDefinitionType>...</scoreDefinitionType>
    <scoreDrlFile>/home/ge0ffrey/tmp/nQueensScoreRules.drl</scoreDrlFile>
  </scoreDirectorFactory>

Warning

For portability reasons, a classpath resource is recommended over a File. An application built on one computer, but used on another computer, might not find the file in the same location. Worse, if the computers use different operating systems, it is hard to choose a portable file path.

Add multiple <scoreDrlFile> elements if the score rules are split across multiple DRL files.

5.3.4.2.3. A ksessionName in a KJAR from a Maven repository

This way allows you to use score rules defined by the Workbench or build a KJAR and deploy it to the Execution Server. Both the score rules and the solver configuration are resources in a KJAR. Clients can obtain that KJAR either from the local classpath, from a local Maven repository or even from a remote Maven repository.

The score rules still live in a DRL file, but the KieContainer finds it through the META-INF/kmodule.xml file:

<kmodule xmlns="http://www.drools.org/xsd/kmodule">
  <kbase name="nQueensKbase" packages="org.optaplanner.examples.nqueens.solver">
    <ksession name="nQueensKsession"/>
  </kbase>
</kmodule>

The kmodule above will pick up all the DRL files in the package org.optaplanner.examples.nqueens.solver. A kbase can even extend another kbase.

Add the ksession name in the solver configuration as a <ksessionName> element:

  <scoreDirectorFactory>
    <scoreDefinitionType>...</scoreDefinitionType>
    <ksessionName>nQueensKsession</ksessionName>
  </scoreDirectorFactory>

When using this approach, the SolverFactory must be built with a SolverFactory.createFromKieContainerXmlResource(…​) method.

5.3.4.3. Implementing a Score Rule

Here is an example of a score constraint implemented as a score rule in a DRL file:

rule "multipleQueensHorizontal"
    when
        Queen($id : id, row != null, $i : rowIndex)
        Queen(id > $id, rowIndex == $i)
    then
        scoreHolder.addConstraintMatch(kcontext, -1);
end

This score rule will fire once for every two queens with the same rowIndex. The (id > $id) condition is needed to ensure that for two queens A and B, it can only fire for (A, B) and not for (B, A), (A, A) or (B, B). Let us take a closer look at this score rule on this solution of 4 queens:

[figure: unsolvedNQueens04]

In this solution the multipleQueensHorizontal score rule will fire for 6 queen couples: (A, B), (A, C), (A, D), (B, C), (B, D) and (C, D). Because none of the queens are on the same vertical or diagonal line, this solution will have a score of -6. An optimal solution of 4 queens has a score of 0.

Note

Notice that every score rule will relate to at least one planning entity class (directly or indirectly through a logically inserted fact).

This is a normal case. It would be a waste of time to write a score rule that only relates to problem facts, as the consequence will never change during planning, no matter what the solution is.

Note

The kcontext variable is a magic variable in Drools Expert. The scoreHolder's method uses it to do incremental score calculation correctly and to create a ConstraintMatch instance.

5.3.4.4. Weighing Score Rules

A ScoreHolder instance is asserted into the KieSession as a global called scoreHolder. The score rules need to (directly or indirectly) update that instance.

global SimpleScoreHolder scoreHolder;

rule "multipleQueensHorizontal"
    when
        Queen($id : id, row != null, $i : rowIndex)
        Queen(id > $id, rowIndex == $i)
    then
        scoreHolder.addConstraintMatch(kcontext, -1);
end

// multipleQueensVertical is obsolete because it is always 0

rule "multipleQueensAscendingDiagonal"
    when
        Queen($id : id, row != null, $i : ascendingDiagonalIndex)
        Queen(id > $id, ascendingDiagonalIndex == $i)
    then
        scoreHolder.addConstraintMatch(kcontext, -1);
end

rule "multipleQueensDescendingDiagonal"
    when
        Queen($id : id, row != null, $i : descendingDiagonalIndex)
        Queen(id > $id, descendingDiagonalIndex == $i)
    then
        scoreHolder.addConstraintMatch(kcontext, -1);
end

Note

To learn more about the Drools rule language (DRL), consult the Drools documentation.

Most use cases also weigh their constraint types or even their matches differently, by using a specific weight for each constraint match. For example in course scheduling, assigning a Lecture to a Room that is lacking two seats is weighted equally bad as having one isolated Lecture in a Curriculum:

global HardSoftScoreHolder scoreHolder;

// RoomCapacity: For each lecture, the number of students that attend the course must be less or equal
// than the number of seats of all the rooms that host its lectures.
rule "roomCapacity"
    when
        $room : Room($capacity : capacity)
        $lecture : Lecture(room == $room, studentSize > $capacity, $studentSize : studentSize)
    then
        // Each student above the capacity counts as 1 point of penalty.
        scoreHolder.addSoftConstraintMatch(kcontext, ($capacity - $studentSize));
end

// CurriculumCompactness: Lectures belonging to a curriculum should be adjacent
// to each other (i.e., in consecutive periods).
// For a given curriculum we account for a violation every time there is one lecture not adjacent
// to any other lecture within the same day.
rule "curriculumCompactness"
    when
        ...
    then
        // Each isolated lecture in a curriculum counts as 2 points of penalty.
        scoreHolder.addSoftConstraintMatch(kcontext, -2);
end

5.3.5. InitializingScoreTrend

The InitializingScoreTrend specifies how the Score will change as more and more variables are initialized (while the already initialized variables do not change). Some optimization algorithms (such as Construction Heuristics and Exhaustive Search) run faster if they have such information.

For the Score (or each score level separately), specify a trend:

  • ANY (default): Initializing an extra variable can change the score positively or negatively. Gives no performance gain.
  • ONLY_UP (rare): Initializing an extra variable can only change the score positively. Implies that:

    • There are only positive constraints
    • And initializing the next variable cannot unmatch a positive constraint that was matched by a previously initialized variable.
  • ONLY_DOWN: Initializing an extra variable can only change the score negatively. Implies that:

    • There are only negative constraints
    • And initializing the next variable cannot unmatch a negative constraint that was matched by a previously initialized variable.

Most use cases only have negative constraints. Many of those have an InitializingScoreTrend that only goes down:

  <scoreDirectorFactory>
    <scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
    <scoreDrl>.../cloudBalancingScoreRules.drl</scoreDrl>
    <initializingScoreTrend>ONLY_DOWN</initializingScoreTrend>
  </scoreDirectorFactory>

Alternatively, you can also specify the trend for each score level separately:

  <scoreDirectorFactory>
    <scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
    <scoreDrl>.../cloudBalancingScoreRules.drl</scoreDrl>
    <initializingScoreTrend>ONLY_DOWN/ONLY_DOWN</initializingScoreTrend>
  </scoreDirectorFactory>

5.3.6. Invalid Score Detection

Put the environmentMode in FULL_ASSERT (or FAST_ASSERT) to detect corruption in the incremental score calculation. For more information, see the section about environmentMode. However, that will not verify that your score calculator implements your score constraints as your business actually desires.

A piece of incremental score calculator code can be difficult to write and to review. Assert its correctness by using a different implementation (for example an EasyScoreCalculator) to do the assertions triggered by the environmentMode. Just configure the different implementation as an assertionScoreDirectorFactory:

  <environmentMode>FAST_ASSERT</environmentMode>
  ...
  <scoreDirectorFactory>
    <scoreDefinitionType>...</scoreDefinitionType>
    <scoreDrl>org/optaplanner/examples/nqueens/solver/nQueensScoreRules.drl</scoreDrl>
    <assertionScoreDirectorFactory>
      <easyScoreCalculatorClass>org.optaplanner.examples.nqueens.solver.score.NQueensEasyScoreCalculator</easyScoreCalculatorClass>
    </assertionScoreDirectorFactory>
  </scoreDirectorFactory>

This way, the scoreDrl will be validated by the EasyScoreCalculator.

5.4. Score Calculation Performance Tricks

5.4.1. Overview

The Solver will normally spend most of its execution time running the score calculation (which is called in its deepest loops). Faster score calculation will return the same solution in less time with the same algorithm, which normally means a better solution in equal time.

5.4.2. Average Calculation Count Per Second

After solving a problem, the Solver will log the average calculation count per second. This is a good measurement of score calculation performance, even though it is also affected by non-score-calculation execution time. It depends on the scale of the problem dataset. Normally, even for high scale problems, it is higher than 1000 (except when you are using an EasyScoreCalculator).

Important

When improving your score calculation, focus on maximizing the average calculation count per second, instead of maximizing the best score. A big improvement in score calculation can sometimes yield little or no best score improvement, for example when the algorithm is stuck in a local or global optimum. If you are watching the calculation count instead, score calculation improvements are far more visible.

Furthermore, watching the calculation count allows you to remove or add score constraints and still compare it with the original calculation count. Comparing the best score with the original would be wrong, because it is comparing apples and oranges.

5.4.3. Incremental Score Calculation (with Deltas)

When a Solution changes, incremental score calculation (AKA delta based score calculation) calculates the delta with the previous state to find the new Score, instead of recalculating the entire score on every solution evaluation.

For example, if a single queen A moves from row 1 to 2, it will not bother to check if queens B and C can attack each other, since neither of them changed.

Figure 5.1. Incremental Score Calculation for the 4 Queens Puzzle

[figure: incrementalScoreCalculationNQueens04]

This is a huge performance and scalability gain. Drools score calculation gives you this huge scalability gain without forcing you to write a complicated incremental score calculation algorithm. Just let the Drools rule engine do the hard work.

Notice that the speedup is relative to the size of your planning problem (your n), making incremental score calculation far more scalable.

5.4.4. Avoid Calling Remote Services During Score Calculation

Do not call remote services in your score calculation (except if you are bridging EasyScoreCalculator to a legacy system). The network latency will kill your score calculation performance. Cache the results of those remote services if possible.

If some parts of a constraint can be calculated once, when the Solver starts, and never change during solving, then turn them into cached problem facts.
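
A minimal sketch of that idea, with hypothetical location names and a hypothetical DistanceService standing in for the slow remote call: look up each distance once before solving and keep the result as a cached problem fact.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DistanceCachingDemo {

    // Hypothetical interface standing in for the slow (possibly remote) service.
    public interface DistanceService {
        long lookupDistanceInMeters(String fromLocation, String toLocation);
    }

    // Call the slow service once per location pair before solving,
    // so the score calculation only does cheap map lookups.
    public static Map<String, Long> buildDistanceCache(List<String> locationNames, DistanceService service) {
        Map<String, Long> cache = new HashMap<String, Long>();
        for (String from : locationNames) {
            for (String to : locationNames) {
                cache.put(from + "->" + to, service.lookupDistanceInMeters(from, to));
            }
        }
        return cache; // store this map as a cached problem fact on the Solution
    }

}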

5.4.5. Pointless Constraints

If you know a certain constraint can never be broken (or it is always broken), you need not write a score constraint for it. For example in n queens, the score calculation does not check if multiple queens occupy the same column, because a Queen's column never changes and every Solution starts with each Queen on a different column.

Note

Do not go overboard with this. If some datasets do not use a specific constraint but others do, just return out of the constraint as soon as you can. There is no need to dynamically change your score calculation based on the dataset.

5.4.6. Built-in Hard Constraint

Instead of implementing a hard constraint, it can sometimes be built in. For example, if Lecture A should never be assigned to Room X, but the ValueRangeProvider is on the Solution, the Solver will often try to assign it to Room X anyway (only to find out that this breaks a hard constraint). Use a ValueRangeProvider on the planning entity or filtered selection to define that Lecture A should only be assigned a Room other than X.

This can give a good performance gain in some use cases, not just because the score calculation is faster, but mainly because most optimization algorithms will spend less time evaluating infeasible solutions. However, usually this is not a good idea because there is a real risk of trading short term benefits for long term harm:

  • Many optimization algorithms rely on the freedom to break hard constraints when changing planning entities, to get out of local optima.
  • Both implementation approaches have limitations (feature compatibility, disabling automatic performance optimizations), as explained in their documentation.

5.4.7. Other Score Calculation Performance Tricks

  • Verify that your score calculation happens in the correct Number type. If you are making the sum of int values, do not let Drools sum it in a double which takes longer.
  • For optimal performance, always use server mode (java -server). We have seen performance increases of 50% by turning on server mode.
  • For optimal performance, use the latest Java version. For example, in the past we have seen performance increases of 30% by switching from Java 1.5 to 1.6.
  • Always remember that premature optimization is the root of all evil. Make sure your design is flexible enough to allow configuration based tweaking.

5.4.8. Score Trap

Make sure that none of your score constraints cause a score trap. A trapped score constraint uses the same weight for different constraint matches, when it could just as easily use a different weight. It effectively lumps its constraint matches together, which creates a flatlined score function for that constraint. This can cause a solution state in which several moves need to be done to resolve or lower the weight of that single constraint. Some examples of score traps:

  • You need two doctors at each table, but you are only moving one doctor at a time. So the solver has no incentive to move a doctor to a table with no doctors. Punish a table with no doctors more than a table with only one doctor in that score constraint.
  • Two exams need to be conducted at the same time, but you are only moving one exam at a time. So the solver has to move one of those exams to another timeslot without moving the other in the same move. Add a coarse-grained move that moves both exams at the same time.

For example, consider this score trap. If the blue item moves from an overloaded computer to an empty computer, the hard score should improve. The trapped score implementation fails to do that:

[figure: scoreTrap]

The Solver should eventually get out of this trap, but it will take a lot of effort (especially if there are even more processes on the overloaded computer). Before it does that, it might actually start moving more processes into that overloaded computer, as there is no penalty for doing so.

Note

Avoiding score traps does not mean that your score function should be smart enough to avoid local optima. Leave it to the optimization algorithms to deal with the local optima.

Avoiding score traps means to avoid, for each score constraint individually, a flatlined score function.

Important

Always specify the degree of infeasibility. The business will often say “if the solution is infeasible, it does not matter how infeasible it is.” While that is true for the business, it is not true for score calculation as it benefits from knowing how infeasible it is. In practice, soft constraints usually do this naturally and it is just a matter of doing it for the hard constraints too.

There are several ways to deal with a score trap:

  • Improve the score constraint to make a distinction in the score weight. For example, penalize -1hard for every missing CPU, instead of just -1hard if any CPU is missing (see the sketch after this list).
  • If changing the score constraint is not allowed from the business perspective, add a lower score level with a score constraint that makes such a distinction. For example, penalize -1subsoft for every missing CPU, on top of -1hard if any CPU is missing. The business ignores the subsoft score level.
  • Add coarse-grained moves and union select them with the existing fine-grained moves. A coarse-grained move effectively does multiple moves to directly get out of a score trap with a single move. For example, move multiple items from the same container to another container.
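
A minimal sketch of the first remedy (plain Java with hypothetical CPU numbers, in the style of the cloud balancing example): the penalty grows with every missing CPU, instead of staying a flat -1hard.

public class CpuPenaltyDemo {

    // Trapped version: the same weight no matter how overloaded the computer is.
    static int trappedPenalty(int cpuRequired, int cpuCapacity) {
        return cpuRequired > cpuCapacity ? -1 : 0;
    }

    // Untrapped version: -1hard for every missing CPU, so every small improvement is rewarded.
    static int gradedPenalty(int cpuRequired, int cpuCapacity) {
        return cpuRequired > cpuCapacity ? cpuCapacity - cpuRequired : 0;
    }

    public static void main(String[] args) {
        System.out.println(trappedPenalty(11, 4)); // -1: moving one process away changes nothing
        System.out.println(gradedPenalty(11, 4)); // -7: moving one process away already improves the score
    }

}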

5.4.9. stepLimit Benchmark

Not all score constraints have the same performance cost. Sometimes one score constraint can kill the score calculation performance outright. Use the Benchmarker to do a one minute run and check what happens to the average calculation count per second if you comment out all but one of the score constraints.

5.4.10. Fairness Score Constraints

Some use cases have a business requirement to provide a fair schedule (usually as a soft score constraint), for example:

  • Fairly distribute the workload amongst the employees, to avoid envy.
  • Evenly distribute the workload amongst assets, to improve reliability.

Implementing such a constraint can seem difficult (especially because there are different ways to formalize fairness), but usually the squared workload implementation behaves most desirably. For each employee/asset, count the workload w and subtract w² from the score.
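
A minimal sketch of that squared workload approach (plain Java with a hypothetical workload array):

public class FairnessDemo {

    // Subtract the squared workload of each employee, so a more uneven distribution scores worse.
    static int squaredWorkloadScore(int[] workloads) {
        int softScore = 0;
        for (int w : workloads) {
            softScore -= w * w;
        }
        return softScore;
    }

    public static void main(String[] args) {
        System.out.println(squaredWorkloadScore(new int[]{5, 5})); // -50: balanced
        System.out.println(squaredWorkloadScore(new int[]{8, 2})); // -68: unbalanced scores worse
    }

}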

[figure: fairnessScoreConstraint]

As shown above, the squared workload implementation guarantees that if you select two employees from a given solution and make their distribution between those two employees fairer, then the resulting new solution will have a better overall score. Do not just use the difference from the average workload, as that can lead to unfairness, as demonstrated below.

[figure: fairnessScoreConstraintPitfall]

Note

Instead of the squared workload, it is also possible to use the variance (squared difference to the average) or the standard deviation (square root of the variance). This has no effect on the score comparison, because the average will not change during planning. It is just more work to implement (because the average needs to be known) and trivially slower (because the calculation is a bit longer).

When the workload is perfectly balanced, the user often likes to see a 0 score, instead of the distracting -34soft in the image above (for the last solution which is almost perfectly balanced). To nullify this, either add the average multiplied by the number of entities to the score or instead show the variance or standard deviation in the UI.

5.5. Explaining the Score: Using Score Calculation Outside the Solver

Other parts of your application, for example your webUI, might need to calculate the score too. Do that by reusing the ScoreDirectorFactory of the Solver to build a separate ScoreDirector for that webUI:

ScoreDirectorFactory scoreDirectorFactory = solver.getScoreDirectorFactory();
ScoreDirector guiScoreDirector = scoreDirectorFactory.buildScoreDirector();

Then use it when you need to calculate the Score of a Solution:

guiScoreDirector.setWorkingSolution(solution);
Score score = guiScoreDirector.calculateScore();

To explain in the GUI what entities are causing which part of the Score, get the ConstraintMatch objects from the ScoreDirector:

for (ConstraintMatchTotal constraintMatchTotal : guiScoreDirector.getConstraintMatchTotals()) {
    String constraintName = constraintMatchTotal.getConstraintName();
    Number weightTotal = constraintMatchTotal.getWeightTotalAsNumber();
    for (ConstraintMatch constraintMatch : constraintMatchTotal.getConstraintMatchSet()) {
        List<Object> justificationList = constraintMatch.getJustificationList();
        Number weight = constraintMatch.getWeightAsNumber();
        ...
    }
}

Note

Drools score calculation supports constraint matches automatically, but incremental Java score calculation requires implementing an extra interface (see that section).