Chapter 5. Score Calculation
5.1. Score Terminology
5.1.1. What is a Score?
Every initialized Solution has a score. That score is an objective way to compare 2 solutions: the solution with the higher score is better. The Solver aims to find the Solution with the highest Score of all possible solutions. The best solution is the Solution with the highest Score that the Solver has encountered during solving, which might be the optimal solution.
Planner cannot automatically know which Solution is best for your business, so you need to tell it how to calculate the score of a given Solution according to your business needs. If you forget, or are unable, to implement an important constraint, the solution is probably useless:

Luckily, Planner is very flexible in how constraints can be defined, thanks to these score techniques, which can be used and combined as much as needed:
- Score signum (positive or negative): maximize or minimize a constraint type.
- Score weight: put a cost/profit on a constraint type.
- Score level: prioritize a group of constraint types.
- Pareto scoring: compare constraint types without weighing them against each other.
5.1.2. Score Constraint Signum (Positive or Negative)
All score techniques are based on constraints. Such a constraint can be a simple pattern (such as Maximize the apple harvest in the solution) or a more complex pattern. A positive constraint is a constraint you're trying to maximize. A negative constraint is a constraint you're trying to minimize.

Notice in the image above that the optimal solution always has the highest score, regardless of whether the constraints are positive or negative.
Most planning problems have only negative constraints and therefore have a negative score. In that case, the score is usually the sum of the weight of the negative constraints being broken, with a perfect score of 0. This explains why the score of a solution of 4 queens is the negative (and not the positive!) of the number of queen pairs which can attack each other.
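The 4-queens score described above can be sketched as a simple pair count. This is a standalone illustrative sketch (hypothetical class and method names), not the actual Planner example code:

```java
public class QueensScore {

    /** columns[i] = row of the queen in column i (one queen per column). */
    public static int calculateScore(int[] columns) {
        int attackingPairs = 0;
        for (int i = 0; i < columns.length; i++) {
            for (int j = i + 1; j < columns.length; j++) {
                boolean sameRow = columns[i] == columns[j];
                boolean sameDiagonal = Math.abs(columns[i] - columns[j]) == j - i;
                if (sameRow || sameDiagonal) {
                    attackingPairs++;
                }
            }
        }
        // Negative constraint: the score is the negative of the number of
        // broken constraint matches, so a perfect solution scores 0.
        return -attackingPairs;
    }
}
```

A placement where no queen pair can attack each other scores 0; four queens on the same row score -6, because all 6 pairs attack each other.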
Negative and positive constraints can be combined, even in the same score level.
Note
Don't presume your business knows all its score constraints in advance. Expect score constraints to be added or changed after the first releases.
When a constraint activates (because the negative constraint is broken or the positive constraint is fulfilled) on a certain planning entity set, it is called a constraint match.
5.1.3. Score Constraint Weight
Not all score constraints are equally important. If breaking one constraint is equally bad as breaking another constraint x times, then those 2 constraints have a different weight (but they are in the same score level). For example in vehicle routing, you can make 1 "unhappy driver" constraint match count as much as 2 "fuel tank usage" constraint matches:
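That weighting boils down to a weighted sum of the constraint matches. A minimal sketch with hypothetical names and assumed weights (not the Planner API):

```java
public class VehicleRoutingScore {

    // Assumed weights for illustration:
    // 1 unhappy driver counts as much as 2 fuel tank usages.
    static final int UNHAPPY_DRIVER_WEIGHT = 2;
    static final int FUEL_TANK_USAGE_WEIGHT = 1;

    public static int calculateScore(int unhappyDriverMatches, int fuelTankUsageMatches) {
        // Both are negative constraints: each match subtracts its weight from the score.
        return -(unhappyDriverMatches * UNHAPPY_DRIVER_WEIGHT
                + fuelTankUsageMatches * FUEL_TANK_USAGE_WEIGHT);
    }
}
```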

Score weighting is often used in use cases where you can put a price tag on everything. In that case, the positive constraints maximize revenue and the negative constraints minimize expenses: together they maximize profit. Alternatively, score weighting is also often used to create social fairness. For example: a nurse that requests a free day pays a higher weight on New Year's eve than on a normal day.
Putting a good weight on a constraint can be a difficult analytical decision, because it's about making choices and tradeoffs against other constraints. However, an inaccurate weight is less damaging than a bad algorithm:

Furthermore, it is often useful to allow the planning end-user to recalibrate penalty weights in his/her user interface, as demonstrated in the exam timetabling example.
The weight of a constraint match can be dynamically based on the planning entities involved. For example in cloud balance: the weight of the soft constraint match for an active Computer is the cost of that Computer.
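Such a dynamic weight could be sketched like this, with a hypothetical Computer record standing in for the cloud balance example's planning entity:

```java
import java.util.List;

public class CloudCostScore {

    /** Hypothetical planning entity: only the cost matters for this sketch. */
    record Computer(int cost, boolean active) {}

    public static int softScore(List<Computer> computers) {
        int softScore = 0;
        for (Computer computer : computers) {
            if (computer.active()) {
                // The weight of this constraint match is the cost of that computer,
                // so the weight varies per planning entity.
                softScore -= computer.cost();
            }
        }
        return softScore;
    }
}
```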
5.1.4. Score Level
Sometimes a score constraint outranks another score constraint, no matter how many times the other is broken. In that case, those score constraints are in different levels. For example: a nurse cannot do 2 shifts at the same time (due to the constraints of physical reality), this outranks all nurse happiness constraints.
Most use cases have only 2 score levels: hard and soft. When comparing 2 scores, they are compared lexicographically: the first score level gets compared first. If those levels differ, the other score levels are ignored. For example: a score that breaks 0 hard constraints and 1000000 soft constraints is better than a score that breaks 1 hard constraint and 0 soft constraints.
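The lexicographic comparison can be sketched as follows (a standalone two-level sketch, not Planner's HardSoftScore implementation):

```java
public class HardSoftComparison {

    /** Compares (hard1, soft1) against (hard2, soft2) level by level. */
    public static int compare(int hard1, int soft1, int hard2, int soft2) {
        if (hard1 != hard2) {
            // The hard level decides; the soft level is ignored entirely.
            return Integer.compare(hard1, hard2);
        }
        return Integer.compare(soft1, soft2);
    }
}
```

For example, compare(0, -1000000, -1, 0) is positive: breaking a million soft constraints is still better than breaking a single hard constraint.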

Score levels often employ score weighting per level. In that case, the hard constraint level usually makes the solution feasible and the soft constraint level maximizes profit by weighting the constraints on price.
Don't use a big constraint weight when your business actually wants different score levels. That hack, known as score folding, is broken:

Note
Your business will probably tell you that your hard constraints all have the same weight, because they cannot be broken (so their weight does not matter). This is not true and it could create a score trap. For example in cloud balance: if a Computer has 7 CPU too little for its Processes, then it must be weighted 7 times as much as if it had only 1 CPU too little. This way, there is an incentive to move a Process with 6 CPU or less away from that Computer.
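Avoiding that score trap can be sketched like this (hypothetical method and parameter names):

```java
public class CpuShortagePenalty {

    /** Hard score contribution of one computer: penalize the CPU shortage proportionally. */
    public static int hardScore(int cpuCapacity, int cpuDemand) {
        int shortage = cpuDemand - cpuCapacity;
        // A flat -1 per overloaded computer would be a score trap: moving a 6-CPU
        // process away from a computer that is 7 CPU short would not improve the
        // score. A proportional penalty creates that incentive.
        return shortage > 0 ? -shortage : 0;
    }
}
```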
Three or more score levels are also supported. For example: a company might decide that profit outranks employee satisfaction (or vice versa), while both are outranked by the constraints of physical reality.
Note
To model fairness or load balancing, there is no need to use lots of score levels (even though Planner can handle many score levels).
5.1.5. Pareto Scoring (AKA Multi-objective Optimization Scoring)
Far less common is the use case of pareto optimization, which is also known under the more confusing term multi-objective optimization. In pareto scoring, score constraints are in the same score level, yet they are not weighted against each other. When 2 scores are compared, each of the score constraints are compared individually and the score with the most dominating score constraints wins. Pareto scoring can even be combined with score levels and score constraint weighting.
Consider this example with positive constraints, where we want to get the most apples and oranges. Since it's impossible to compare apples and oranges, we can't weight them against each other. Yet, despite that we can't compare them, we can state that 2 apples are better than 1 apple. Similarly, we can state that 2 apples and 1 orange are better than just 1 orange. So despite our inability to compare some Scores conclusively (at which point we declare them equal), we can find a set of optimal scores. Those are called pareto optimal.

With pareto comparison, scores are considered equal far more often. It's left up to a human to choose the better out of a set of best solutions (with equal scores) found by Planner. In the example above, the user must choose between solution A (3 apples and 1 orange) and solution B (1 apple and 6 oranges). It's guaranteed that Planner has not found another solution which has more apples or more oranges or even a better combination of both (such as 2 apples and 3 oranges).
To implement pareto scoring in Planner, implement a custom ScoreDefinition and Score (and replace the BestSolutionRecaller). Future versions will provide out-of-the-box support.
Note
A pareto Score's compareTo method is not transitive because it does a pareto comparison. For example: having 2 apples is greater than having 1 apple, and 1 apple is equal to 1 orange, yet 2 apples are not greater than 1 orange (but actually equal). Pareto comparison violates the contract of java.lang.Comparable's compareTo method, but Planner's systems are pareto comparison safe, unless explicitly stated otherwise in this documentation.
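The non-transitive comparison in that note can be sketched as a dominance check on the two unweighted objectives (a standalone sketch, not Planner code):

```java
public class ParetoComparison {

    /** Returns 1 if the first (apples, oranges) pair dominates, -1 if the second does, else 0. */
    public static int compare(int apples1, int oranges1, int apples2, int oranges2) {
        boolean firstDominates = apples1 >= apples2 && oranges1 >= oranges2
                && (apples1 > apples2 || oranges1 > oranges2);
        if (firstDominates) {
            return 1;
        }
        boolean secondDominates = apples2 >= apples1 && oranges2 >= oranges1
                && (apples2 > apples1 || oranges2 > oranges1);
        if (secondDominates) {
            return -1;
        }
        return 0; // neither dominates (incomparable or identical): treated as equal
    }
}
```

Here 2 apples beat 1 apple, 1 apple is "equal" to 1 orange, yet 2 apples are also only "equal" to 1 orange: exactly the transitivity violation described above.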
5.1.6. Combining Score Techniques
All the score techniques mentioned above can be combined seamlessly:

5.1.7. Score interface
A score is represented by the Score interface, which naturally extends Comparable:
public interface Score<...> extends Comparable<...> {
...
}
The Score implementation to use depends on your use case. Your score might not efficiently fit in a single long value. Planner has several built-in Score implementations, but you can implement a custom Score too. Most use cases tend to use the built-in HardSoftScore.

The Score implementation (for example HardSoftScore) must be the same throughout a Solver runtime. The Score implementation is configured in the solver configuration as a ScoreDefinition:
<scoreDirectorFactory>
  <scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
  ...
</scoreDirectorFactory>
5.1.8. Avoid Floating Point Numbers in Score Calculation
Avoid the use of float and double for score calculation. Use BigDecimal instead.
Floating point numbers (float and double) cannot represent a decimal number correctly. For example: a double cannot hold the value 0.05 correctly. Instead, it holds the nearest representable value. Arithmetic (including addition and subtraction) with floating point numbers, especially for planning problems, leads to incorrect decisions:

Additionally, floating point number addition is not associative:
System.out.println( ((0.01 + 0.02) + 0.03) == (0.01 + (0.02 + 0.03)) ); // returns false
This leads to score corruption.
Decimal numbers (BigDecimal) have none of these problems.
Note
BigDecimal arithmetic is considerably slower than int, long or double arithmetic. In experiments we've seen the average calculation count get divided by 5.
Therefore, in some cases, it can be worthwhile to multiply all numbers for a single score weight by a power of ten (for example 1000), so the score weight fits in an int or long.
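Such a scaling could be sketched like this (hypothetical helper; the scale factor 1000 is just an example):

```java
import java.math.BigDecimal;

public class ScaledWeight {

    /** Converts an exact decimal weight (for example "0.07") to scaled long units. */
    public static long toScaledUnits(String decimalWeight) {
        // movePointRight(3) multiplies by 1000; longValueExact fails fast
        // if a fractional part would otherwise be silently truncated.
        return new BigDecimal(decimalWeight).movePointRight(3).longValueExact();
    }
}
```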
