Chapter 13. Score calculation performance tricks
Most of the execution time of a solver involves running the score calculation, which is called in the solver’s deepest loops. Faster score calculation returns the same solution in less time with the same algorithm. This usually provides a better solution in the same amount of time. Use the following techniques to improve your score calculation performance.
13.1. Score calculation speed
When you are improving your score calculation, focus on maximizing the score calculation speed instead of maximizing the best score. A big improvement in score calculation can sometimes yield little or no best score improvement, for example when the algorithm is stuck in a local or global optimum. If you watch the calculation speed instead, score calculation improvements are far more visible.
The score calculation speed, measured in score calculations per second, is a reliable measurement of score calculation performance, even though it is affected by non-score-calculation execution time. The result depends on the problem scale of the data set. Normally, even for large scale problems, the score calculation speed is higher than 1000 per second, unless you are using an EasyScoreCalculator class.
By watching the calculation speed, you can remove or add score constraints and compare the latest calculation speed with the original calculation speed.
Comparing the best score with the original best score is pointless. It’s like comparing apples and oranges.
13.2. Incremental score calculation
Incremental score calculation is also known as delta-based score calculation. When a solution changes, incremental score calculation finds the new score by evaluating only the changes between the current state and the previous state, instead of recalculating the entire score on every solution evaluation. For example, in the N Queens problem, when queen A moves from row 1 to row 2, incremental score calculation does not check whether queen B and queen C can attack each other, because neither of them changed position, as shown in the following illustration:

The following example shows incremental score calculation for employee rostering:

Incremental score calculation provides a significant performance and scalability gain. Constraint streams or Drools score calculation provides this scalability gain without forcing you to write a complicated incremental score calculation algorithm. Just let the rule engine do the hard work.
Notice that the increase in calculation speed is relative to the size of your planning problem (your n). This makes incremental score calculation scalable.
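The delta idea can be illustrated outside OptaPlanner with a small self-contained sketch (a hypothetical IncrementalNQueens class, not the real IncrementalScoreCalculator API). Moving one queen retracts only that queen's constraint matches and inserts its new ones, so an update costs O(n) instead of the O(n²) full recalculation:

```java
import java.util.Arrays;

// Toy incremental score for N Queens: score = -(number of attacking pairs).
// Moving one queen only re-evaluates the pairs involving that queen (O(n))
// instead of recalculating all pairs (O(n^2)).
public class IncrementalNQueens {

    private final int[] rows; // rows[column] = row of the queen in that column
    private int score;        // cached score, kept up to date by deltas

    public IncrementalNQueens(int[] rows) {
        this.rows = Arrays.copyOf(rows, rows.length);
        this.score = fullScore(); // calculated once, then only deltas are applied
    }

    private boolean attacks(int col1, int col2) {
        int rowDiff = Math.abs(rows[col1] - rows[col2]);
        return rowDiff == 0 || rowDiff == Math.abs(col1 - col2);
    }

    // Full O(n^2) recalculation, used only for initialization.
    private int fullScore() {
        int s = 0;
        for (int i = 0; i < rows.length; i++) {
            for (int j = i + 1; j < rows.length; j++) {
                if (attacks(i, j)) {
                    s--;
                }
            }
        }
        return s;
    }

    // Score contribution of the pairs involving a single queen.
    private int queenScore(int col) {
        int s = 0;
        for (int other = 0; other < rows.length; other++) {
            if (other != col && attacks(col, other)) {
                s--;
            }
        }
        return s;
    }

    public int moveQueen(int col, int newRow) {
        score -= queenScore(col);   // retract the queen's old matches
        rows[col] = newRow;
        score += queenScore(col);   // insert the queen's new matches
        return score;
    }

    public int getScore() {
        return score;
    }
}
```

The key invariant is that the cached score always equals what a full recalculation would return, while each move touches only the changed queen's pairs.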
13.3. Remote services
Do not call remote services in your score calculation unless you are bridging the EasyScoreCalculator class to a legacy system. The network latency will significantly downgrade your score calculation performance. Cache the results of those remote services if possible.
If some parts of a constraint can be calculated once, when the solver starts, and never change during solving, then turn them into cached problem facts.
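For example, if a constraint needs the distance between two locations and the locations never change during solving, the whole distance matrix can be precomputed once before solving starts and stored as a problem fact. A minimal sketch (the DistanceMatrix class and its layout are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Precompute all pairwise distances once, before solving starts.
// The locations never change during solving, so the matrix is a cached problem fact.
public class DistanceMatrix {

    private final Map<Long, Integer> distances = new HashMap<>();

    public DistanceMatrix(int[][] coordinates) {
        for (int i = 0; i < coordinates.length; i++) {
            for (int j = 0; j < coordinates.length; j++) {
                int dx = coordinates[i][0] - coordinates[j][0];
                int dy = coordinates[i][1] - coordinates[j][1];
                distances.put(key(i, j),
                        (int) Math.round(Math.sqrt((double) dx * dx + dy * dy)));
            }
        }
    }

    private static long key(int from, int to) {
        return ((long) from << 32) | to;
    }

    // The score calculation does a cheap O(1) lookup
    // instead of a square root per evaluation.
    public int distance(int from, int to) {
        return distances.get(key(from, to));
    }
}
```

The expensive arithmetic runs once per data set instead of once per score evaluation in the solver's deepest loops.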
13.4. Pointless constraints
If you know that a specific constraint can never be broken or that it is always broken, do not write a score constraint for it. For example, in the N Queens problem, the score calculation does not check whether multiple queens occupy the same column because a queen’s column never changes and every solution starts with each queen on a different column.
Do not overuse this technique. If some data sets do not use a specific constraint but others do, just return out of the constraint as soon as you can. There is no need to dynamically change your score calculation based on the data set.
13.5. Built-in hard constraints
Instead of implementing a hard constraint, you can sometimes build it into the model. For example, in the school timetabling example, suppose Lecture A should never be assigned to Room X. If the ValueRangeProvider on the solution still includes Room X, the Solver will often try to assign Lecture A to Room X, only to discover that doing so breaks a hard constraint. Instead, use a ValueRangeProvider on the planning entity, or filtered selection, to define that Lecture A can only be assigned a room other than X.
This can give a good performance gain in some use cases, not just because the score calculation is faster, but because most optimization algorithms will spend less time evaluating infeasible solutions. However, usually this is not a good idea because there is a real risk of trading short term benefits for long term harm:
- Many optimization algorithms rely on the freedom to break hard constraints when changing planning entities, to get out of local optima.
- Both implementation approaches (a ValueRangeProvider on the planning entity and filtered selection) have limitations, such as reduced feature compatibility and disabled automatic performance optimizations.
13.6. Score traps
Make sure that none of your score constraints cause a score trap. A trapped score constraint applies the same weight to multiple constraint matches. This groups those constraint matches together and creates a flatlined score function for that constraint, which can lead to a solution state in which several moves are needed before the score of that single constraint improves. The following examples illustrate score traps:
- You need two doctors at each operating table, but you are only moving one doctor at a time. The solver has no incentive to move a doctor to a table with no doctors. To fix this, penalize a table with no doctors more heavily than a table with only one doctor in that score constraint.
- Two exams must be conducted at the same time, but you are only moving one exam at a time. The solver must move one of those exams to another time slot without moving the other in the same move. To fix this, add a coarse-grained move that moves both exams at the same time.
The following illustration shows a score trap:

If the blue item moves from an overloaded computer to an empty computer, the hard score should improve. But the trapped score implementation cannot do that. The solver should eventually get out of this trap, but it will take a lot of effort, especially if there are even more processes on the overloaded computer. Before it can do that, it might actually start moving more processes into that overloaded computer, because there is no penalty for doing so.
Avoiding score traps does not mean that your score function should be smart enough to avoid local optima. Leave it to the optimization algorithms to deal with the local optima.
Avoiding score traps means to avoid, for each score constraint individually, a flatlined score function.
Always specify the degree of infeasibility. The business often says "if the solution is infeasible, it does not matter how infeasible it is." While that is true for the business, it is not true for score calculation because score calculation benefits from knowing how infeasible a solution is. In practice, soft constraints usually do this naturally and it is just a matter of doing it for the hard constraints too.
There are several ways to deal with a score trap:
- Improve the score constraint to make a distinction in the score weight. For example, penalize -1hard for every missing CPU, instead of just -1hard if any CPU is missing.
- If changing the score constraint is not allowed from the business perspective, add a lower score level with a score constraint that makes such a distinction. For example, penalize -1subsoft for every missing CPU, on top of -1hard if any CPU is missing. The business ignores the subsoft score level.
- Add coarse-grained moves and union-select them with the existing fine-grained moves. A coarse-grained move effectively does multiple moves to directly get out of a score trap with a single move. For example, move multiple items from the same container to another container.
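The first two fixes boil down to making the score function distinguish "slightly broken" from "badly broken". A minimal sketch of the CPU example (the helper methods are hypothetical, not OptaPlanner API):

```java
// Sketch of the missing-CPU example: a flat penalty creates a flatlined
// (trapped) score, while penalizing each missing CPU rewards every move
// that brings the solution closer to feasibility.
public class CpuPenalty {

    // Score trap: -1 hard whether 1 or 7 CPUs are missing.
    public static int flatPenalty(int requiredCpu, int availableCpu) {
        return availableCpu < requiredCpu ? -1 : 0;
    }

    // No trap: the score improves for every CPU a move recovers.
    public static int gradedPenalty(int requiredCpu, int availableCpu) {
        return Math.min(availableCpu - requiredCpu, 0);
    }
}
```

With the flat penalty, moving a process off an overloaded computer changes nothing until the computer is fully relieved; with the graded penalty, every such move improves the score.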
13.7. The stepLimit benchmark
Not all score constraints have the same performance cost. Sometimes one score constraint can kill the score calculation performance outright. Use the benchmarker to perform a one minute run and then check what happens to the score calculation speed if you comment out all but one of the score constraints.
13.8. Fairness score constraints
Some use cases have a business requirement to provide a fair schedule, usually as a soft score constraint, for example:
- To avoid envy, fairly distribute the workload among the employees.
- To improve reliability, evenly distribute the workload among assets.
Implementing such a constraint might seem difficult, especially because there are different ways to formalize fairness, but usually the squared workload implementation behaves in the most desirable manner. For each employee or asset, specify the workload as w and subtract w² from the score.
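A minimal sketch of the squared workload calculation (the helper class is hypothetical): for the same total workload, the more evenly spread distribution scores better.

```java
// Squared workload fairness: subtract w^2 for each employee's workload w.
// For a fixed total workload, the score is maximal when it is evenly spread.
public class FairnessScore {

    public static int squaredWorkloadScore(int[] workloads) {
        int score = 0;
        for (int w : workloads) {
            score -= w * w;
        }
        return score;
    }
}
```

For example, two employees with workloads 5 and 5 score -50, while workloads 8 and 2 (the same total of 10) score -68, so the fairer distribution wins.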

The squared workload implementation guarantees that if you select two employees from a specified solution and make the distribution between those two employees fairer, then the resulting new solution will have a better overall score. Do not use only the difference from the average workload because that can lead to unfairness, as demonstrated in the following illustration:

Instead of the squared workload implementation, it is also possible to use the variance (the squared difference to the average) or the standard deviation (the square root of the variance). This has no effect on the score comparison, because the average does not change during planning. It is just more work to implement, because the average needs to be known, and marginally slower, because the calculation takes a little longer.
When the workload is perfectly balanced, users often like to see a 0 score instead of the distracting -34soft. This is shown in the preceding illustration for the last solution, which is almost perfectly balanced. To nullify this, either add the squared average workload multiplied by the number of entities to the score, or show the variance or standard deviation in the UI.
13.9. Other score calculation performance tricks
Use the following tips to further improve your score calculation performance:
- Verify that your score calculation occurs in the correct number type. For example, if you are adding int values, do not store the result as a double, which takes longer to calculate.
- For optimal performance, use the latest Java version. For example, you can achieve a ~10% performance increase by switching from Java 11 to Java 17.
- Always remember that premature optimization is undesirable. Make sure your design is flexible enough to allow configuration-based adjustments.
13.10. Configuring constraints
Deciding the correct weight and level for each constraint is not easy. It often involves negotiating with different stakeholders and their priorities. Furthermore, quantifying the impact of soft constraints is often a new experience for business managers, so they need a number of iterations to get it right. To make this easier, use the @ConstraintConfiguration class with constraint weights and parameters. Then, provide a UI so business managers can adjust the constraint weights themselves and visualize the resulting solution, as shown in the following illustration:

For example, in the conference scheduling problem, the minimum pause constraint has a constraint weight, but it also has a constraint parameter that defines the length of time between two talks by the same speaker. The pause length depends on the conference: in some large conferences 20 minutes isn’t enough time to go from one room to the other and in smaller conferences 10 minutes can be enough time. The pause length is a field in the constraint configuration without a @ConstraintWeight annotation.
Each constraint has a constraint package and a constraint name and together they form the constraint ID. The constraint ID connects the constraint weight with the constraint implementation. For each constraint weight, there must be a constraint implementation with the same package and the same name.
- The @ConstraintConfiguration annotation has a constraintPackage property that defaults to the package of the constraint configuration class. Cases with constraint streams normally do not need to specify it.
- The @ConstraintWeight annotation has a value which is the constraint name (for example "Speaker conflict"). It inherits the constraint package from the @ConstraintConfiguration, but it can override that, for example @ConstraintWeight(constraintPackage = "…region.france", …), to use a different constraint package than some other weights.
So every constraint weight ends up with a constraint package and a constraint name. Each constraint weight links with a constraint implementation, for example in constraint streams:
public final class ConferenceSchedulingConstraintProvider implements ConstraintProvider {

    @Override
    public Constraint[] defineConstraints(ConstraintFactory factory) {
        return new Constraint[] {
                speakerConflict(factory),
                themeTrackConflict(factory),
                contentConflict(factory),
                ...
        };
    }

    protected Constraint speakerConflict(ConstraintFactory factory) {
        return factory.forEachUniquePair(...)
                ...
                .penalizeConfigurable("Speaker conflict", ...);
    }

    protected Constraint themeTrackConflict(ConstraintFactory factory) {
        return factory.forEachUniquePair(...)
                ...
                .penalizeConfigurable("Theme track conflict", ...);
    }

    protected Constraint contentConflict(ConstraintFactory factory) {
        return factory.forEachUniquePair(...)
                ...
                .penalizeConfigurable("Content conflict", ...);
    }

    ...
}
Each constraint weight defines the score level and score weight of its constraint. The constraint implementation calls rewardConfigurable() or penalizeConfigurable() and the constraint weight is automatically applied.
If the constraint implementation provides a match weight, that match weight is multiplied by the constraint weight. For example, the content conflict constraint weight defaults to 100soft and the constraint implementation penalizes each match based on the number of shared content tags and the overlapping duration of the two talks:
@ConstraintWeight("Content conflict")
private HardMediumSoftScore contentConflict = HardMediumSoftScore.ofSoft(100);

Constraint contentConflict(ConstraintFactory factory) {
    return factory.forEachUniquePair(Talk.class,
            overlapping(t -> t.getTimeslot().getStartDateTime(),
                    t -> t.getTimeslot().getEndDateTime()),
            filtering((talk1, talk2) -> talk1.overlappingContentCount(talk2) > 0))
            .penalizeConfigurable("Content conflict",
                    (talk1, talk2) -> talk1.overlappingContentCount(talk2)
                            * talk1.overlappingDurationInMinutes(talk2));
}
So when two overlapping talks share only one content tag and overlap by 60 minutes, the match weight is 60, so the score is impacted by -6000soft. But when two overlapping talks share three content tags and overlap by 60 minutes, the match weight is 180, so the score is impacted by -18000soft.
Procedure
1. Create a new class to hold the constraint weights and other constraint parameters, for example ConferenceConstraintConfiguration, and annotate this class with @ConstraintConfiguration:

   @ConstraintConfiguration
   public class ConferenceConstraintConfiguration {
       ...
   }

2. Add the constraint configuration on the planning solution and annotate that field or property with @ConstraintConfigurationProvider:

   @PlanningSolution
   public class ConferenceSolution {

       @ConstraintConfigurationProvider
       private ConferenceConstraintConfiguration constraintConfiguration;
       ...
   }

   The @ConstraintConfigurationProvider annotation automatically exposes the constraint configuration as a problem fact. There is no need to add a @ProblemFactProperty annotation.

3. In the constraint configuration class, add a @ConstraintWeight property for each constraint and give each constraint weight a default value:

   @ConstraintConfiguration(constraintPackage = "...conferencescheduling.score")
   public class ConferenceConstraintConfiguration {

       @ConstraintWeight("Speaker conflict")
       private HardMediumSoftScore speakerConflict = HardMediumSoftScore.ofHard(10);
       @ConstraintWeight("Theme track conflict")
       private HardMediumSoftScore themeTrackConflict = HardMediumSoftScore.ofSoft(10);
       @ConstraintWeight("Content conflict")
       private HardMediumSoftScore contentConflict = HardMediumSoftScore.ofSoft(100);
       ...
   }

   A constraint weight cannot be null.

4. Expose the constraint weights in a UI so business users can tweak the values. The preceding example uses the ofHard() and ofSoft() methods to do that. Notice how it defaults the content conflict constraint as ten times more important than the theme track conflict constraint. Normally, a constraint weight only uses one score level, but it is possible to use multiple score levels (at a small performance cost).
13.11. Explaining the score
There are several ways to show how the OptaPlanner score is derived. This is called explaining the score:
- Print the return value of getSummary(). This is the easiest way to explain the score during development, but only use this method for diagnostic purposes.
- Use the ScoreManager API in an application or web UI. Break down the score for each constraint for a more granular view.

Procedure
Use one of the following methods to explain the score:
- Print the return value of getSummary():

  System.out.println(scoreManager.getSummary(solution));

  The following conference scheduling example prints that talk S51 is responsible for breaking the hard constraint Speaker required room tag:

  Explanation of score (-1hard/-806soft):
      Constraint match totals:
          -1hard: constraint (Speaker required room tag) has 1 matches:
              -1hard: justifications ([S51])
          -340soft: constraint (Theme track conflict) has 32 matches:
              -20soft: justifications ([S68, S66])
              -20soft: justifications ([S61, S44])
              ...
          ...
      Indictments (top 5 of 72):
          -1hard/-22soft: justification (S51) has 12 matches:
              -1hard: constraint (Speaker required room tag)
              -10soft: constraint (Theme track conflict)
              ...
          ...

  Important: Do not attempt to parse this string or use it in your UI or exposed services. Instead, use the ConstraintMatch API.

- Use the ScoreManager API in an application or web UI. Enter code similar to the following example:

  ScoreManager<CloudBalance, HardSoftScore> scoreManager = ScoreManager.create(solverFactory);
  ScoreExplanation<CloudBalance, HardSoftScore> scoreExplanation = scoreManager.explainScore(cloudBalance);

  Use this code when you need to calculate the score of a solution:

  HardSoftScore score = scoreExplanation.getScore();

  To break down the score by constraint, get the ConstraintMatchTotal values from the ScoreExplanation:

  Collection<ConstraintMatchTotal<HardSoftScore>> constraintMatchTotals =
          scoreExplanation.getConstraintMatchTotalMap().values();
  for (ConstraintMatchTotal<HardSoftScore> constraintMatchTotal : constraintMatchTotals) {
      String constraintName = constraintMatchTotal.getConstraintName();
      // The score impact of that constraint
      HardSoftScore totalScore = constraintMatchTotal.getScore();
      for (ConstraintMatch<HardSoftScore> constraintMatch : constraintMatchTotal.getConstraintMatchSet()) {
          List<Object> justificationList = constraintMatch.getJustificationList();
          HardSoftScore score = constraintMatch.getScore();
          ...
      }
  }

  Each ConstraintMatchTotal represents one constraint and has a part of the overall score. The sum of all the ConstraintMatchTotal.getScore() values equals the overall score.

  Note: Constraint streams and Drools score calculation support constraint matches automatically, but incremental Java score calculation requires implementing an extra interface.
13.12. Visualizing the hot planning entities
Show a heat map in the UI that highlights the planning entities and problem facts that have an impact on the score.
Procedure
Get the Indictment map from the ScoreExplanation:

Map<Object, Indictment<HardSoftScore>> indictmentMap = scoreExplanation.getIndictmentMap();
for (CloudProcess process : cloudBalance.getProcessList()) {
    Indictment<HardSoftScore> indictment = indictmentMap.get(process);
    if (indictment == null) {
        continue;
    }
    // The score impact of that planning entity
    HardSoftScore totalScore = indictment.getScore();
    for (ConstraintMatch<HardSoftScore> constraintMatch : indictment.getConstraintMatchSet()) {
        String constraintName = constraintMatch.getConstraintName();
        HardSoftScore score = constraintMatch.getScore();
        ...
    }
}

Each Indictment is the sum of all constraints where that justification object is involved. The sum of all the Indictment.getScore() values differs from the overall score because multiple Indictment entities can share the same ConstraintMatch.

Note: Constraint streams and Drools score calculation support constraint matches automatically, but incremental Java score calculation requires implementing an extra interface.
13.13. Score constraints testing
Different score calculation types come with different tools for testing. Write a unit test for each score constraint individually to check that it behaves correctly.
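For example, a constraint can be tested in isolation by feeding it hand-crafted inputs and asserting whether it matches. The following framework-free sketch uses a hypothetical Talk record and a plain predicate; with constraint streams, the optaplanner-test module's ConstraintVerifier serves the same purpose against real ConstraintProvider methods:

```java
import java.util.Objects;

// Minimal sketch of testing one constraint in isolation: the Talk record
// and speakerConflict predicate are hypothetical stand-ins for a real
// constraint implementation.
public class SpeakerConflictTest {

    public record Talk(String speaker, int startMinute, int endMinute) {}

    // The constraint under test: two talks by the same speaker overlap in time.
    public static boolean speakerConflict(Talk a, Talk b) {
        return Objects.equals(a.speaker(), b.speaker())
                && a.startMinute() < b.endMinute()
                && b.startMinute() < a.endMinute();
    }
}
```

Testing each constraint individually like this pinpoints exactly which constraint misbehaves, instead of debugging a combined score.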