5.4. Score Calculation Performance Tricks
5.4.1. Overview
The Solver will normally spend most of its execution time running the score calculation, which is called in its deepest loops. Faster score calculation returns the same solution in less time with the same algorithm, which normally means a better solution in equal time.
5.4.2. Average Calculation Count Per Second
After solving a problem, the Solver will log the average calculation count per second. This is a good measurement of score calculation performance, even though it is also affected by non-score-calculation execution time. It depends on the problem scale of the dataset. Normally, even for high scale problems, it is higher than 1000, except when you're using an EasyScoreCalculator.
Important
When improving your score calculation, focus on maximizing the average calculation count per second, instead of maximizing the best score. A big improvement in score calculation can sometimes yield little or no best score improvement, for example when the algorithm is stuck in a local or global optimum. If you're watching the calculation count instead, score calculation improvements are far more visible.
Furthermore, watching the calculation count allows you to remove or add score constraints and still compare the result with the original calculation count. Comparing the best score with the original would be wrong, because it's comparing apples and oranges.
5.4.3. Incremental Score Calculation (with Deltas)
When a Solution changes, incremental score calculation (AKA delta-based score calculation) calculates the delta with the previous state to find the new Score, instead of recalculating the entire score on every solution evaluation.
For example, if a single queen A moves from row 1 to 2, it won't bother to check whether queen B and C can attack each other, since neither of them changed.
Figure 5.1. Incremental Score Calculation for the 4 Queens Puzzle

This is a huge performance and scalability gain. Drools score calculation gives you this scalability gain without forcing you to write a complicated incremental score calculation algorithm. Just let the Drools rule engine do the hard work.
Notice that the speedup is relative to the size of your planning problem (your n), making incremental score calculation far more scalable.
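The delta idea can be sketched in plain Java. This is a hand-rolled, simplified incremental calculator for n queens, purely for illustration; the class and method names are made up and this is not any Planner API:

```java
// Simplified incremental score calculation for n queens (illustrative only).
// rows[col] = row of the queen in that column; score = -(number of attacking pairs).
class NQueensIncremental {
    private final int[] rows;
    private int score; // cached score, kept up to date by deltas

    NQueensIncremental(int[] rows) {
        this.rows = rows.clone();
        this.score = fullScore(rows); // the full O(n^2) calculation happens only once
    }

    // O(n^2) full recalculation, used only at construction (and for verification).
    static int fullScore(int[] rows) {
        int attacks = 0;
        for (int a = 0; a < rows.length; a++) {
            for (int b = a + 1; b < rows.length; b++) {
                if (attacks(a, rows[a], b, rows[b])) attacks++;
            }
        }
        return -attacks;
    }

    private static boolean attacks(int colA, int rowA, int colB, int rowB) {
        // Same row or same diagonal (columns are distinct by construction).
        return rowA == rowB || Math.abs(colA - colB) == Math.abs(rowA - rowB);
    }

    // O(n) delta: only re-check the pairs that involve the moved queen.
    void moveQueen(int col, int newRow) {
        for (int other = 0; other < rows.length; other++) {
            if (other == col) continue;
            if (attacks(col, rows[col], other, rows[other])) score++; // retract old matches
            if (attacks(col, newRow, other, rows[other])) score--;    // insert new matches
        }
        rows[col] = newRow;
    }

    int getScore() {
        return score;
    }
}
```

The move recalculates n - 1 pairs instead of all n·(n-1)/2 pairs, which is where the scalability gain relative to n comes from.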
5.4.4. Avoid Calling Remote Services During Score Calculation
Do not call remote services in your score calculation (except if you're bridging
EasyScoreCalculator to a legacy system). The network latency will kill your score calculation performance. Cache the results of those remote services if possible.
If some parts of a constraint can be calculated once, when the Solver starts, and never change during solving, then turn them into cached problem facts.
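As a sketch of that caching idea (hypothetical class, not a Planner API): precompute a distance matrix once before solving starts, so the score calculation does a cheap lookup instead of repeated math, or worse, a remote geo-service call, in its innermost loop.

```java
// Hypothetical cached problem fact: a distance matrix computed once, before solving.
class DistanceCache {
    private final double[][] distance;

    DistanceCache(double[][] coordinates) { // coordinates[i] = {x, y}
        int n = coordinates.length;
        distance = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                double dx = coordinates[i][0] - coordinates[j][0];
                double dy = coordinates[i][1] - coordinates[j][1];
                distance[i][j] = Math.sqrt(dx * dx + dy * dy); // computed exactly once
            }
        }
    }

    // Called from the score calculation's innermost loop: O(1), no recomputation.
    double between(int from, int to) {
        return distance[from][to];
    }
}
```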
5.4.5. Pointless Constraints
If you know a certain constraint can never be broken (or is always broken), don't bother writing a score constraint for it. For example, in n queens the score calculation doesn't check whether multiple queens occupy the same column, because a Queen's column never changes and every Solution starts with each Queen in a different column.
Note
Don't go overboard with this. If some datasets don't use a specific constraint but others do, just return out of the constraint as soon as you can. There is no need to dynamically change your score calculation based on the dataset.
5.4.6. Built-in Hard Constraint
Instead of implementing a hard constraint, it can sometimes be built in. For example: Lecture A should never be assigned to Room X, but because it uses a ValueRangeProvider on the Solution, the Solver will often try to assign it to Room X anyway (only to find out that this breaks a hard constraint). Use a ValueRangeProvider on the planning entity or filtered selection to define that Lecture A should only be assigned a Room other than X.
This can give a good performance gain in some use cases, not just because the score calculation is faster, but mainly because most optimization algorithms will spend less time evaluating infeasible solutions. However, this is usually not a good idea, because there is a real risk of trading short term benefits for long term harm:
- Many optimization algorithms rely on the freedom to break hard constraints when changing planning entities, to get out of local optima.
- Both implementation approaches have limitations (feature compatibility, disabling automatic performance optimizations, ...), as explained in their documentation.
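The filtering idea behind a built-in hard constraint can be sketched like this (the class and method names are hypothetical, not a Planner API): build each lecture's value range so that forbidden rooms are never offered to the Solver in the first place.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch: a per-entity value range that already excludes forbidden rooms,
// so the Solver never wastes a move evaluating them.
class RoomRange {
    static List<String> allowedRooms(List<String> allRooms, Set<String> forbiddenForLecture) {
        return allRooms.stream()
                .filter(room -> !forbiddenForLecture.contains(room)) // built-in hard constraint
                .collect(Collectors.toList());
    }
}
```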
5.4.7. Other Score Calculation Performance Tricks
- Verify that your score calculation happens in the correct Number type. If you're summing int values, don't let Drools sum them in a double, which takes longer.
- For optimal performance, always use server mode (java -server). We have seen performance increases of 50% by turning on server mode.
- For optimal performance, use the latest Java version. For example, in the past we have seen performance increases of 30% by switching from Java 1.5 to 1.6.
- Always remember that premature optimization is the root of all evil. Make sure your design is flexible enough to allow configuration-based tweaking.
5.4.8. Score Trap
Make sure that none of your score constraints cause a score trap. A trapped score constraint uses the same weight for different constraint matches, when it could just as easily use a different weight. It effectively lumps its constraint matches together, which creates a flatlined score function for that constraint. This can cause a solution state in which several moves need to be done to resolve or lower the weight of that single constraint. Some examples of score traps:
- If you need 2 doctors at each table, but you're only moving 1 doctor at a time, the solver has no incentive to move a doctor to a table with no doctors. Punish a table with no doctors more than a table with only 1 doctor in that score constraint.
- If 2 exams need to be conducted at the same time, but you're only moving 1 exam at a time, the solver has to move one of those exams to another timeslot without moving the other in the same move. Add a coarse-grained move that moves both exams at the same time.
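The doctor-table trap can be sketched by contrasting a flat penalty with a graded one. The class, weights, and array encoding below are illustrative, not from any real model:

```java
// Contrast a trapped (flat) penalty with a graded one for "2 doctors per table".
// doctorsPerTable[i] = number of doctors currently seated at table i.
class DoctorConstraint {
    static final int REQUIRED = 2;

    // Score trap: the same -1 penalty whether 1 or 2 doctors are missing,
    // so moving a doctor to an empty table doesn't improve the score.
    static int flatScore(int[] doctorsPerTable) {
        int score = 0;
        for (int doctors : doctorsPerTable) {
            if (doctors < REQUIRED) score -= 1; // flatlined: no gradient
        }
        return score;
    }

    // Graded: penalize each missing doctor, so every step toward a
    // fully staffed table immediately improves the score.
    static int gradedScore(int[] doctorsPerTable) {
        int score = 0;
        for (int doctors : doctorsPerTable) {
            score -= Math.max(0, REQUIRED - doctors);
        }
        return score;
    }
}
```

With tables at {0, 3} doctors, moving one doctor to the empty table gives {1, 2}: the flat score stays at -1 (no incentive), while the graded score improves from -2 to -1.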
For example, consider this score trap. If the blue item moves from an overloaded computer to an empty computer, the hard score should improve. The trapped score implementation fails to do that:

The Solver should eventually get out of this trap, but it will take a lot of effort (especially if there are even more processes on the overloaded computer). Before it does, it might actually start moving more processes onto that overloaded computer, as there is no penalty for doing so.
Note
Avoiding score traps does not mean that your score function should be smart enough to avoid local optima. Leave it to the optimization algorithms to deal with the local optima.
Avoiding score traps means to avoid - for each score constraint individually - a flatlined score function.
Important
Always specify the degree of infeasibility. The business will often say: "if the solution is infeasible, it doesn't matter how infeasible it is." While that's true for the business, it's not true for score calculation: it benefits from knowing how infeasible a solution is. In practice, soft constraints usually do this naturally and it's just a matter of doing it for the hard constraints too.
There are several ways to deal with a score trap:
- Improve the score constraint to make a distinction in the score weight. For example: penalize -1hard for every missing CPU, instead of just -1hard if any CPU is missing.
- If changing the score constraint is not allowed from the business perspective, add a lower score level with a score constraint that makes such a distinction. For example: penalize -1subsoft for every missing CPU, on top of -1hard if any CPU is missing. The business ignores the subsoft score level.
- Add coarse-grained moves and union select them with the existing fine-grained moves. A coarse-grained move effectively does multiple moves to directly get out of a score trap with a single move. For example: move multiple items from the same container to another container.
5.4.9. stepLimit Benchmark
Not all score constraints have the same performance cost. Sometimes 1 score constraint can kill the score calculation performance outright. Use the Benchmarker to do a 1 minute run and check what happens to the average calculation count per second if you comment out all but 1 of the score constraints.
5.4.10. Fairness Score Constraints
Some use cases have a business requirement to provide a fair schedule (usually as a soft score constraint), for example:
- Fairly distribute the workload amongst the employees, to avoid envy.
- Evenly distribute the workload amongst assets, to improve reliability.
Implementing such a constraint can seem difficult (especially because there are different ways to formalize fairness), but usually the squared workload implementation behaves most desirably: for each employee/asset, count the workload w and subtract w² from the score.

As shown above, the squared workload implementation guarantees that if you select 2 employees from a given solution and make their distribution between those 2 employees fairer, then the resulting new solution will have a better overall score. Don't just use the difference from the average workload, as that can lead to unfairness, as demonstrated below.
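The squared workload idea can be sketched in a few lines (illustrative helper, not a Planner API):

```java
// Squared-workload fairness sketch: for each employee/asset, subtract workload² from the score.
class Fairness {
    static int squaredWorkloadScore(int[] workloads) {
        int score = 0;
        for (int w : workloads) {
            score -= w * w; // heavier loads are penalized superlinearly
        }
        return score;
    }
}
```

With a total workload of 8, the fair split {4, 4} scores -32, while the unfair split {6, 2} scores -40, so any move that makes the distribution between two employees fairer improves the score.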

Note
Instead of the squared workload, it's also possible to use the variance (squared difference to the average) or the standard deviation (square root of the variance). This has no effect on the score comparison, because the average won't change during planning. It's just more work to implement (because the average needs to be known) and slightly slower (because the calculation is a bit longer).
When the workload is perfectly balanced, the user often likes to see a 0 score, instead of the distracting -34soft in the image above (for the last solution, which is almost perfectly balanced). To nullify this, either add the average multiplied by the number of entities to the score, or instead show the variance or standard deviation in the UI.
