

14.7. Advanced Benchmarking
14.7.1. Benchmarking Performance Tricks
14.7.1.1. Parallel Benchmarking On Multiple Threads
If you have multiple processors available on your computer, you can run multiple benchmarks in parallel on multiple threads to get your benchmark results faster:
<plannerBenchmark>
  ...
  <parallelBenchmarkCount>AUTO</parallelBenchmarkCount>
  ...
</plannerBenchmark>
Warning
Running too many benchmarks in parallel will negatively affect the benchmark results. Leave some processors unused for garbage collection and other processes.
We tweak parallelBenchmarkCount AUTO to maximize the reliability and efficiency of the benchmark results.
The following parallelBenchmarkCount values are supported:
- 1 (default): Run all benchmarks sequentially.
- AUTO: Let Planner decide how many benchmarks to run in parallel. This formula is based on experience. It's recommended to prefer this over the other parallel enabling options.
- Static number: The number of benchmarks to run in parallel.
<parallelBenchmarkCount>2</parallelBenchmarkCount>
- JavaScript formula: Formula for the number of benchmarks to run in parallel. It can use the variable availableProcessorCount. For example:
<parallelBenchmarkCount>(availableProcessorCount / 2) + 1</parallelBenchmarkCount>
Note
The parallelBenchmarkCount is always limited to the number of available processors. If it's higher, it will be automatically decreased.
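As a rough illustration only (this is not Planner's actual implementation), resolving a configured parallelBenchmarkCount might look like the following sketch; the AUTO formula here reuses the (availableProcessorCount / 2) + 1 example above and the clamping mirrors this note:
// Illustrative sketch: the real AUTO formula is internal to Planner and may differ.
public class ParallelBenchmarkCountResolver {

    public static int resolve(String parallelBenchmarkCount) {
        int availableProcessorCount = Runtime.getRuntime().availableProcessors();
        int resolvedCount;
        if ("AUTO".equals(parallelBenchmarkCount)) {
            // Assumed formula, borrowed from the JavaScript example above:
            // leave some processors free for garbage collection and other processes.
            resolvedCount = (availableProcessorCount / 2) + 1;
        } else {
            resolvedCount = Integer.parseInt(parallelBenchmarkCount);
        }
        // A count higher than the number of available processors is automatically decreased.
        return Math.max(1, Math.min(resolvedCount, availableProcessorCount));
    }
}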
Note
If you have a computer with slow or unreliable cooling, increasing the parallelBenchmarkCount above 1 (even on AUTO) may overheat your CPU.
The sensors command can help you detect if this is the case. It is available in the package lm_sensors or lm-sensors in most Linux distributions. There are several freeware tools available for Windows too.
Note
In the future, we will also support multi-JVM benchmarking. This feature is independent of multi-threaded solving or multi-JVM solving.
14.7.2. Statistical Benchmarking
If you want to minimize the influence of your environment on the benchmark results, you can configure the number of times each single benchmark run is repeated. The results of those runs are statistically aggregated, and each individual run also remains visible in the report.
Figure 14.12. Sub Single Benchmark Summary Statistic

To configure this in your benchmarks, add a <subSingleCount> element to an <inheritedSolverBenchmark> element:
<?xml version="1.0" encoding="UTF-8"?>
<plannerBenchmark>
  ...
  <inheritedSolverBenchmark>
    ...
    <solver>
      ...
    </solver>
    <subSingleCount>10</subSingleCount>
  </inheritedSolverBenchmark>
  ...
</plannerBenchmark>
You can also configure subSingleCount in the individual <solverBenchmark> elements. This will override the configuration in the <inheritedSolverBenchmark> element. subSingleCount is set to 1 by default.
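For example, a hypothetical solver benchmark could override the inherited value (the benchmark name and the value 20 below are purely illustrative):
<plannerBenchmark>
  ...
  <inheritedSolverBenchmark>
    ...
    <subSingleCount>10</subSingleCount>
  </inheritedSolverBenchmark>
  <solverBenchmark>
    <name>Tabu Search</name>
    ...
    <!-- Overrides the inherited value of 10 for this solver benchmark only -->
    <subSingleCount>20</subSingleCount>
  </solverBenchmark>
</plannerBenchmark>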
14.7.3. Template Based Benchmarking And Matrix Benchmarking
Matrix benchmarking is benchmarking a combination of value sets. For example: benchmark 4 entityTabuSize values (5, 7, 11 and 13) combined with 3 acceptedCountLimit values (500, 1000 and 2000), resulting in 12 solver configurations.
To reduce the verbosity of such a benchmark configuration, you can use a Freemarker template for the benchmark configuration instead:
<plannerBenchmark>
  ...
  <inheritedSolverBenchmark>
    ...
  </inheritedSolverBenchmark>

<#list [5, 7, 11, 13] as entityTabuSize>
<#list [500, 1000, 2000] as acceptedCountLimit>
  <solverBenchmark>
    <name>entityTabuSize ${entityTabuSize} acceptedCountLimit ${acceptedCountLimit}</name>
    <solver>
      <localSearch>
        <unionMoveSelector>
          <changeMoveSelector/>
          <swapMoveSelector/>
        </unionMoveSelector>
        <acceptor>
          <entityTabuSize>${entityTabuSize}</entityTabuSize>
        </acceptor>
        <forager>
          <acceptedCountLimit>${acceptedCountLimit}</acceptedCountLimit>
        </forager>
      </localSearch>
    </solver>
  </solverBenchmark>
</#list>
</#list>
</plannerBenchmark>
And build it with the class PlannerBenchmarkFactory:
PlannerBenchmarkFactory plannerBenchmarkFactory = PlannerBenchmarkFactory.createFromFreemarkerXmlResource(
        "org/optaplanner/examples/cloudbalancing/benchmark/cloudBalancingBenchmarkConfigTemplate.xml.ftl");
PlannerBenchmark plannerBenchmark = plannerBenchmarkFactory.buildPlannerBenchmark();
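The resulting PlannerBenchmark is then run just like one built from a plain XML configuration, for example:
plannerBenchmark.benchmark();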
14.7.4. Benchmark Report Aggregation
The BenchmarkAggregator takes one or more existing benchmarks and merges them into a new benchmark report, without actually running the benchmarks again.

This is useful to:
- Report on the impact of code changes: Run the same benchmark configuration before and after the code changes, then aggregate a report.
- Report on the impact of dependency upgrades: Run the same benchmark configuration before and after upgrading the dependency, then aggregate a report.
- Condense an overly verbose report: Select only the interesting solver benchmarks from the existing report. This is especially useful on template reports to make the graphs readable.
- Partially rerun a benchmark: Rerun part of an existing report (for example only the failed or invalid solvers), then recreate the original intended report with the new values.
To use it, provide a PlannerBenchmarkFactory to the BenchmarkAggregatorFrame to display the GUI:
public static void main(String[] args) {
    PlannerBenchmarkFactory plannerBenchmarkFactory = PlannerBenchmarkFactory.createFromXmlResource(
            "org/optaplanner/examples/nqueens/benchmark/nqueensBenchmarkConfig.xml");
    BenchmarkAggregatorFrame.createAndDisplay(plannerBenchmarkFactory);
}
Warning
Although it uses a benchmark configuration as input, it ignores all elements of that configuration except the <benchmarkDirectory> and <benchmarkReport> elements.
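In practice, a configuration as minimal as the following sketch can be enough for aggregation purposes (the directory path and locale below are illustrative assumptions, not required values):
<plannerBenchmark>
  <benchmarkDirectory>local/data/nqueens</benchmarkDirectory>
  <benchmarkReport>
    <locale>en_US</locale>
  </benchmarkReport>
</plannerBenchmark>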
In the GUI, select the interesting benchmarks and click the button to generate the report.
Note
All the input reports which are being merged should have been generated with the same Planner version (excluding hotfix differences) as the BenchmarkAggregator. Using reports from different Planner major or minor versions is not guaranteed to succeed and deliver correct information, because the benchmark report data structure often changes.
