14.2. Benchmark Configuration
14.2.1. Add Dependency On optaplanner-benchmark
The benchmarker is in a separate artifact called optaplanner-benchmark. If you use Maven, add a dependency in your pom.xml file:
<dependency>
  <groupId>org.optaplanner</groupId>
  <artifactId>optaplanner-benchmark</artifactId>
</dependency>
The same applies to Gradle, Ivy and Buildr. The version must be exactly the same as the optaplanner-core version used (which is automatically the case if you import optaplanner-bom). If you use Ant, you have probably already copied the required jars from the download zip's binaries directory.
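For reference, the optaplanner-bom import mentioned above can be sketched as follows (the <version> value is deliberately elided; fill in the Planner version you use):

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.optaplanner</groupId>
      <artifactId>optaplanner-bom</artifactId>
      <version>...</version><!-- The Planner version you use -->
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

With the BOM imported, the optaplanner-benchmark dependency needs no explicit <version> element, so its version cannot drift from optaplanner-core.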
14.2.2. Build And Run A PlannerBenchmark
Build a PlannerBenchmark instance with a PlannerBenchmarkFactory. Configure it with a benchmark configuration XML file, provided as a classpath resource:

PlannerBenchmarkFactory plannerBenchmarkFactory = PlannerBenchmarkFactory.createFromXmlResource(
        "org/optaplanner/examples/nqueens/benchmark/nqueensBenchmarkConfig.xml");
PlannerBenchmark plannerBenchmark = plannerBenchmarkFactory.buildPlannerBenchmark();
plannerBenchmark.benchmark();
A benchmark configuration file looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<plannerBenchmark>
  <benchmarkDirectory>local/data/nqueens</benchmarkDirectory>

  <inheritedSolverBenchmark>
    <problemBenchmarks>
      ...
      <inputSolutionFile>data/nqueens/unsolved/32queens.xml</inputSolutionFile>
      <inputSolutionFile>data/nqueens/unsolved/64queens.xml</inputSolutionFile>
    </problemBenchmarks>
    <solver>
      ...<!-- Common solver configuration -->
    </solver>
  </inheritedSolverBenchmark>

  <solverBenchmark>
    <name>Tabu Search</name>
    <solver>
      ...<!-- Tabu Search specific solver configuration -->
    </solver>
  </solverBenchmark>
  <solverBenchmark>
    <name>Simulated Annealing</name>
    <solver>
      ...<!-- Simulated Annealing specific solver configuration -->
    </solver>
  </solverBenchmark>
  <solverBenchmark>
    <name>Late Acceptance</name>
    <solver>
      ...<!-- Late Acceptance specific solver configuration -->
    </solver>
  </solverBenchmark>
</plannerBenchmark>
This PlannerBenchmark will try three configurations (Tabu Search, Simulated Annealing and Late Acceptance) on two datasets (32queens and 64queens), so it will run six solvers.
Every <solverBenchmark> element contains a solver configuration and one or more <inputSolutionFile> elements. It will run the solver configuration on each of those unsolved solution files. The <name> element is optional, because it is generated if absent. Each inputSolutionFile is read by a SolutionFileIO (relative to the working directory).
Note
Use a forward slash (/) as the file separator (for example in the element <inputSolutionFile>). That works on any platform, including Windows.
Do not use a backslash (\) as the file separator: it breaks portability, because it does not work on Linux and Mac.
The benchmark report will be written in the directory specified in the <benchmarkDirectory> element (relative to the working directory).
Note
It's recommended that the benchmarkDirectory is a directory ignored by source control and not cleaned by your build system. This way the generated files do not bloat your source control and are not lost when doing a build. Usually that directory is called local.
If an Exception or Error occurs in a single benchmark, the benchmarker will not fail fast (unlike everything else in Planner). Instead, it will continue to run all other benchmarks, write the benchmark report and then fail (if there is at least one failing single benchmark). The failing benchmarks are clearly marked as such in the benchmark report.
14.2.2.1. Inherited Solver Benchmark
To lower verbosity, the common parts of multiple <solverBenchmark> elements are extracted into the <inheritedSolverBenchmark> element. Every property can still be overwritten per <solverBenchmark> element. Note that inherited solver phases, such as <constructionHeuristic> or <localSearch>, are not overwritten but are instead added to the tail of the solver phases list.
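For example, in the following sketch (not taken from the configuration above), every benchmark ends up with the inherited construction heuristic phase first, followed by its own local search phase:

```xml
<inheritedSolverBenchmark>
  <solver>
    ...
    <constructionHeuristic><!-- Inherited: runs first in every benchmark -->
      ...
    </constructionHeuristic>
  </solver>
  ...
</inheritedSolverBenchmark>
<solverBenchmark>
  <name>Tabu Search</name>
  <solver>
    <localSearch><!-- Appended after the inherited construction heuristic phase -->
      ...
    </localSearch>
  </solver>
</solverBenchmark>
```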
14.2.3. SolutionFileIO: Input And Output Of Solution Files
14.2.3.1. SolutionFileIO Interface
The benchmarker needs to be able to read the input files to load a Solution. It might also need to write the best Solution of each benchmark to an output file. For that it uses a class that implements the SolutionFileIO interface:
public interface SolutionFileIO {

    String getInputFileExtension();
    String getOutputFileExtension();

    Solution read(File inputSolutionFile);
    void write(Solution solution, File outputSolutionFile);

}
The SolutionFileIO interface is in the optaplanner-persistence-common jar (which is a dependency of the optaplanner-benchmark jar).
14.2.3.2. XStreamSolutionFileIO: The Default SolutionFileIO
By default, a benchmarker uses an XStreamSolutionFileIO instance to read and write solutions.
It's required to tell the benchmarker about your Solution class, which is annotated with XStream annotations:
<problemBenchmarks>
  <xStreamAnnotatedClass>org.optaplanner.examples.nqueens.domain.NQueens</xStreamAnnotatedClass>
  <inputSolutionFile>data/nqueens/unsolved/32queens.xml</inputSolutionFile>
  ...
</problemBenchmarks>
Those input files need to have been written with an XStreamSolutionFileIO instance, not just any XStream instance, because XStreamSolutionFileIO uses a customized XStream instance.
Warning
XStream (and XML in general) is a very verbose format. Reading or writing very large datasets in this format can cause an OutOfMemoryError and performance degradation.
14.2.3.3. Custom SolutionFileIO
Alternatively, implement your own SolutionFileIO and configure it with the solutionFileIOClass element:
<problemBenchmarks>
  <solutionFileIOClass>org.optaplanner.examples.machinereassignment.persistence.MachineReassignmentFileIO</solutionFileIOClass>
  <inputSolutionFile>data/machinereassignment/import/model_a1_1.txt</inputSolutionFile>
  ...
</problemBenchmarks>
It's recommended that output files can be read as input files, which also implies that getInputFileExtension() and getOutputFileExtension() return the same value.
Warning
A SolutionFileIO implementation must be thread-safe.
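A minimal sketch of such a custom implementation follows. The TxtSolutionFileIO class is hypothetical, and the Solution and SolutionFileIO declarations are repeated here only as stand-ins so the sketch compiles on its own; in a real project they come from optaplanner-core and optaplanner-persistence-common.

```java
import java.io.File;

// Stand-ins for the real OptaPlanner types, so this sketch is self-contained.
interface Solution { }

interface SolutionFileIO {
    String getInputFileExtension();
    String getOutputFileExtension();
    Solution read(File inputSolutionFile);
    void write(Solution solution, File outputSolutionFile);
}

// Hypothetical implementation: stateless (and therefore trivially thread-safe),
// with identical input and output extensions so that every written output file
// is also a valid input file.
class TxtSolutionFileIO implements SolutionFileIO {

    @Override
    public String getInputFileExtension() {
        return "txt";
    }

    @Override
    public String getOutputFileExtension() {
        // Same as the input extension, so output files can be read back in.
        return getInputFileExtension();
    }

    @Override
    public Solution read(File inputSolutionFile) {
        // Parse the file into your domain's Solution implementation here.
        throw new UnsupportedOperationException("Sketch only");
    }

    @Override
    public void write(Solution solution, File outputSolutionFile) {
        // Serialize the best solution of a benchmark run here.
        throw new UnsupportedOperationException("Sketch only");
    }
}
```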
14.2.3.4. Reading An Input Solution From A Database (Or Other Repository)
The benchmark configuration currently expects an <inputSolutionFile> element for each dataset. There are two ways to deal with this if your datasets are in a database or another type of repository:
- Extract the datasets from the database and serialize them to local files (for example as XML with XStreamSolutionFileIO). Then use those files as <inputSolutionFile> elements.
- For each dataset, create a txt file that holds the unique id of the dataset. Write a custom SolutionFileIO that reads that identifier, connects to the database and extracts the problem identified by that id. Configure those txt files as <inputSolutionFile> elements.
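The second approach might look like the following sketch. The class and method names are hypothetical, and the actual database access is left as a comment:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

// Hypothetical sketch: each .txt input file holds only the unique id of a
// dataset; a real read() implementation would look that id up in the database.
class DatabaseSolutionFileIO {

    public String getInputFileExtension() {
        return "txt";
    }

    // The txt file contains nothing but the dataset's unique id.
    String readDatasetId(File inputSolutionFile) {
        try {
            return extractId(new String(Files.readAllBytes(inputSolutionFile.toPath())));
        } catch (IOException e) {
            throw new IllegalStateException("Cannot read " + inputSolutionFile, e);
        }
    }

    String extractId(String fileContent) {
        // Strip the trailing newline (and any surrounding whitespace).
        return fileContent.trim();
    }

    // A real implementation would also implement SolutionFileIO, for example:
    // public Solution read(File inputSolutionFile) {
    //     String id = readDatasetId(inputSolutionFile);
    //     return repository.loadProblem(id); // Connect to the database here.
    // }
}
```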
Note
Local files are always faster and don't require a network connection.
14.2.4. Warming Up The HotSpot Compiler
Without a warm up, the results of the first (or first few) benchmarks are not reliable, because they will have lost CPU time on HotSpot JIT compilation (and possibly DRL compilation too).
To avoid that distortion, the benchmarker can run some of the benchmarks for a specified amount of time before running the real benchmarks. Generally, a warm up of 30 seconds suffices:

<plannerBenchmark>
  ...
  <warmUpSecondsSpentLimit>30</warmUpSecondsSpentLimit>
  ...
</plannerBenchmark>
Note
The warm up time budget does not include the time it takes to load the datasets. With large datasets, this can cause the warm up to run considerably longer than specified in the configuration.
14.2.5. Benchmark Blueprint: A Predefined Configuration
To quickly configure and run a benchmark for typical solver configs, use a solverBenchmarkBluePrint instead of solverBenchmarks:
<?xml version="1.0" encoding="UTF-8"?>
<plannerBenchmark>
  <benchmarkDirectory>local/data/nqueens</benchmarkDirectory>
  <warmUpSecondsSpentLimit>30</warmUpSecondsSpentLimit>

  <inheritedSolverBenchmark>
    <problemBenchmarks>
      <xStreamAnnotatedClass>org.optaplanner.examples.nqueens.domain.NQueens</xStreamAnnotatedClass>
      <inputSolutionFile>data/nqueens/unsolved/32queens.xml</inputSolutionFile>
      <inputSolutionFile>data/nqueens/unsolved/64queens.xml</inputSolutionFile>
      <problemStatisticType>BEST_SCORE</problemStatisticType>
    </problemBenchmarks>
    <solver>
      <scanAnnotatedClasses/>
      <scoreDirectorFactory>
        <scoreDefinitionType>SIMPLE</scoreDefinitionType>
        <scoreDrl>org/optaplanner/examples/nqueens/solver/nQueensScoreRules.drl</scoreDrl>
        <initializingScoreTrend>ONLY_DOWN</initializingScoreTrend>
      </scoreDirectorFactory>
      <termination>
        <minutesSpentLimit>1</minutesSpentLimit>
      </termination>
    </solver>
  </inheritedSolverBenchmark>

  <solverBenchmarkBluePrint>
    <solverBenchmarkBluePrintType>EVERY_CONSTRUCTION_HEURISTIC_TYPE_WITH_EVERY_LOCAL_SEARCH_TYPE</solverBenchmarkBluePrintType>
  </solverBenchmarkBluePrint>
</plannerBenchmark>
The following SolverBenchmarkBluePrintTypes are supported:
- EVERY_CONSTRUCTION_HEURISTIC_TYPE: Run every Construction Heuristic type (First Fit, First Fit Decreasing, Cheapest Insertion, ...).
- EVERY_LOCAL_SEARCH_TYPE: Run every Local Search type (Tabu Search, Late Acceptance, ...) with the default Construction Heuristic.
- EVERY_CONSTRUCTION_HEURISTIC_TYPE_WITH_EVERY_LOCAL_SEARCH_TYPE: Run every Construction Heuristic type with every Local Search type.
14.2.6. Write The Output Solution Of Benchmark Runs
The best solution of each benchmark run can be written in the benchmarkDirectory. By default, this is disabled, because the files are rarely used and considered bloat. Also, on large datasets, writing the best solution of each single benchmark can take quite some time and memory (causing an OutOfMemoryError), especially in a verbose format like XStream XML.
To write those solutions in the benchmarkDirectory, enable writeOutputSolutionEnabled:
<problemBenchmarks>
  ...
  <writeOutputSolutionEnabled>true</writeOutputSolutionEnabled>
  ...
</problemBenchmarks>
14.2.7. Benchmark Logging
Benchmark logging is configured like the Solver logging.
To separate the log messages of each single benchmark run into a separate file, use the MDC with key singleBenchmark.name in a sifting appender. For example with Logback in logback.xml:
<appender name="fileAppender" class="ch.qos.logback.classic.sift.SiftingAppender">
  <discriminator>
    <key>singleBenchmark.name</key>
    <defaultValue>app</defaultValue>
  </discriminator>
  <sift>
    <appender name="fileAppender.${singleBenchmark.name}" class="...FileAppender">
      <file>local/log/optaplannerBenchmark-${singleBenchmark.name}.log</file>
      ...
    </appender>
  </sift>
</appender>