14.4. Summary Statistics
14.4.1. Best Score Summary (Graph and Table)
Shows the best score per inputSolutionFile for each solver configuration.
Useful for visualizing the best solver configuration.
Figure 14.1. Best Score Summary Statistic
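For example, a benchmark configuration along the following lines produces one entry per inputSolutionFile for each solver configuration in this summary. This is only a sketch: the file paths and solver configuration names are illustrative, not taken from this guide.

    <plannerBenchmark>
      <benchmarkDirectory>local/data/nqueens</benchmarkDirectory>
      <inheritedSolverBenchmark>
        <problemBenchmarks>
          <!-- Each inputSolutionFile becomes one entry in the graph and table -->
          <inputSolutionFile>data/nqueens/unsolved/32queens.xml</inputSolutionFile>
          <inputSolutionFile>data/nqueens/unsolved/64queens.xml</inputSolutionFile>
        </problemBenchmarks>
      </inheritedSolverBenchmark>
      <!-- Each solverBenchmark becomes one series in the summary -->
      <solverBenchmark>
        <name>Tabu Search</name>
        <solver>...</solver>
      </solverBenchmark>
      <solverBenchmark>
        <name>Late Acceptance</name>
        <solver>...</solver>
      </solverBenchmark>
    </plannerBenchmark>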

14.4.2. Best Score Scalability Summary (Graph)
Shows the best score per problem scale for each solver configuration.
Useful for visualizing the scalability of each solver configuration.
The problem scale is reported as 0 if any @ValueRangeProvider method signature returns a ValueRange (instead of a CountableValueRange or a Collection). See ValueRangeFactory for the difference.
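As a hedged illustration (the class, method, and id names below are made up), a value range provider that returns a CountableValueRange keeps the problem scale calculable, while one that returns a plain ValueRange does not:

    import org.optaplanner.core.api.domain.valuerange.CountableValueRange;
    import org.optaplanner.core.api.domain.valuerange.ValueRange;
    import org.optaplanner.core.api.domain.valuerange.ValueRangeFactory;
    import org.optaplanner.core.api.domain.valuerange.ValueRangeProvider;

    public class MySolution {

        // Countable: the benchmarker can derive the problem scale from this range.
        @ValueRangeProvider(id = "delayRange")
        public CountableValueRange<Integer> getDelayRange() {
            return ValueRangeFactory.createIntValueRange(0, 5000);
        }

        // Not countable: a plain ValueRange (here a continuous double range)
        // makes the reported problem scale fall back to 0.
        @ValueRangeProvider(id = "ratioRange")
        public ValueRange<Double> getRatioRange() {
            return ValueRangeFactory.createDoubleValueRange(0.0, 1.0);
        }
    }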
14.4.3. Best Score Distribution Summary (Graph)
Shows the best score distribution per inputSolutionFile for each solver configuration.
Useful for visualizing the reliability of each solver configuration.
Figure 14.2. Best Score Distribution Summary Statistic

Enable statistical benchmarking to use this summary.
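A minimal sketch of how that might look in the benchmark configuration, assuming the subSingleCount element (the count of 10 is arbitrary):

    <plannerBenchmark>
      ...
      <inheritedSolverBenchmark>
        ...
        <!-- Run each solver configuration 10 times per inputSolutionFile -->
        <subSingleCount>10</subSingleCount>
      </inheritedSolverBenchmark>
      ...
    </plannerBenchmark>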
14.4.4. Winning Score Difference Summary (Graph and Table)
Shows the winning score difference per inputSolutionFile for each solver configuration. The winning score difference is the difference between a solver configuration's score and the score of the winning solver configuration for that particular inputSolutionFile.
Useful for zooming in on the results of the best score summary.
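For example (with illustrative scores), if the winning solver configuration reaches 0hard/-130soft on a dataset and another solver configuration reaches 0hard/-150soft, the second configuration's winning score difference is 0hard/-20soft, while the winner's is 0hard/0soft.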
14.4.5. Worst Score Difference Percentage (ROI) Summary (Graph and Table)
Shows the return on investment (ROI) per inputSolutionFile for each solver configuration: the improvement gained by upgrading from the worst solver configuration for that particular inputSolutionFile.
Useful for visualizing the return on investment (ROI) to decision makers.
14.4.6. Average Calculation Count Summary (Graph and Table)
Shows the score calculation speed: the average calculation count per second per problem scale for each solver configuration.
Useful for comparing different score calculators and/or score rule implementations (presuming that the solver configurations do not differ otherwise). Also useful to measure the scalability cost of an extra constraint.
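For example, two solver benchmarks that differ only in their scoreDirectorFactory isolate the score calculation implementation; the class and DRL resource names below are placeholders:

    <solverBenchmark>
      <name>Easy Java score calculator</name>
      <solver>
        <scoreDirectorFactory>
          <easyScoreCalculatorClass>com.example.MyEasyScoreCalculator</easyScoreCalculatorClass>
        </scoreDirectorFactory>
        ...
      </solver>
    </solverBenchmark>
    <solverBenchmark>
      <name>Drools score rules</name>
      <solver>
        <scoreDirectorFactory>
          <scoreDrl>com/example/myScoreRules.drl</scoreDrl>
        </scoreDirectorFactory>
        ...
      </solver>
    </solverBenchmark>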
14.4.7. Time Spent Summary (Graph and Table)
Shows the time spent per inputSolutionFile for each solver configuration. This is pointless when benchmarking against a fixed time limit.
Useful for visualizing the performance of construction heuristics (presuming that no other solver phases are configured).
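For instance, a solver benchmark that configures only a construction heuristic phase keeps the measured time limited to that phase (the name and heuristic type are just an example):

    <solverBenchmark>
      <name>First Fit Decreasing</name>
      <solver>
        <constructionHeuristic>
          <constructionHeuristicType>FIRST_FIT_DECREASING</constructionHeuristicType>
        </constructionHeuristic>
        <!-- No other phases, so the solver ends when the construction heuristic finishes -->
      </solver>
    </solverBenchmark>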
14.4.8. Time Spent Scalability Summary (Graph)
Shows the time spent per problem scale for each solver configuration. This is pointless when benchmarking against a fixed time limit.
Useful for extrapolating the scalability of construction heuristics (presuming that no other solver phases are configured).
14.4.9. Best Score Per Time Spent Summary (Graph)
Shows the best score per time spent for each solver configuration. This is pointless when benchmarking against a fixed time limit.
Useful for visualizing the trade-off between the best score and the time spent for construction heuristics (presuming that no other solver phases are configured).
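One way to avoid benchmarking against a fixed time limit is to terminate on a target score instead, so this summary shows how quickly each solver configuration reaches it. A hedged sketch (the score value is illustrative):

    <solver>
      ...
      <termination>
        <!-- Stop when this score is reached, instead of after a fixed amount of time -->
        <bestScoreLimit>0hard/-100soft</bestScoreLimit>
      </termination>
      ...
    </solver>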
