11.3. Test Results

Figure 11.1. Throughput comparison of the four configurations (transactions per second)

The above graph compares the throughput, in transactions per second, of four configurations running the same workload. The baseline, with no optimizations, is the “all” configuration, unmodified except for deploying the application and its data source. As discussed in the book, the “all” configuration is the basis for the production configuration.

The next test used the “production” configuration, again with nothing modified except for the application and data source being deployed, just like the “all” configuration. As you can see from the results, the changes made to the production configuration had a very positive effect on throughput. In fact, throughput is 39.13% higher at the peaks, and that was achieved simply by deploying the application into a different configuration.

The third result uses the “production” configuration again, but with the heap size set to 12GB to match the fully optimized test run. This test demonstrates that making the heap larger does not necessarily account for much of a performance gain on its own, since throughput in this test was only 4.14% higher. An increase in heap size must be matched with changes in garbage collection and large page memory. Without those optimizations, a larger heap may help, but not by as much as might be expected.

The fully optimized result is the best of all, with throughput 77.82% higher than the baseline.
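As a rough sketch of what “matching” a larger heap with garbage collection and large page settings can look like, the following HotSpot JVM options pair a 12GB heap with an explicit collector choice and large pages. The specific GC flag shown is illustrative, not the book's exact configuration, and flag names vary by JVM vendor and version:

```shell
# Illustrative HotSpot options: a fixed 12GB heap combined with an
# explicit garbage collector and large page memory. The collector
# chosen here is an example; the right one depends on the workload.
JAVA_OPTS="-Xms12g -Xmx12g \
  -XX:+UseParallelOldGC \
  -XX:+UseLargePages"
```

Note that `-XX:+UseLargePages` only takes effect if the operating system has large (huge) pages configured and available; on Linux this typically means reserving them ahead of time (for example via the `vm.nr_hugepages` kernel setting).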

Figure 11.2. Response time comparison of the four configurations

Optimizations also made a significant difference in response times. Not only is the fully optimized configuration the fastest by a wide margin, but its slope is also shallower, showing the improved scalability of the fully optimized result. At the peak throughput level, the fully optimized configuration's response time is 45.45% lower than the baseline's. At the 100-user level, the response time is 78.26% lower!