Chapter 3. Benchmarking Data Grid on OpenShift
For Data Grid clusters running on OpenShift, Red Hat recommends using Hyperfoil to measure performance. Hyperfoil is a benchmarking framework that provides accurate performance results for distributed services.
3.1. Benchmarking Data Grid
After you set up and configure your deployment, start benchmarking your Data Grid cluster to analyze and measure performance. Benchmarking shows you where limits exist so you can adjust your environment and tune your Data Grid configuration to get the best performance, which means achieving the lowest latency and highest throughput possible.
Keep in mind that achieving optimal performance is a continual process, not an end state. Even when benchmark tests show that your Data Grid deployment has reached a desired level of performance, you cannot assume those results will remain fixed or always valid as workloads and environments change.
3.2. Installing Hyperfoil
Set up Hyperfoil on Red Hat OpenShift by creating an operator subscription and downloading the Hyperfoil distribution that includes the command line interface (CLI).
Procedure
Create a Hyperfoil Operator subscription through the OperatorHub in the OpenShift Web Console.
Note: The Hyperfoil Operator is available as a Community Operator.
Red Hat does not certify the Hyperfoil Operator and does not provide support for it in combination with Data Grid. When you install the Hyperfoil Operator you are prompted to acknowledge a warning about the community version before you can continue.
- Download the latest Hyperfoil version from the Hyperfoil release page.
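If you prefer the command line to the OpenShift Web Console, the Operator subscription can also be created from a manifest along these lines. This is a sketch: the `channel`, `source`, and namespace values are assumptions, so verify them against the entry for Hyperfoil in your cluster's OperatorHub catalog before applying.

```yaml
# Hypothetical Subscription manifest for the community Hyperfoil Operator.
# Verify channel, source, and namespaces against your OperatorHub catalog.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hyperfoil-bundle
  namespace: openshift-operators
spec:
  channel: alpha
  name: hyperfoil-bundle
  source: community-operators
  sourceNamespace: openshift-marketplace
```

Apply it with `oc apply -f subscription.yaml` and confirm the install plan completes before continuing.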
3.3. Creating a Hyperfoil Controller
Instantiate a Hyperfoil Controller on Red Hat OpenShift so you can upload and run benchmark tests with the Hyperfoil Command Line Interface (CLI).
Prerequisites
- Create a Hyperfoil Operator subscription.
Procedure
Define `hyperfoil-controller.yaml`:

```
$ cat > hyperfoil-controller.yaml<<EOF
apiVersion: hyperfoil.io/v1alpha2
kind: Hyperfoil
metadata:
  name: hyperfoil
spec:
  version: latest
EOF
```
Apply the Hyperfoil Controller.
$ oc apply -f hyperfoil-controller.yaml
Retrieve the route that connects you to the Hyperfoil CLI.
```
$ oc get routes
NAME        HOST/PORT
hyperfoil   hyperfoil-benchmark.apps.example.net
```
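As an alternative to opening the route in a browser, the Hyperfoil distribution you downloaded earlier ships a local CLI that can attach to the controller through the route. The path and host below are examples taken from this procedure, and TLS-terminated routes may require additional connection options:

```shell
# Start the CLI from the unpacked Hyperfoil distribution (example path),
# then attach to the controller using the route host from the previous step.
bin/cli.sh
[hyperfoil]$ connect hyperfoil-benchmark.apps.example.net:80
```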
3.4. Running Hyperfoil benchmarks
Run benchmark tests with Hyperfoil to collect performance data for Data Grid clusters.
Prerequisites
- Create a Hyperfoil Operator subscription.
- Instantiate a Hyperfoil Controller on Red Hat OpenShift.
Procedure
Create a benchmark test.
```
$ cat > hyperfoil-benchmark.yaml<<EOF
name: hotrod-benchmark
hotrod:
# Replace <USERNAME>:<PASSWORD> with your Data Grid credentials.
# Replace <SERVICE_HOSTNAME>:<PORT> with the host name and port for Data Grid.
- uri: hotrod://<USERNAME>:<PASSWORD>@<SERVICE_HOSTNAME>:<PORT>
  caches:
  # Replace <CACHE-NAME> with the name of your Data Grid cache.
  - <CACHE-NAME>
agents:
  agent-1:
  agent-2:
  agent-3:
  agent-4:
  agent-5:
phases:
- rampupPut:
    increasingRate:
      duration: 10s
      initialUsersPerSec: 100
      targetUsersPerSec: 200
    maxSessions: 300
    scenario: &put
    - putData:
      - randomInt: cacheKey <- 1 .. 40000
      - randomUUID: cacheValue
      - hotrodRequest:
          # Replace <CACHE-NAME> with the name of your Data Grid cache.
          put: <CACHE-NAME>
          key: key-${cacheKey}
          value: value-${cacheValue}
- rampupGet:
    increasingRate:
      duration: 10s
      initialUsersPerSec: 100
      targetUsersPerSec: 200
    maxSessions: 300
    scenario: &get
    - getData:
      - randomInt: cacheKey <- 1 .. 40000
      - hotrodRequest:
          # Replace <CACHE-NAME> with the name of your Data Grid cache.
          get: <CACHE-NAME>
          key: key-${cacheKey}
- doPut:
    constantRate:
      startAfter: rampupPut
      duration: 5m
      usersPerSec: 10000
    maxSessions: 11000
    scenario: *put
- doGet:
    constantRate:
      startAfter: rampupGet
      duration: 5m
      usersPerSec: 40000
    maxSessions: 41000
    scenario: *get
EOF
```
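As a rough sanity check before running the test, you can estimate the request volume that the constant-rate phases will generate from the rates and durations in the benchmark. This small sketch reproduces the arithmetic with the values used above:

```python
# Rough request-volume estimate for the constant-rate phases above.
# Hyperfoil starts new sessions at usersPerSec, so over a phase the
# target request count is approximately rate * duration in seconds.
phases = {
    "doPut": {"usersPerSec": 10_000, "duration_s": 5 * 60},
    "doGet": {"usersPerSec": 40_000, "duration_s": 5 * 60},
}

for name, p in phases.items():
    total = p["usersPerSec"] * p["duration_s"]
    print(f"{name}: ~{total:,} requests over {p['duration_s']}s")
# doPut: ~3,000,000 requests; doGet: ~12,000,000 requests.
```

Numbers of this size explain the `maxSessions` headroom (11000 and 41000): sessions must be available faster than requests complete, so the limit sits slightly above `usersPerSec`.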
- Open the route in any browser to access the Hyperfoil CLI.
Upload the benchmark test.
Run the `upload` command:

```
[hyperfoil]$ upload
```
- Click Select benchmark file and then navigate to the benchmark test on your file system and upload it.
Run the benchmark test.
[hyperfoil]$ run hotrod-benchmark
Get results of the benchmark test.
[hyperfoil]$ stats
3.5. Hyperfoil benchmark results
Hyperfoil prints results of the benchmarking run in table format with the `stats` command.
```
[hyperfoil]$ stats
Total stats from run <run_id>
PHASE  METRIC  THROUGHPUT  REQUESTS  MEAN  p50  p90  p99  p99.9  p99.99  TIMEOUTS  ERRORS  BLOCKED
```
Table 3.1. Column descriptions
Column | Description | Value
---|---|---
PHASE | For each run, Hyperfoil makes PUT and GET requests in separate ramp-up and constant-rate phases. | Either `rampupPut`, `doPut`, `rampupGet`, or `doGet`
METRIC | During both phases of the run, Hyperfoil collects metrics for each PUT and GET request. | Either `putData` or `getData`
THROUGHPUT | Captures the total number of requests per second. | Number
REQUESTS | Captures the total number of operations during each phase of the run. | Number
MEAN | Captures the average time for PUT and GET operations to complete. | Time in milliseconds (ms)
p50 | Records the amount of time that it takes for 50 percent of requests to complete. | Time in milliseconds (ms)
p90 | Records the amount of time that it takes for 90 percent of requests to complete. | Time in milliseconds (ms)
p99 | Records the amount of time that it takes for 99 percent of requests to complete. | Time in milliseconds (ms)
p99.9 | Records the amount of time that it takes for 99.9 percent of requests to complete. | Time in milliseconds (ms)
p99.99 | Records the amount of time that it takes for 99.99 percent of requests to complete. | Time in milliseconds (ms)
TIMEOUTS | Captures the total number of timeouts that occurred for operations during each phase of the run. | Number
ERRORS | Captures the total number of errors that occurred during each phase of the run. | Number
BLOCKED | Captures the total number of operations that were blocked or could not complete. | Number
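When reading these columns, the percentile values are often more telling than MEAN: a small number of slow requests barely moves the average but shows up clearly at p99.9. The following self-contained sketch illustrates the effect with a synthetic latency sample (not real Hyperfoil output), using a nearest-rank percentile comparable to the table's columns:

```python
import random
import statistics

# Synthetic latency sample in milliseconds (NOT Hyperfoil output):
# mostly fast requests around 2 ms, plus 100 slow outliers at 50 ms.
random.seed(7)
latencies = [random.gauss(2.0, 0.3) for _ in range(9_900)] + [50.0] * 100

def percentile(data, p):
    # Nearest-rank percentile, comparable to the p50/p90/p99 columns.
    ordered = sorted(data)
    k = max(0, round(p / 100 * len(ordered)) - 1)
    return ordered[k]

mean = statistics.fmean(latencies)
print(f"MEAN  = {mean:.2f} ms")  # barely moved by the outliers
print(f"p50   = {percentile(latencies, 50):.2f} ms")
print(f"p99   = {percentile(latencies, 99):.2f} ms")
print(f"p99.9 = {percentile(latencies, 99.9):.2f} ms")  # lands on the 50 ms outliers
```

This is why a deployment can look healthy on MEAN and p50 while tail-sensitive clients still observe timeouts; compare the high percentiles against your latency targets, not just the average.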