Chapter 6. Monitoring your application

This section contains information about monitoring your Eclipse Vert.x–based application running on OpenShift.

6.1. Accessing JVM metrics for your application on OpenShift

6.1.1. Accessing JVM metrics using Jolokia on OpenShift

Jolokia is a built-in, lightweight solution for accessing JMX (Java Management Extensions) metrics over HTTP on OpenShift. It allows you to access CPU, storage, and memory usage data collected by JMX over an HTTP bridge. Jolokia uses a REST interface and JSON-formatted message payloads, which makes it suitable for monitoring cloud applications thanks to its comparatively high speed and low resource requirements.
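
For example, reading a JMX attribute through Jolokia is a single HTTP GET request that returns a JSON payload. The following Java sketch only illustrates the idea: the class name JolokiaReadExample and the MY_POD_IP placeholder are illustrative, and on OpenShift the Jolokia port is normally reached through the authenticated API server proxy rather than directly.

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Scanner;

    public class JolokiaReadExample {

      public static void main(String[] args) throws Exception {
        // Jolokia "read" operation: /jolokia/read/<MBean name>/<attribute>
        // MY_POD_IP is a placeholder for the address of the pod that exposes port 8778.
        URL url = new URL("http://MY_POD_IP:8778/jolokia/read/java.lang:type=Memory/HeapMemoryUsage");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        try (InputStream in = connection.getInputStream();
             Scanner scanner = new Scanner(in).useDelimiter("\\A")) {
          // The response is a JSON document containing the requested attribute value.
          System.out.println(scanner.hasNext() ? scanner.next() : "");
        }
      }
    }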

For Java-based applications, the OpenShift Web console provides the integrated hawt.io console that collects and displays all relevant metrics output by the JVM running your application.

Prerequisites

  • The oc client authenticated
  • A Java-based application container running in a project on OpenShift
  • The latest JDK 1.8.0 image

Procedure

  1. List the deployment configurations of the pods inside your project and select the one that corresponds to your application.

    oc get dc
    NAME         REVISION   DESIRED   CURRENT   TRIGGERED BY
    MY_APP_NAME   2          1         1         config,image(my-app:6)
    ...
  2. Open the YAML deployment template of the pod running your application for editing.

    oc edit dc/MY_APP_NAME
  3. Add the following entry to the ports section of the template and save your changes:

    ...
    spec:
      ...
      ports:
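      # 8778 is the default port used by the Jolokia agent.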
      - containerPort: 8778
        name: jolokia
        protocol: TCP
      ...
    ...
  4. Redeploy the pod running your application.

    oc rollout latest dc/MY_APP_NAME

    The pod is redeployed with the updated deployment configuration and exposes port 8778.

  5. Log in to the OpenShift Web console.
  6. In the sidebar, navigate to Applications > Pods, and click on the name of the pod running your application.
  7. In the pod details screen, click Open Java Console to access the hawt.io console.

6.2. Exposing application metrics using Prometheus with Eclipse Vert.x

Prometheus works on a pull model: the Prometheus server connects to the monitored application and scrapes its metrics endpoint; the application does not push metrics to a server. The following procedure therefore configures your application to expose a metrics endpoint that Prometheus can scrape.

Prerequisites

  • Prometheus server running on your cluster

Procedure

  1. Include the vertx-micrometer-metrics and vertx-web dependencies in the pom.xml file of your application:

    pom.xml

    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-micrometer-metrics</artifactId>
    </dependency>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-web</artifactId>
    </dependency>

  2. Starting with version 3.5.4, exposing metrics for Prometheus requires that you configure the Eclipse Vert.x options in a custom Launcher class.

    In your custom Launcher class, override the beforeStartingVertx and afterStartingVertx methods to configure the metrics engine, for example:

    Example CustomLauncher.java file

    package org.acme;
    
    import io.micrometer.core.instrument.Meter;
    import io.micrometer.core.instrument.config.MeterFilter;
    import io.micrometer.core.instrument.distribution.DistributionStatisticConfig;
    import io.micrometer.prometheus.PrometheusMeterRegistry;
    import io.vertx.core.Launcher;
    import io.vertx.core.Vertx;
    import io.vertx.core.VertxOptions;
    import io.vertx.core.http.HttpServerOptions;
    import io.vertx.micrometer.MicrometerMetricsOptions;
    import io.vertx.micrometer.VertxPrometheusOptions;
    import io.vertx.micrometer.backends.BackendRegistries;
    
    public class CustomLauncher extends Launcher {
    
      @Override
      public void beforeStartingVertx(VertxOptions options) {
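        // Enable the Micrometer metrics with the Prometheus backend and serve the
        // metrics from a dedicated embedded HTTP server on port 8081 at /metrics.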
        options.setMetricsOptions(new MicrometerMetricsOptions()
          .setPrometheusOptions(new VertxPrometheusOptions().setEnabled(true)
            .setStartEmbeddedServer(true)
            .setEmbeddedServerOptions(new HttpServerOptions().setPort(8081))
            .setEmbeddedServerEndpoint("/metrics"))
          .setEnabled(true));
      }
    
      @Override
      public void afterStartingVertx(Vertx vertx) {
        PrometheusMeterRegistry registry = (PrometheusMeterRegistry) BackendRegistries.getDefaultNow();
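        // Publish percentile histograms so that Prometheus receives histogram
        // buckets for timers and distribution summaries.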
        registry.config().meterFilter(
          new MeterFilter() {
            @Override
            public DistributionStatisticConfig configure(Meter.Id id, DistributionStatisticConfig config) {
              return DistributionStatisticConfig.builder()
                .percentilesHistogram(true)
                .build()
                .merge(config);
            }
        });
      }
    }
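
    The Vert.x Maven plugin starts the application with this launcher through the <vertx.launcher> property that you set later in this procedure. If you package the application as a runnable JAR instead, you can add a main method to the CustomLauncher class that dispatches to the Vert.x launcher, for example (a minimal sketch):

      public static void main(String[] args) {
        // Dispatch to the Vert.x launcher so that beforeStartingVertx and
        // afterStartingVertx run before any verticle is deployed.
        new CustomLauncher().dispatch(args);
      }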

  3. Create a custom Verticle class and override the start method to collect metrics. For example, measure the execution time using the Timer class:

    Example CustomVertxApp.java file

    package org.acme;
    
    import io.micrometer.core.instrument.MeterRegistry;
    import io.micrometer.core.instrument.Timer;
    import io.vertx.core.AbstractVerticle;
    import io.vertx.core.Vertx;
    import io.vertx.core.VertxOptions;
    import io.vertx.core.http.HttpServerOptions;
    import io.vertx.micrometer.backends.BackendRegistries;
    
    public class CustomVertxApp extends AbstractVerticle {
    
      @Override
      public void start() {
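        // Reuse the registry that the custom launcher configured as the default backend.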
        MeterRegistry registry = BackendRegistries.getDefaultNow();
        Timer timer = Timer
          .builder("my.timer")
          .description("a description of what this timer does")
          .register(registry);
    
        vertx.setPeriodic(1000, l -> {
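          // Measure how long the work inside the lambda takes; it runs every second.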
          timer.record(() -> {
    
            // Do something
    
          });
        });
      }
    }
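
    Timers are only one of the Micrometer meter types you can register with the same registry. For example, to also count how many times the periodic task runs, you could register a counter in the start method, as in the following sketch (the meter name my.counter is hypothetical; Counter is io.micrometer.core.instrument.Counter):

      Counter counter = Counter
        .builder("my.counter")
        .description("number of times the periodic task has run")
        .register(registry);

      vertx.setPeriodic(1000, l -> {
        // Each increment is exported to Prometheus, typically as my_counter_total.
        counter.increment();
      });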

  4. Set the <vertx.verticle> and <vertx.launcher> properties in the pom.xml file of your application to point to your custom classes:

    <properties>
      ...
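      <!-- The verticle to deploy and the launcher that configures the metrics options -->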
      <vertx.verticle>org.acme.CustomVertxApp</vertx.verticle>
      <vertx.launcher>org.acme.CustomLauncher</vertx.launcher>
      ...
    </properties>
  5. Launch your application:

    $ mvn vertx:run
  6. Verify that the embedded server exposes the metrics:

    $ curl http://localhost:8081/metrics

    The response lists the collected metrics in the Prometheus text format, including the my_timer timer registered in the example verticle.

  7. Wait at least 15 seconds for Prometheus to scrape the metrics endpoint, and view the metrics in the Prometheus UI:

    1. Open the Prometheus UI at http://localhost:9090/ and type my_timer into the Expression box.
    2. From the suggestions, select for example my_timer_seconds_count and click Execute.
    3. In the table that is displayed, you can see how many times the timer recorded an execution.
    4. Alternatively, select my_timer_seconds_sum to see the total time spent in all recorded executions.

    Note that the names of the metrics you create are derived from the meter names you register in your code, such as my.timer in the example above. In addition, Eclipse Vert.x automatically exposes its own metrics, for example for the HTTP server, the event bus, and connection pools. Those metrics are prefixed with vertx_.
