Chapter 3. Centralized Logging for Core and MBaaS Components

Logging output from RHMAP Core and MBaaS components can be aggregated and accessed through a web console when using a RHMAP Core or MBaaS backed by OpenShift Enterprise 3 (OSEv3).

3.1. Enabling Centralized Logging

Aggregated logging is enabled by deploying an EFK logging stack to your OSEv3 instance. The stack consists of the following components:

  • Elasticsearch indexes log output collected by Fluentd and makes it searchable.
  • Fluentd collects standard output of all containers.
  • Kibana is a web console for querying and visualizing data from Elasticsearch.

To enable this functionality, follow the official OpenShift guide Aggregating Container Logs.
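
The exact deployment steps and parameters depend on your OpenShift version and are described in that guide. As an illustrative sketch only, on OSEv3 releases that ship the logging deployer template, the EFK stack is instantiated with parameters similar to the following (all values shown are placeholders for your environment):

  # Sketch only - follow the official Aggregating Container Logs guide for your release.
  # KIBANA_HOSTNAME here is the host name referenced later in this chapter.
  oc new-app logging-deployer-template \
    --param KIBANA_HOSTNAME=kibana.example.com \
    --param ES_CLUSTER_SIZE=1 \
    --param PUBLIC_MASTER_URL=https://<openshift-master-hostname>:8443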

3.2. Accessing Logs Through Kibana Web Console

The Kibana web console is where logs gathered by Fluentd and indexed by Elasticsearch can be viewed and queried. You can access the Kibana web console through the OpenShift web console, or directly at the URL defined by the KIBANA_HOSTNAME parameter in the deployment procedure.

3.2.1. Viewing Logs of a Single Pod

If you have configured loggingPublicURL in section 28.5.4 of the deployment procedure, the OpenShift web console allows you to view the log archive of a particular pod.

  1. In the OpenShift web console, select a project you are interested in.
  2. Click on the Pods circle of the specific service.
  3. Choose one of the pods to inspect.
  4. Click on the Logs tab.
  5. Click on the View Archive button at the top right corner to access the logs of the chosen pod in the Kibana web console.
Note

By default, Kibana’s time filter shows the last 15 minutes of data. If you don’t see any values, adjust the Time filter setting to a broader time interval.

3.2.2. Accessing Kibana Directly

You can access the Kibana web console directly at https://KIBANA_HOSTNAME, where KIBANA_HOSTNAME is the host name you set in step 4 of the deployment procedure.
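
If you are unsure of the host name, you can also look it up from the Kibana route in the logging project. This is a minimal sketch, assuming you have oc access to the cluster; replace <logging-project-name> with the name of your logging project:

  # List the routes in the logging project; the Kibana route shows its host name.
  oc get routes -n <logging-project-name>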

3.2.3. Configuring an Index Pattern

When accessing the Kibana web console directly for the first time, you are presented with the option to configure an index pattern. You can also access this configuration screen in the Settings tab.

For MBaaS deployments, there is an index pattern in the format <MBaaS ID>-mbaas.*, matching the ID of the deployed MBaaS target.

For RHMAP Core deployment, there is an index pattern core.*.

To make queries more efficient, you can restrict the index pattern by date and time.

  1. Select the Use event times to create index names option.
  2. Enter a pattern in the Index name or pattern text field, for example:

    [onprem-mbaas.]YYYY.MM.DD
  3. Below the input field, you will see output similar to the following:

    Pattern matches 100% of existing indices and aliases
    onprem-mbaas.2016.02.04
    onprem-mbaas.2016.02.05
  4. Click Create to create the index based on this pattern.
  5. You can now select this newly created index in the Discover tab when doing searches, as well as in other parts, such as the Visualizations tab.

3.3. Tracking Individual Requests in Logs

Every request to the RHMAP platform has a unique internal identifier assigned, which helps in identifying the sequence of events in Core and MBaaS components triggered by the request.

For example, if a user is deploying a form to an environment, the ID in the logging statements resulting from the request will be identical.

Note

Only requests from fhc and Studio get an identifier assigned, not requests from mobile applications to an MBaaS.

Search for log statements related to a specific request in one of the following ways:

  • Using Kibana

    • Filter by the reqId field, for example reqId:"8f248216-c7e4-4898-a833-c98519cf4aea".
    • Use the .all index to search in logs from components of both Core and MBaaS.

    Figure: Failed Request Kibana Search

  • Using fhc

    1. Enable fhc to access the logging data, as described in Section 3.8, “Enabling fhc to Access Centralized Logs”.
    2. Use the admin logs syslogs command of fhc:

      fhc admin logs syslogs --requestId 559d8f74-32d2-4c6e-b1a2-b46d2993e874 --projects="core,mbaas"

      Set --projects to a comma-separated list of OpenShift project names to search in.

3.4. Identifying Issues in a RHMAP Core

If you encounter unexpected errors in the RHMAP Core UI, you can use Kibana’s Discover tab to find the root of the problem. Every request that the RHMAP Core UI sends has a unique identifier that can be used to gather the relevant logs. The following steps describe the procedure:

  1. Identify the request ID associated with the failed request you want to investigate

    Errors in the platform usually manifest in the UI as a notification pop-up containing the URL endpoint that the failed request was targeting, the error message, and the Request ID. Take note of the Request ID.

    Figure: Failed Request Notification Pop-Up

  2. Query for the relevant logs in Kibana

    Log in to your Kibana instance and go to the Discover tab. Enter a query in the form reqId:"<request-id>", using the Request ID noted in the previous step, and you should see all of the logs related to the failing request. An example query is shown after the field list below.

    Useful fields to display include:

    • msg
    • message
    • kubernetes_container_name
    • level
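
    For example, to narrow the logs of a failing request down to error-level entries only, a query like the following sketch can be used. The Request ID is the example value used earlier in this chapter, and level 50 is the Bunyan ERROR level listed in Section 3.6:

      reqId:"8f248216-c7e4-4898-a833-c98519cf4aea" && level: 50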

3.5. Identifying Issues in an MBaaS

If you suspect that an error of an MBaaS component may be the cause of an issue, you can use Kibana’s Discover tab to find the root of the problem. The following steps describe the general procedure you can follow to identify issues.

  1. Select the index for the MBaaS target you are interested in

    Use the dropdown just below the input bar in the Discover view to list all available indices. An index is similar to a database in relational database systems. Select which index your searches will be performed against.

  2. Select a time interval for your search

    Click the Time Filter (clock icon) and adjust the time interval. Initially, try a broader search.

  3. Perform a simple search

    To search for all error events, perform a simple search for error in the search field of the Discover view. This will return the number of hits within the chosen time interval. An example of a more specific query is shown after these steps.

  4. Select the msg or message field to be displayed

    On the left hand side of the Discover view is a list of fields. From this list you can select fields to display in the document data section. Selecting a field replaces the _source field in the document data view. This enables you to see any error messages and might help you refine your original search if needed. You can also select more fields to help you locate the issue.
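
For example, to restrict a search to error-level Bunyan messages from a single MBaaS component, a query similar to the following sketch can be used. The container name fh-mbaas is only an example; substitute the component you are interested in:

    type: bunyan && level: 50 && kubernetes_container_name: "fh-mbaas"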

3.6. Viewing All Debug Logs for a Component

If searching for error messages doesn’t help, you can try looking into the debug logs of individual components.

  1. Select the index for the target that you are interested in
  2. Start a new search

    Click the New Search button (a document icon with a plus sign) to the left of the search input bar.

  3. Search a component for all debug messages

    For example, to search for all debug messages of the fh-messaging component, enter the following query:

    type: bunyan && level: 20 && kubernetes_container_name: "fh-messaging"

    If you know some part of the error message, you can specify that as part of the search:

    type: bunyan && level: 20 && kubernetes_container_name: "fh-messaging" && "Finished processing"

    You can narrow down your search further by time, as described in Section 3.7, “Analyzing the Search Results”.

    As a reference, the following are the Bunyan log levels:

    TRACE = 10;
    DEBUG = 20;
    INFO = 30;
    WARN = 40;
    ERROR = 50;
    FATAL = 60;

3.7. Analyzing the Search Results

  1. Narrow down the time interval

    The histogram shows search hits returned in the chosen time interval. To narrow down the search in time you have the following options:

    • Click on a bar in the histogram to narrow down the search to that bar’s time interval.
    • Select a time window in the date histogram by clicking and dragging between the start/end time you are interested in.
  2. Inspect the document data

    Once you narrow down the search, you can inspect the document data items. Apart from the msg and message fields, you might be interested in kubernetes_pod_name to see the pod a message originates from.

3.8. Enabling fhc to Access Centralized Logs

To enable the fhc admin logs syslogs feature for searching platform logs by request IDs, configure fh-supercore to have access to Elasticsearch by following the steps in this section.

Note

If fh-supercore is not configured for access to Elasticsearch, running fhc admin logs syslogs yields an error message similar to the following:

FH-SUPERCORE-ERROR - Aggregated Logging is not enabled for this cluster.
  1. Enable centralized logging, as described in Section 3.1, “Enabling Centralized Logging”.
  2. Create a route to allow external access to Elasticsearch.

    1. Log in to your OpenShift cluster

      oc login <url-of-openshift-master>
    2. Select the existing logging project.

      oc project <logging-project-name>
    3. Create a route to allow external access to Elasticsearch. Replace the values in angle brackets as appropriate for your environment.

      oc create route passthrough --service=<elasticsearch-route-name> --hostname=<elasticsearch-hostname>.<openshift-master-hostname>
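
      To confirm that the route was created and exposes the expected host name, you can list it as a quick sanity check (the route name is the one chosen above):

      oc get route <elasticsearch-route-name>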
  3. Create a secret for fh-supercore.

    To read from Elasticsearch, fh-supercore uses the existing Kibana credentials. The Kibana certificate and key can be read from the existing secret and decoded.

    1. Read the secret and output to JSON format.

      oc get secret logging-kibana -o json

      This will output a base64-encoded representation of the certificate in "data.cert" and the key in "data.key". Decode these values to create a plain-text key and certificate in a temporary directory, substituting them into the commands below. An alternative one-step approach is shown after these steps.

    2. Decode the key and output it to the /tmp directory, or to another location of your choice.

      echo "<contents-of-data.key>" | base64 --decode > /tmp/supercoreKey.key
    3. Decode the certificate.

      echo "<contents-of-data.cert>" | base64 --decode > /tmp/supercoreCert.crt
    4. Switch to the Core project.

      oc project <core-project-name>
    5. Create a secret for fh-supercore that will use the Kibana credentials to perform searches.

      oc secrets new <core-secret-name> key=/tmp/supercoreKey.key crt=/tmp/supercoreCert.crt

      A new secret called <core-secret-name> is created in the Core project, as specified above.
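
    As an alternative to steps 1 to 3 above (reading and decoding the secret manually), the key and certificate can each be extracted in a single step, provided your oc client supports JSONPath output. This is a sketch only; like those steps, it must be run while the logging project is selected, before switching to the Core project:

      # Extract and decode the Kibana key and certificate directly from the secret.
      oc get secret logging-kibana -o jsonpath='{.data.key}' | base64 --decode > /tmp/supercoreKey.key
      oc get secret logging-kibana -o jsonpath='{.data.cert}' | base64 --decode > /tmp/supercoreCert.crt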

  4. Update the deployment configuration of fh-supercore.

    1. Open the editor for the fh-supercore deployment configuration.

      oc edit dc fh-supercore
    2. Set the following environment variables:

      Name                         Value
      -------------------------    -------------------------------------------------------------
      FH_ES_LOGGING_ENABLED        true
      FH_ES_LOGGING_HOST           https://<elasticsearch-hostname>.<openshift-master-hostname>
      FH_ES_LOGGING_KEY_PATH       /etc/fh/es-keys/key
      FH_ES_LOGGING_CERT_PATH      /etc/fh/es-keys/crt
      FH_ES_LOGGING_API_VERSION    1.5 (the version of Elasticsearch used by OpenShift 3.2)
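
      Instead of setting these variables in the editor, they could also be set from the command line. The following is a sketch only; depending on your oc client version, the command is either oc env or oc set env:

      oc env dc/fh-supercore \
        FH_ES_LOGGING_ENABLED=true \
        FH_ES_LOGGING_HOST=https://<elasticsearch-hostname>.<openshift-master-hostname> \
        FH_ES_LOGGING_KEY_PATH=/etc/fh/es-keys/key \
        FH_ES_LOGGING_CERT_PATH=/etc/fh/es-keys/crt \
        FH_ES_LOGGING_API_VERSION=1.5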

      For example, if <core-secret-name> was supercore-elasticsearch, the secret would be mounted into the fh-supercore container as follows:

      spec:
        template:
          spec:
            volumes:
              -
                name: supercore-elasticsearch-volume
                secret:
                  secretName: supercore-elasticsearch
            containers:
              -
                name: fh-supercore
                volumeMounts:
                  -
                    name: supercore-elasticsearch-volume
                    readOnly: true
                    mountPath: /etc/fh/es-keys
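
      After saving the deployment configuration, a new deployment of fh-supercore is rolled out. As a quick check (a sketch, assuming the default configuration-change trigger is in place), you can list the environment variables on the deployment configuration and then verify the feature end to end with fhc:

      # List the environment variables configured on fh-supercore.
      oc env dc/fh-supercore --list

      # From a workstation with fhc configured, search the logs for a request ID.
      fhc admin logs syslogs --requestId <request-id> --projects="core,mbaas"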