Chapter 3. Centralized Logging for Core and MBaaS Components
Logging output from RHMAP Core and MBaaS components can be aggregated and accessed through a web console when using a RHMAP Core or MBaaS backed by OpenShift Enterprise 3 (OSEv3).
3.1. Enabling Centralized Logging
Aggregated logging is enabled by deploying an EFK logging stack to your OSEv3 instance, which consists of the following components:
- Elasticsearch indexes log output collected by Fluentd and makes it searchable.
- Fluentd collects standard output of all containers.
- Kibana is a web console for querying and visualizing data from Elasticsearch.
To enable this functionality, follow the official OpenShift guide Aggregating Container Logs.
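For reference, on some OSEv3 versions the EFK stack is deployed from a logging deployer template. The following invocation is only an illustrative sketch; the template name, parameter names, and example values (the Kibana host name, public master URL, and Elasticsearch cluster size) are assumptions that must be checked against the official guide for your OpenShift version.
# Illustrative sketch only; verify the template and parameters in the official guide.
oc new-app logging-deployer-template \
  --param KIBANA_HOSTNAME=kibana.example.com \
  --param PUBLIC_MASTER_URL=https://master.example.com:8443 \
  --param ES_CLUSTER_SIZE=1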
3.2. Accessing Logs Through Kibana Web Console
The Kibana web console is where logs gathered by Fluentd and indexed by Elasticsearch can be viewed and queried. You can access the Kibana web console through the OpenShift web console, or directly at the URL configured through the KIBANA_HOSTNAME parameter in the deployment procedure.
3.2.1. Viewing Logs of a Single Pod
If you have configured loggingPublicURL in section 28.5.4 of the deployment procedure, the OpenShift web console allows you to view the log archive of a particular pod.
- In the OpenShift web console, select a project you are interested in.
- Click on the Pods circle of the specific service.
- Choose one of the pods to inspect.
- Click on the Logs tab.
- Click on the View Archive button at the top right corner to access the logs of the chosen pod in the Kibana web console.
By default, Kibana’s time filter shows the last 15 minutes of data. If you don’t see any values, adjust the Time filter setting to a broader time interval.
3.2.2. Accessing Kibana Directly
You can access the Kibana web console directly at https://KIBANA_HOSTNAME, where KIBANA_HOSTNAME is the host name you set in step 4 of the deployment procedure.
3.2.3. Configuring an Index Pattern
When accessing the Kibana web console directly for the first time, you are presented with the option to configure an index pattern. You can also access this configuration screen in the Settings tab.
For MBaaS deployments, there is an index pattern in the format <MBaaS ID>-mbaas.*, matching the ID of the deployed MBaaS target.
For RHMAP Core deployment, there is an index pattern core.*.
To make queries more efficient, you can restrict the index pattern by date and time.
- Select the Use event times to create index names option.
- Enter the pattern in the Index name or pattern input text field. For example:
[onprem-mbaas.]YYYY.MM.DD
You will see output similar to the following below the input field:
Pattern matches 100% of existing indices and aliases
onprem-mbaas.2016.02.04
onprem-mbaas.2016.02.05
- Click Create to create the index based on this pattern.
- You can now select this newly created index in the Discover tab when doing searches, as well as in other parts, such as the Visualizations tab.
3.3. Tracking Individual Requests in Logs
Every request to the RHMAP platform has a unique internal identifier assigned, which helps in identifying the sequence of events in Core and MBaaS components triggered by the request.
For example, if a user is deploying a form to an environment, the ID in the logging statements resulting from the request will be identical.
Only requests from fhc and Studio get an identifier assigned, not requests from mobile applications to an MBaaS.
Search for log statements related to a specific request in one of the following ways:
Using Kibana
- Filter by the reqId field. For example, reqId:"8f248216-c7e4-4898-a833-c98519cf4aea".
- Use the .all index to search in logs from components of both Core and MBaaS.

Using fhc

- Enable fhc to access the logging data, as described in Section 3.8, “Enabling fhc to Access Centralized Logs”.
- Use the admin logs syslogs command of fhc:

fhc admin logs syslogs --requestId 559d8f74-32d2-4c6e-b1a2-b46d2993e874 --projects="core,mbaas"

Set --projects to a comma-separated list of OpenShift project names to search in.
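If neither Kibana nor fhc is convenient, the same search can, as a sketch, be run directly against the Elasticsearch API over the route and credentials set up in Section 3.8, “Enabling fhc to Access Centralized Logs”. The host name and the key and certificate paths below are placeholders from that section, and the -k flag may be needed if the route certificate is not trusted by your client.
# Sketch only; the host name and key/certificate paths come from Section 3.8.
curl -k --key /tmp/supercoreKey.key --cert /tmp/supercoreCert.crt \
  'https://<elasticsearch-hostname>.<openshift-master-hostname>/core.*/_search?q=reqId:%228f248216-c7e4-4898-a833-c98519cf4aea%22'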
3.4. Identifying Issues in a RHMAP Core
If you encounter unexpected errors in the RHMAP Core UI, you can use Kibana’s Discover tab to find the root of the problem. Every request that the RHMAP Core UI sends has a unique identifier that can be used to gather the relevant logs. The following steps describe the procedure:
Identify the request ID associated with the failed request you want to investigate
Errors in the platform usually manifest in the UI as a notification pop-up containing the URL endpoint the failed request was targeting, the error message, and the Request ID. Take note of the Request ID.
Query for the relevant logs in Kibana
Log in to your Kibana instance and go to the Discover tab. Enter a query in the form reqId:"<Request-ID>", using the Request ID noted in the previous step, and you should see all of the logs relating to the failing request. Useful fields to display include:
- msg
- message
- kubernetes_container_name
- level
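For example, to restrict the request trace to errors only, the reqId filter can be combined with the Bunyan level field listed above (the level values are listed in Section 3.6, “Viewing All Debug Logs for a Component”); the request ID below is the example ID used earlier in this chapter:
reqId:"8f248216-c7e4-4898-a833-c98519cf4aea" && level:50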
3.5. Identifying Issues in an MBaaS
If you suspect that an error of an MBaaS component may be the cause of an issue, you can use Kibana’s Discover tab to find the root of the problem. The following steps describe the general procedure you can follow to identify issues.
Select the index for the MBaaS target you are interested in
Use the dropdown just below the input bar in the Discover view to list all available indices. An index is similar to a database in relational database systems. Select which index your searches will be performed against.
Select a time interval for your search
Click the Time Filter (clock icon) and adjust the time interval. Initially, try a broader search.
Perform a simple search
To search for all error events, perform a simple search for error in the search field of the Discover view. This returns the number of hits within the chosen time interval.
Select the msg or message field to be displayed
On the left-hand side of the Discover view is a list of fields. From this list, you can select fields to display in the document data section. Selecting a field replaces the _source field in the document data view. This enables you to see any error messages and might help you refine your original search if needed. You can also select more fields to help you locate the issue.
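As a sketch, once the displayed fields point you to a particular component, the plain error search can be combined with a container filter; fh-mbaas is used here only as an example container name:
error && kubernetes_container_name:"fh-mbaas"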
3.6. Viewing All Debug Logs for a Component
If searching for error messages doesn’t help, you can try looking into the debug logs of individual components.
Select the index for the target that you are interested in
Start a new search
Click on the New Search button to the left of the search input bar, which looks like a document with a plus sign.
Search a component for all debug messages
For example, to search for all debug messages of the fh-messaging component, enter the following query:
type: bunyan && level: 20 && kubernetes_container_name: "fh-messaging"
If you know some part of the error message, you can specify that as part of the search:
type: bunyan && level: 20 && kubernetes_container_name: "fh-messaging" && "Finished processing"
You can narrow down your search further by time, as described in step 5 above.
As a reference, the following are the Bunyan log levels:
TRACE = 10; DEBUG = 20; INFO = 30; WARN = 40; ERROR = 50; FATAL = 60;
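These numeric levels can also be used in range queries. For example, a sketch of a query that returns warnings and above (levels 40 to 60) for the fh-messaging component:
type: bunyan && level:[40 TO 60] && kubernetes_container_name: "fh-messaging"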
3.7. Analyzing the Search Results
Narrow down the time interval
The histogram shows search hits returned in the chosen time interval. To narrow down the search in time you have the following options:
- Click on a bar in the histogram to narrow down the search to that bar’s time interval.
- Select a time window in the date histogram by clicking and dragging between the start/end time you are interested in.
Inspect the document data
Once you narrow down the search, you can inspect the document data items. Apart from the msg and message fields, you might be interested in kubernetes_pod_name to see the pod a message originates from.
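Once a pod of interest is identified, its name can also be used as a filter in the search bar; the pod name below is a hypothetical example:
error && kubernetes_pod_name:"fh-mbaas-1-x7k2p"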
3.8. Enabling fhc to Access Centralized Logs
To enable the fhc admin logs syslogs feature for searching platform logs by request IDs, configure fh-supercore to have access to Elasticsearch by following the steps in this section.
If fh-supercore is not configured for access to Elasticsearch, running fhc admin logs syslogs yields an error message similar to the following:
FH-SUPERCORE-ERROR - Aggregated Logging is not enabled for this cluster.
- Enable centralized logging, as described in Section 3.1, “Enabling Centralized Logging”.
Create a route to allow external access to Elasticsearch.
Log in to your OpenShift cluster
oc login <url-of-openshift-master>
Select the existing logging project.
oc project <logging-project-name>
Create a route to allow external access to Elasticsearch. Replace the values in angle brackets as appropriate for your environment.
oc create route passthrough --service=<elasticsearch-route-name> --hostname=<elasticsearch-hostname>.<openshift-master-hostname>
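As an example sketch, if the Elasticsearch service created by the logging deployment is named logging-es (a typical name, but verify it with oc get svc in the logging project) and your router subdomain is apps.example.com, the command could look like this:
# Example values only; verify the service name with: oc get svc
oc create route passthrough --service=logging-es --hostname=es-logging.apps.example.com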
Create a secret for fh-supercore.
To read from Elasticsearch, fh-supercore will use the existing Kibana credentials. The existing Kibana certificate and key can be read from the existing secret and decoded.
Read the secret and output to JSON format.
oc get secret logging-kibana -o json
This outputs a base64-encoded representation of the certificate in "data.cert" and of the key in "data.key". You can now decode these and create a plain-text key and certificate in a temporary directory. Substitute the values from the output of the above command into the commands below.
Decode the key and output it to the /tmp directory, or another location of your choice.
echo "<contents-of-data.key>" | base64 --decode > /tmp/supercoreKey.key
Decode the certificate.
echo "<contents-of-data.cert>" | base64 --decode > /tmp/supercoreCert.crt
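If your oc client supports JSONPath output, extracting and decoding each file can optionally be done in a single step; this is a shortcut equivalent to the manual copy-and-decode commands above.
# Optional shortcut, equivalent to the manual steps above.
oc get secret logging-kibana -o jsonpath='{.data.key}' | base64 --decode > /tmp/supercoreKey.key
oc get secret logging-kibana -o jsonpath='{.data.cert}' | base64 --decode > /tmp/supercoreCert.crt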
Switch to the Core project.
oc project <core-project-name>
Create a secret for fh-supercore that will use the Kibana credentials to perform searches.
oc secrets new <core-secret-name> key=/tmp/supercoreKey.key crt=/tmp/supercoreCert.crt
A new secret called <core-secret-name>, as specified above, is created in the Core project.
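You can verify that the secret contains the key and crt entries before proceeding:
oc describe secret <core-secret-name>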
Update the deployment configuration of fh-supercore.
Open the editor for the fh-supercore deployment configuration.
oc edit dc fh-supercore
Set the following properties.

Name                        Value
FH_ES_LOGGING_ENABLED       true
FH_ES_LOGGING_HOST          https://<elasticsearch-hostname>.<openshift-master-hostname>
FH_ES_LOGGING_KEY_PATH      /etc/fh/es-keys/key
FH_ES_LOGGING_CERT_PATH     /etc/fh/es-keys/crt
FH_ES_LOGGING_API_VERSION   1.5 (the version of Elasticsearch used by OpenShift 3.2)

For example, if <core-secret-name> was supercore-elasticsearch:

spec:
  template:
    spec:
      volumes:
        - name: supercore-elasticsearch-volume
          secret:
            secretName: supercore-elasticsearch
      containers:
        - name: fh-supercore
          volumeMounts:
            - name: supercore-elasticsearch-volume
              readOnly: true
              mountPath: /etc/fh/es-keys
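Depending on your oc client version, the environment variables can also be set non-interactively with oc env (or oc set env on newer clients) instead of editing them by hand; treat the following as a sketch, and note that the secret volume and volume mount above still need to be added by editing the deployment configuration.
# Sketch: sets only the environment variables; the secret volume must still be mounted.
oc env dc/fh-supercore \
  FH_ES_LOGGING_ENABLED=true \
  FH_ES_LOGGING_HOST=https://<elasticsearch-hostname>.<openshift-master-hostname> \
  FH_ES_LOGGING_KEY_PATH=/etc/fh/es-keys/key \
  FH_ES_LOGGING_CERT_PATH=/etc/fh/es-keys/crt \
  FH_ES_LOGGING_API_VERSION=1.5
Once the fh-supercore pod is redeployed with the new configuration, fhc admin logs syslogs should return results instead of the FH-SUPERCORE-ERROR message shown at the beginning of this section.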
