Chapter 4. fh-system-dump-tool
4.1. Overview
The fh-system-dump-tool analyzes all the projects running in an OpenShift cluster and reports any problems it discovers. Although this tool reports errors found in any project on the OpenShift platform, it is primarily used to debug issues with RHMAP Core and MBaaS installations.
Running fh-system-dump-tool may take some time, depending on the complexity of the environment. When the analysis is finished, the tool reports any commonly found issues that might reflect a problem on the cluster or a project.
The fh-system-dump-tool archives the dump directory and the analysis results in a tar.gz file, which can be emailed to Red Hat Support, or decompressed and read locally.
4.2. Installation
Install the fh-system-dump-tool using the following command:
subscription-manager repos --enable=rhel-7-server-rhmap-4.7-rpms
yum install fh-system-dump-tool
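To confirm that the package installed correctly, you can query the RPM database. This is a general verification step, not a documented part of the tool's workflow:
rpm -q fh-system-dump-tool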
4.3. Requirements
The fh-system-dump-tool requires a local installation of the oc binary.
The fh-system-dump-tool also requires that you are logged in, through the oc binary, to the platform you wish to analyze. For fh-system-dump-tool to analyze a project, the logged-in user must have access to that project and must have the cluster-reader role, or equivalent permissions.
A Core or MBaaS running on OpenShift also contains a Nagios pod which monitors the platform and detects issues. The fh-system-dump-tool uses the Nagios data to analyze the platform and find faults. If the fh-system-dump-tool cannot locate Nagios it cannot perform a complete analysis.
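Before running the tool, you can verify these requirements with the oc client. The commands below are a minimal sketch; <core-project> is a placeholder for the name of your RHMAP Core project and depends on your installation:
oc version       # confirm the oc binary is installed
oc whoami        # confirm a user is currently logged in
oc projects      # list the projects the logged-in user has access to
oc get pods -n <core-project> | grep nagios    # check that a Nagios pod is running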
4.4. Usage
The fh-system-dump-tool creates a directory called rhmap-dumps in the working directory and stores archive data in that directory.
To execute the tool use the following command:
fh-system-dump-tool
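Because the tool creates the rhmap-dumps directory in the current working directory, you may prefer to run it from a dedicated location. For example, using an arbitrary directory name:
mkdir -p ~/rhmap-support
cd ~/rhmap-support
fh-system-dump-tool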
4.5. Understanding The Output
When the tool starts, it stores dump data and then performs an analysis. If the tool encounters any issues during the analysis phase, the errors are written to stderr. For more information on debugging errors, see Section 4.7, Debugging.
Once the dump and analysis process is complete, the tool alerts the user of possible errors found in the OpenShift cluster and projects.
Finally, the dump and the analysis results are archived into a timestamped tar.gz file, and the tool reports the location of this file. If you need to send this file for additional support, make sure that the file name and contents are unaltered, unless you are instructed otherwise by Red Hat Support.
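If you want to keep the analysis errors for later review, or inspect the archive before sending it, you can use standard shell tools. The archive path below is a placeholder for the timestamped file reported by the tool:
fh-system-dump-tool 2> analysis-errors.log     # capture stderr output from the analysis phase
tar -tzf <path-reported-by-the-tool>.tar.gz    # list the archive contents without altering them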
4.6. Information Contained in the Dump Archive
Before sending the dump archive by email, review the following lists of platform-level and project-level data included in the archive, in case you consider any of the information to be sensitive.
4.6.1. Platform Data
At a platform level, the dump includes:
- Description of all persistent volumes
- The version of the oc client in use
- Details and permissions of the currently logged in OpenShift user
- The output of the oc adm diagnostics command
- The version of the fh-system-dump-tool used
- The names of all the projects the current user has access to
- The results of the analysis
4.6.2. Project Data
For each project discovered in the cluster, the following data is included in the dumped archive:
- The definition in OpenShift for:
  - configuration maps
  - deployment configurations
  - persistent volume claims
  - pods
  - services
  - events
- The most recent logs for all available pods
4.7. Debugging
Start debugging by reviewing the output from the analysis phase.
To debug a system, you only need access to the archive file. In the root of the archive is a file named analysis.json which contains a summary of all the issues discovered while scanning the OpenShift cluster and projects. Use this file to start looking for potential issues with the analyzed OpenShift platform or the RHMAP Core and MBaaS projects installed on it.
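For example, you can extract the archive into a temporary directory and pretty-print analysis.json with a standard JSON tool. The archive name below is a placeholder for the timestamped file reported by the tool:
mkdir /tmp/rhmap-dump
tar -xzf <archive>.tar.gz -C /tmp/rhmap-dump
python -m json.tool /tmp/rhmap-dump/analysis.json | less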
