24.4. Detecting Software Problems

ABRT is capable of detecting, analyzing, and processing crashes in applications written in a variety of different programming languages. Many of the packages that contain the support for detecting the various types of crashes are installed automatically when either one of the main ABRT packages (abrt-desktop, abrt-cli) is installed. See Section 24.2, “Installing ABRT and Starting its Services” for instructions on how to install ABRT. See the table below for a list of the supported types of crashes and the respective packages.

Table 24.2. Supported Programming Languages and Software Projects

Language/Project              Package
C or C++                      abrt-addon-ccpp
Python                        abrt-addon-python
Ruby                          rubygem-abrt
Java                          abrt-java-connector
X.Org                         abrt-addon-xorg
Linux (kernel oops)           abrt-addon-kerneloops
Linux (kernel panic)          abrt-addon-vmcore
Linux (persistent storage)    abrt-addon-pstoreoops
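
For example, if the main ABRT packages are already installed and you only need support for one of the languages or projects listed above, you can install the corresponding add-on package with yum. The following command, run as root, adds Java support; substitute any other package name from the table as needed:
~]# yum install abrt-java-connector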

24.4.1. Detecting C and C++ Crashes

The abrt-ccpp service installs its own core-dump handler: when the service starts, it overrides the default value of the kernel's core_pattern variable so that C and C++ crashes are handled by abrtd. If you stop the abrt-ccpp service, the previously specified value of core_pattern is reinstated.
By default, the /proc/sys/kernel/core_pattern file contains the string core, which means that the kernel produces core-dump files named core (or core.PID, if the kernel.core_uses_pid setting is enabled) in the current working directory of the crashed process. The abrt-ccpp service overwrites the core_pattern file to contain the following command:
|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e
This command instructs the kernel to pipe the core dump to the abrt-hook-ccpp program, which stores it in ABRT's dump location and notifies the abrtd daemon of the new crash. It also stores the following files from the /proc/PID/ directory (where PID is the ID of the crashed process) for debugging purposes: maps, limits, cgroup, status. See proc(5) for a description of the format and the meaning of these files.
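As an illustrative sketch (not part of the official procedure), you can verify that the handler is active and trigger a test crash by sending the SIGSEGV signal to a disposable process. Note that, by default, ABRT only reports crashes of programs that belong to installed packages, and the exact abrt-cli output varies between versions:
~]$ cat /proc/sys/kernel/core_pattern
|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e
~]$ sleep 100 &
~]$ kill -SIGSEGV %1
~]$ abrt-cli list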

24.4.2. Detecting Python Exceptions

The abrt-addon-python package installs a custom exception handler for Python applications. The Python interpreter then automatically imports the abrt.pth file installed in /usr/lib64/python2.7/site-packages/, which in turn imports abrt_exception_handler.py. This overrides Python's default sys.excepthook with a custom handler, which forwards unhandled exceptions to abrtd via its Socket API.
To disable the automatic import of site-specific modules, and thus prevent the ABRT custom exception handler from being used when running a Python application, pass the -S option to the Python interpreter:
~]$ python -S file.py
In the above command, replace file.py with the name of the Python script you want to execute without the use of site-specific modules.
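Conversely, a quick way to see the handler in action is to run a short piece of Python code that raises an exception which is never caught. The example below is only a sketch: by default, ABRT creates a problem report only if the failing script belongs to an installed package, so an ad-hoc test such as this one may be silently ignored unless that check is relaxed in the ABRT configuration.
~]$ python -c 'raise RuntimeError("ABRT test")'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
RuntimeError: ABRT test
~]$ abrt-cli list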

24.4.3. Detecting Ruby Exceptions

The rubygem-abrt package registers a custom handler using the at_exit feature, which is executed when a program ends. This makes it possible to check for unhandled exceptions at that point. Every time an unhandled exception is captured, the ABRT handler prepares a bug report, which can be submitted to Red Hat Bugzilla using standard ABRT tools.
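For illustration, the sketch below raises an unhandled Ruby exception with the handler loaded through the interpreter's standard -r option; the require name abrt is assumed here from the rubygem-abrt package name, and whether a problem report is actually created follows the same rules as for the other language add-ons (by default, only crashes of packaged software are reported):
~]$ ruby -rabrt -e 'raise "ABRT test"'
-e:1:in `<main>': ABRT test (RuntimeError)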

24.4.4. Detecting Java Exceptions

The ABRT Java Connector is a JVM agent that reports uncaught Java exceptions to abrtd. The agent registers several JVMTI event callbacks and has to be loaded into the JVM using the -agentlib command line parameter. Note that the processing of the registered callbacks negatively impacts the performance of the application. Use the following command to have ABRT catch exceptions from a Java class:
~]$ java -agentlib:abrt-java-connector[=abrt=on] $MyClass -platform.jvmtiSupported true
In the above command, replace $MyClass with the name of the Java class you want to test. By passing the abrt=on option to the connector, you ensure that the exceptions are handled by abrtd. In case you want to have the connector output the exceptions to standard output, omit this option.
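If you cannot edit the java command line directly (for example, when the application is launched by a wrapper script), the standard JAVA_TOOL_OPTIONS environment variable provides one way to load the agent. This is a general JVM mechanism rather than an ABRT-specific one, and MyApp.jar below is only a placeholder:
~]$ export JAVA_TOOL_OPTIONS="-agentlib:abrt-java-connector=abrt=on"
~]$ java -jar MyApp.jar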

24.4.5. Detecting X.Org Crashes

The abrt-xorg service collects and processes information about crashes of the X.Org server from the /var/log/Xorg.0.log file. Note that no report is generated if a blacklisted X.Org module is loaded. Instead, a not-reportable file is created in the problem-data directory with an appropriate explanation. You can find the list of offending modules in the /etc/abrt/plugins/xorg.conf file. Only proprietary graphics-driver modules are blacklisted by default.
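To review or adjust the blacklist on your system, inspect the configuration file directly. The output below is only an example; the option name and the default module list may differ between ABRT versions:
~]$ cat /etc/abrt/plugins/xorg.conf
BlacklistedXorgModules = nvidia, fglrx, vboxvideo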

24.4.6. Detecting Kernel Oopses and Panics

By checking the output of kernel logs, ABRT is able to catch and process the so-called kernel oopses — non-fatal deviations from the correct behavior of the Linux kernel. This functionality is provided by the abrt-oops service.
Using the abrt-vmcore service, ABRT can also detect and process kernel panics: fatal, non-recoverable errors that require a reboot. The service only starts when a vmcore file (a kernel-core dump) appears in the /var/crash/ directory. When a core-dump file is found, abrt-vmcore creates a new problem-data directory in the /var/tmp/abrt/ directory and moves the core-dump file to the newly created problem-data directory. After the /var/crash/ directory is searched, the service is stopped.
For ABRT to be able to detect a kernel panic, the kdump service must be enabled on the system. The amount of memory that is reserved for the kdump kernel has to be set correctly. You can set it using the system-config-kdump graphical tool or by specifying the crashkernel parameter in the list of kernel options in the GRUB 2 menu. For details on how to enable and configure kdump, see the Red Hat Enterprise Linux 7 Kernel Crash Dump Guide. For information on making changes to the GRUB 2 menu, see Chapter 25, Working with GRUB 2.
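A quick way to verify these prerequisites is sketched below; the commands use standard systemd and kernel interfaces, and the crashkernel value shown is only an example of what the output might look like:
~]# systemctl enable kdump.service
~]# systemctl start kdump.service
~]$ grep -o 'crashkernel=[^ ]*' /proc/cmdline
crashkernel=auto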
Using the abrt-pstoreoops service, ABRT is also capable of collecting and processing information about kernel panics that is stored, on systems that support pstore, in the automatically mounted /sys/fs/pstore/ directory. The platform-dependent pstore (persistent storage) interface provides a mechanism for storing data across system reboots, which makes it possible to preserve kernel panic information. The service starts automatically when kernel crash-dump files appear in the /sys/fs/pstore/ directory.
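To check whether the pstore file system is available on your hardware and whether it currently holds any panic records (an empty directory is normal on a healthy system), you can use, for example, the following commands; the mount output line is only illustrative:
~]$ mount | grep pstore
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
~]$ ls /sys/fs/pstore/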