Chapter 11. Contacting Red Hat support for service

If the information in this guide did not help you to solve the problem, this chapter explains how to contact the Red Hat support service.

11.1. Prerequisites

  • Red Hat support account.

11.2. Providing information to Red Hat Support engineers

If you are unable to fix problems related to Red Hat Ceph Storage, contact the Red Hat Support Service and provide a sufficient amount of information to help the support engineers troubleshoot the problem you encounter more quickly.

Prerequisites

  • Root-level access to the node.
  • Red Hat support account.

Procedure

  1. Open a support ticket on the Red Hat Customer Portal.
  2. Ideally, attach an sosreport to the ticket. See the What is a sosreport and how to create one in Red Hat Enterprise Linux? solution for details. A sample invocation is shown after this procedure.
  3. If the Ceph daemons fail with a segmentation fault, consider generating a human-readable core dump file. See Generating readable core dump files for details.
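
For reference, an sosreport is typically generated directly on the affected node with the sos package installed. The following invocation is only a sketch and assumes a Red Hat Enterprise Linux 8 host; see the solution linked above for the authoritative steps:

Example

[root@host01 ~]# dnf install sos
[root@host01 ~]# sosreport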

11.3. Generating readable core dump files

When a Ceph daemon terminates unexpectedly with a segmentation fault, gather information about the failure and provide it to the Red Hat Support Engineers.

Such information speeds up the initial investigation. Also, the Support Engineers can compare the information from the core dump files with known Red Hat Ceph Storage cluster issues.
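
With the matching debuginfo packages installed, a core dump file can typically be opened in gdb to produce a human-readable backtrace. The following invocation is only an illustration; the binary and core dump file names depend on the daemon that failed and on where the file was saved:

Example

[root@host01 ~]# gdb /usr/bin/ceph-osd core.18110
(gdb) bt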

11.3.1. Prerequisites

  1. Install the debuginfo packages if they are not installed already.

    1. Enable the following repositories to install the required debuginfo packages.

      Example

      [root@host01 ~]# subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms
      [root@host01 ~]# subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-debug-rpms

      Once the repositories are enabled, you can install the debuginfo packages that you need from this list of supported packages (an example installation command follows these prerequisite steps):

      ceph-base-debuginfo
      ceph-common-debuginfo
      ceph-debugsource
      ceph-fuse-debuginfo
      ceph-immutable-object-cache-debuginfo
      ceph-mds-debuginfo
      ceph-mgr-debuginfo
      ceph-mon-debuginfo
      ceph-osd-debuginfo
      ceph-radosgw-debuginfo
      cephfs-mirror-debuginfo
  2. Ensure that the gdb package is installed. If it is not, install it:

    Example

    [root@host01 ~]# dnf install gdb
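
For example, to prepare for debugging a Ceph OSD crash, you might install the OSD and common debuginfo packages from the list above; adjust the selection to the daemons that you are debugging:

Example

[root@host01 ~]# dnf install ceph-osd-debuginfo ceph-common-debuginfo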

11.3.2. Generating readable core dump files in containerized deployments

You can generate a core dump file for Red Hat Ceph Storage 5. There are two scenarios for capturing the core dump file:

  • When a Ceph process terminates unexpectedly due to a SIGILL, SIGTRAP, SIGABRT, or SIGSEGV signal.

or

  • Manually, for example, to debug issues such as Ceph processes consuming high CPU cycles or not responding.

Prerequisites

  • Root-level access to the container node running the Ceph containers.
  • Installation of the appropriate debugging packages.
  • Installation of the GNU Project Debugger (gdb) package.
  • Ensure that the host has at least 8 GB of RAM. If there are multiple daemons on the host, Red Hat recommends more RAM.
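
To confirm the amount of memory that is available on the host, you can, for example, run the free command:

Example

[root@host01 ~]# free -h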

Procedure

  1. If a Ceph process terminates unexpectedly due to a SIGILL, SIGTRAP, SIGABRT, or SIGSEGV signal:

    1. Set the core pattern to pipe core dumps to the systemd-coredump service on the node where the container with the failed Ceph process is running:

      Example

      [root@mon]# echo "| /usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e" > /proc/sys/kernel/core_pattern

    2. Watch for the next container failure due to a Ceph process and search for the core dump file in the /var/lib/systemd/coredump/ directory:

      Example

      [root@mon]# ls -ltr /var/lib/systemd/coredump
      total 8232
      -rw-r-----. 1 root root 8427548 Jan 22 19:24 core.ceph-osd.167.5ede29340b6c4fe4845147f847514c12.15622.1584573794000000.xz

  2. To manually capture a core dump file for the Ceph Monitors and Ceph OSDs:

    1. Get the MONITOR_ID or the OSD_ID and enter the container:

      Syntax

      podman ps
      podman exec -it MONITOR_ID_OR_OSD_ID bash

      Example

      [root@host01 ~]# podman ps
      [root@host01 ~]# podman exec -it ceph-1ca9f6a8-d036-11ec-8263-fa163ee967ad-osd-2 bash

    2. Install the procps-ng and gdb packages inside the container:

      Example

      [root@host01 ~]# dnf install procps-ng gdb

    3. Find the process ID:

      Syntax

      ps -aef | grep PROCESS | grep -v run

      Replace PROCESS with the name of the running process, for example ceph-mon or ceph-osd.

      Example

      [root@host01 ~]# ps -aef | grep ceph-mon | grep -v run
      ceph       15390   15266  0 18:54 ?        00:00:29 /usr/bin/ceph-mon --cluster ceph --setuser ceph --setgroup ceph -d -i 5
      ceph       18110   17985  1 19:40 ?        00:00:08 /usr/bin/ceph-mon --cluster ceph --setuser ceph --setgroup ceph -d -i 2

    4. Generate the core dump file:

      Syntax

      gcore ID

      Replace ID with the ID of the process that you got from the previous step, for example 18110:

      Example

      [root@host01 ~]# gcore 18110
      warning: target file /proc/18110/cmdline contained unexpected null characters
      Saved corefile core.18110

    5. Verify that the core dump file has been generated correctly.

      Example

      [root@host01 ~]# ls -ltr
      total 709772
      -rw-r--r--. 1 root root 726799544 Mar 18 19:46 core.18110

    6. Copy the core dump file outside of the Ceph Monitor container:

      Syntax

      podman cp ceph-mon-MONITOR_ID:/tmp/mon.core.MONITOR_PID /tmp

      Replace MONITOR_ID with the ID number of the Ceph Monitor and replace MONITOR_PID with the process ID number. The path must match the location where the core dump file was saved inside the container.

  3. To manually capture a core dump file for other Ceph daemons:

    1. Log in to the cephadm shell:

      Example

      [root@host03 ~]# cephadm shell

    2. Enable ptrace for the daemons:

      Example

      [ceph: root@host01 /]# ceph config set mgr mgr/cephadm/allow_ptrace true

    3. Redeploy the daemon service:

      Syntax

      ceph orch redeploy SERVICE_ID

      Example

      [ceph: root@host01 /]# ceph orch redeploy mgr
      [ceph: root@host01 /]# ceph orch redeploy rgw.rgw.1

    4. Exit the cephadm shell and log in to the host where the daemons are deployed:

      Example

      [ceph: root@host01 /]# exit
      [root@host01 ~]# ssh root@10.0.0.11

    5. Get the DAEMON_ID and enter the container:

      Example

      [root@host04 ~]# podman ps
      [root@host04 ~]# podman exec -it ceph-1ca9f6a8-d036-11ec-8263-fa163ee967ad-rgw-rgw-1-host04 bash

    6. Install the procps-ng and gdb packages:

      Example

      [root@host04 /]# dnf install procps-ng gdb

    7. Get the PID of the process:

      Example

      [root@host04 /]# ps aux | grep rados
      ceph           6  0.3  2.8 5334140 109052 ?      Sl   May10   5:25 /usr/bin/radosgw -n client.rgw.rgw.1.host04 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug

    8. Gather the core dump file:

      Syntax

      gcore PID

      Example

      [root@host04 /]# gcore 6

    9. Verify that the core dump file has been generated correctly.

      Example

      [root@host04 /]# ls -ltr
      total 108798
      -rw-r--r--. 1 root root 726799544 Mar 18 19:46 core.6

    10. Copy the core dump file outside the container:

      Syntax

      podman cp DAEMON_ID:/tmp/core.PID /tmp

      Replace DAEMON_ID with the ID of the Ceph daemon container and replace PID with the process ID number. The path must match the location where gcore saved the core dump file inside the container.

  4. Upload the core dump file to a Red Hat support case for analysis. See Providing information to Red Hat Support engineers for details.
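
Core dump files can be several hundred megabytes in size. Before you upload one, you can optionally compress it; the following xz invocation and file name are only an illustration:

Example

[root@host04 ~]# xz /tmp/core.6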

11.3.3. Additional Resources