How to investigate core dumps

How would one go about investigating a program coredump?
I can view info about the core dump with:
coredumpctl info

But how do you investigate what actually caused the core dump to happen?
Many thanks,

Responses

Use gdb (see https://access.redhat.com/solutions/736733 and https://stackoverflow.com/questions/5115613/core-dump-file-analysis). You could start with the bt (backtrace) command in gdb, which prints the stack trace at the time of the dump.
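For a dump managed by coredumpctl, a minimal session might look like the sketch below (the PID 21622 is just an illustrative placeholder; use the one shown by coredumpctl list):

```shell
# List available core dumps
coredumpctl list

# Open the most recent dump directly in gdb
coredumpctl gdb

# Or select a specific dump by the crashing process's PID
coredumpctl gdb 21622

# Inside gdb, useful first commands are:
#   (gdb) bt                    # stack trace of the crashing thread
#   (gdb) info threads          # list all threads in the dump
#   (gdb) thread apply all bt   # backtrace of every thread
```

coredumpctl gdb fetches the core file and launches gdb with the matching executable, which avoids having to locate the binary and core file by hand.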

It seems I need further help with gdb:

BFD: Warning: /var/tmp/coredump-QtDU78 is truncated: expected core file size >= 23071272960, found: 2147483648.
[New LWP 21627]
[New LWP 21629]
[New LWP 21626]
[New LWP 21628]
[New LWP 21630]
[New LWP 21622]
[New LWP 21635]
[New LWP 21636]
[New LWP 21634]
[New LWP 21631]
[New LWP 21633]
[New LWP 21632]
Cannot access memory at address 0x7f242f00d128
Cannot access memory at address 0x7f242f00d120
Failed to read a valid object file image from memory.

The backtrace then doesn't run (Backtrace stopped: Cannot access memory at address 0x7f240a1278c0). I'm unsure whether this means the original core dump is no longer of use, or whether I'm doing something incorrect with gdb. Many thanks

When your corefile is truncated, you are out of luck (as far as I know). Please refer to https://access.redhat.com/solutions/61334 for how to obtain a complete corefile on the next crash.
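As a sketch of the basic checks before reproducing the crash (assuming a systemd-based system such as RHEL 8, where the kernel usually pipes cores to systemd-coredump):

```shell
# Soft limit on core file size for the current shell; should be "unlimited"
ulimit -c

# Where the kernel sends core dumps; on RHEL 8 this is typically a pipe
# ("|/usr/lib/systemd/systemd-coredump ...")
cat /proc/sys/kernel/core_pattern

# Raise the limit for this shell before re-running the program
ulimit -c unlimited
```

If core_pattern points at systemd-coredump, the size limits in /etc/systemd/coredump.conf also apply, independently of ulimit.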

I also get a truncated core dump (Red Hat Enterprise Linux release 8.4 (Ootpa)), and ulimit -c reports unlimited. My truncated core dump is 135M:

root root 135M Jul 26 15:56 core.java.2032.7de85b729b394264901c2b9b9ae4c651.1359758.1627307763000000.lz4

Any further suggestions on why this is the case?

Thanks for your reply. I'm not clear why I'm not getting a full core dump, as my ulimit settings are:

core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 256185
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 4096
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 256185
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
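Note that ulimit reports the limits of the current interactive shell, not necessarily those of the process that crashed; a service started by systemd can run with different limits. The limits of a running process can be checked directly (the PID below is only illustrative):

```shell
# Core file size limit of a specific running process
grep -i "core file" /proc/1359758/limits

# For comparison, the same limit for the current shell
grep -i "core file" /proc/self/limits
```

If the crashing process shows a smaller "Max core file size" than the shell does, the limit must be raised in the service's unit file (LimitCORE=) rather than via ulimit.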

The core dump always appears to be truncated at exactly 2147483648 bytes. I've found that /etc/systemd/coredump.conf contains:

[Coredump]
#Storage=external
#Compress=yes
#ProcessSizeMax=2G
#ExternalSizeMax=2G
#JournalSizeMax=767M
#MaxUse=
#KeepFree=

I wonder whether this is the culprit, and whether these settings need raising to allow a maximum size of more than 2G.
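That looks very likely: 2147483648 bytes is exactly 2 GiB, matching the commented-out ProcessSizeMax=2G and ExternalSizeMax=2G defaults shown above (commented-out lines in coredump.conf show the built-in defaults). A sketch of the fix, with example sizes that you should adapt to your available disk space:

```shell
# Confirm the truncation size matches the 2G default exactly
echo $((2 * 1024 * 1024 * 1024))    # prints 2147483648

# Raise the limits by uncommenting and editing /etc/systemd/coredump.conf:
#   [Coredump]
#   ProcessSizeMax=32G
#   ExternalSizeMax=32G
#
# systemd-coredump is invoked per crash and reads coredump.conf each time,
# so the new limits should take effect on the next crash without a restart.
```

ProcessSizeMax caps how much of the crashing process systemd-coredump will process at all, and ExternalSizeMax caps the size of the core file stored on disk, so both generally need raising together.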
