Configuring kdump to save on remote host

I am trying to configure kdump so that the kernel dump file gets saved onto a remote host. I currently have this configured and working, but I've got it setup so that everything runs under the root account. The problem is we normally set "PermitRootLogin" in /etc/ssh/sshd_config to "no" on our systems. I have to change this to "yes" in order for the dump to work correctly.

Is it possible for me to configure /etc/kdump.conf so that kdump can be run as another user? I have tried using sshkeys (id_rsa) for a different user account but was unsuccessful. I wonder if it because our local and remote systems (both RHEL7.5) are running in FIPS mode (/proc/sys/crypto/fips_enabled = 1). This seems to prevent me from successfully running "kdumpctl propagate" because it returns with "ERROR: FIPS mode initialized".

Any suggestions/thoughts would be greatly appreciated!



Hi Sam, kdump can SCP the files to another server as a non-root user on the remote server.

There are two options to configure in your kdump.conf:

1. ssh <user@server> -- specifies the remote user and the server IP or hostname.

2. sshkey <path> -- the key kdump will use to authenticate to the remote host; by default it uses /root/.ssh/kdump_id_rsa.

The kdumpctl propagate function will check that the key exists, create it if not, and then use ssh-copy-id to send the key to the remote server.
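The propagate step can be reproduced by hand, which helps when debugging it. A rough manual equivalent (a sketch; the key path is the kdump default, and "user@server" is a placeholder for the values in your kdump.conf):

```shell
# Generate the key if it does not exist yet (no passphrase, as kdump expects):
test -f /root/.ssh/kdump_id_rsa || ssh-keygen -t rsa -N '' -f /root/.ssh/kdump_id_rsa
# Then push the public key to the remote dump target named in kdump.conf:
ssh-copy-id -i /root/.ssh/kdump_id_rsa user@server
```

Running these two steps yourself lets you see exactly which one fails, instead of the combined error kdumpctl prints.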

To help narrow it down, check if you can SSH between the nodes using the key based auth.

Do you know if the keys were created on a system before FIPS was enabled? It is possible those keys need to be converted to a FIPS-compatible format.

The article "FIPS mode can't decrypt existing passphrase-protected ssh keys" has a lot of information on that issue. Basically, you can convert the existing key to be FIPS-compatible, or regenerate the key on a FIPS-enforcing system.
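Both options come down to a single ssh-keygen invocation each. A sketch, assuming the key sits at the kdump default path (adjust to yours; the first command is interactive and prompts for the old and new passphrases):

```shell
# Option 1: re-encrypt the existing key in place so FIPS-mode OpenSSH can read it
ssh-keygen -p -f /root/.ssh/kdump_id_rsa

# Option 2: regenerate the key pair from scratch on the FIPS-enforcing system
ssh-keygen -t rsa -b 2048 -N '' -f /root/.ssh/kdump_id_rsa
```

After either option you would need to re-run "kdumpctl propagate" (or ssh-copy-id) so the remote host has the matching public key.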

Hope that helps.

Thanks for the response Kevin!

I do have those options configured in /etc/kdump.conf. I went ahead and even created a new user account on both local and remote systems and generated new RSA keys for that user (ssh-keygen -t rsa -b 2048 [no password]). My kdump.conf shows the following:

ssh kdumpuser@
sshkey /home/kdumpuser/.ssh/id_rsa

I've already updated /home/kdumpuser/.ssh/authorized_keys on the remote host so it contains the public key for kdumpuser as well as the public key for the root account. As kdumpuser, I am able to ssh from local to remote. As root, I can also ssh from local to remote with "ssh kdumpuser@".
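One way to confirm this is exactly what kdump will see is a non-interactive test with the same key the sshkey directive points at. A sketch (the remote host is elided in this thread; substitute your server for the placeholder):

```shell
# BatchMode disables password prompts, so this only succeeds via the key;
# `true` just exits cleanly on the remote side without starting a shell.
ssh -i /home/kdumpuser/.ssh/id_rsa -o BatchMode=yes kdumpuser@<remote> true && echo "key auth OK"
```

If this prints "key auth OK", password-less key auth works for that exact user/key pair.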

When I run "kdumpctl propagate", I am still getting the following:

Using existing keys...
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/kdumpuser/.ssh/"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/bin/ssh-copy-id: ERROR: FIPS mode initialized

Failed to propagate ssh key, /home/kdumpuser/.ssh/kdump_id_rsa failed in transfer to

I would say that because I just created this new account with new keys, the keys should be FIPS-compatible. Why am I still getting this error?

Hi Sam, I was not able to reproduce that error, if I have some time I can try to spin up some RHEL 7.5 guests and see if there is anything there.

kdumpctl propagate is just calling ssh-copy-id: (ssh-copy-id -i $KEYFILE $SSH_USER@$SSH_SERVER)

If ssh-copy-id finishes with any return code other than zero, kdumpctl will display the error "Failed to propagate ssh key $KEYFILE failed in transfer to $SSH_SERVER".
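So the surrounding logic amounts to a plain exit-status check. A sketch of that pattern (paraphrased, not the exact kdumpctl source; the variable values are placeholders, and `false` stands in for the ssh-copy-id call so the failure path is visible):

```shell
KEYFILE=/home/kdumpuser/.ssh/kdump_id_rsa
SSH_USER=kdumpuser
SSH_SERVER=remote.example.com
# Real call would be: ssh-copy-id -i "$KEYFILE" "$SSH_USER@$SSH_SERVER"
if ! false; then
    echo "Failed to propagate ssh key $KEYFILE failed in transfer to $SSH_SERVER"
fi
```

The point is that the error message only tells you ssh-copy-id returned non-zero, not why, which is why testing ssh by hand is more informative.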

All that said -- it is entirely possible your key-based auth works OK and that kdump is ready to use.

If you can successfully ssh between those two hosts and "kdumpuser" is able to write to /var/crash on the remote host (or whatever you have the path directive set to), then perhaps trigger a test and see if the kdump transfers as desired.

Make sure kdump is running OK on the node (systemctl status kdump) and then trigger the crash.

Note -- this WILL trigger the host to crash & reboot: echo c > /proc/sysrq-trigger
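The two steps above, collected in one place (do not run the second command on a production host -- it panics the kernel immediately):

```shell
# 1. Confirm the kdump service is active and the initrd was rebuilt cleanly
systemctl status kdump
# 2. WARNING: this crashes and reboots the host on purpose to test the dump path
echo c > /proc/sysrq-trigger
```

After the reboot, the vmcore should appear under the configured path on the remote host.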

Hi Kevin,

I think you are correct when you said this: "All that said -- it is entirely possible your key-based auth works OK and that kdump is ready to use."

After checking to make sure my user was able to login to the remote host, I went ahead and ran "kdumpctl restart". This successfully rebuilt the kdump.img file and "systemctl status kdump" shows it as active and running. Unfortunately, I cannot trigger a crash since these are live production systems. However, next time these systems go down, hopefully I'll have a crash dump to analyze!

Thanks again for your help!

Folks, following your conversation I tested this with a non-root user on my non-prod system, rebuilt the image, and triggered the crash with the HPE iLO > NMI server command. Works fine, thank you.

Copying data                                      : [  9.9 %] -        eta: 7m11s

remote host:
[root@remote IP-2021-07-26-12:57:31]# ls -lrt
total 10688444
-rw-r--r-- 1 svcops ariba      184046 Jul 26 12:57 vmcore-dmesg.txt
-rw-r--r-- 1 svcops ariba 10944776381 Jul 26 13:12 vmcore.flat
[root@remote IP-2021-07-26-12:57:31]#