Changing 'runlevel' Via PuTTY Terminates Session

I recently updated my system on 8/23/19 with over 700 updates. Since then, when I have a PuTTY session open and change the system's run level (mostly from 5 to 3 and 3 to 5), the run level changes but every PuTTY session I had open is also terminated/killed. I can have multiple sessions open to the same server and they will all be terminated. I have tried changing the run level both from a PuTTY session and from the GUI console itself; both scenarios terminate any open PuTTY session.

I have reviewed the logs under /var/log and nothing mentions SSH sessions being terminated. I don't know where else to look at this point.

Red Hat Enterprise Linux Workstation release 7.7 (Maipo)


I've been on a local system recently (a RHEL workstation), and when I'm in a pseudo-terminal and switch targets using systemctl isolate, it kicks me out of my terminal login and I have to log back in.



Having to log back in every time has become a nuisance. Does anyone know if this is the new expected behavior, or is there a resolution?


I think it is by design:

"isolate" is only valid for start operations and causes all other units to be stopped when the specified unit is started. This mode is always used when the isolate command is used.

Would the same problem occur if, for example, this is run instead (set-default takes a target argument, e.g. multi-user.target for runlevel 3):

# systemctl set-default multi-user.target

Maybe someone else has a better explanation.
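For what it's worth, set-default only rewrites the default.target symlink and does not stop anything that is currently running (unlike isolate), so it should not kick active SSH sessions; it takes effect at the next boot. A minimal sketch, assuming multi-user.target as the stand-in for runlevel 3 (the fallback echo is just for hosts without a usable systemctl):

```shell
#!/bin/sh
# Read the current default target; harmless and read-only.
systemctl get-default 2>/dev/null || echo "systemctl not usable here"

# To change it persistently (as root), the calls would be:
#   systemctl set-default multi-user.target   # runlevel 3 equivalent
#   systemctl set-default graphical.target    # runlevel 5 equivalent
# Neither call stops running units, so open sessions stay up.
```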


Dusan Baljevic (amateur radio VK2COT)

I should clarify that I am not using 'systemctl' to switch between runlevels; I am using 'init 3' or 'init 5'. This behavior does not occur on RHEL 6, or on RHEL 7 prior to the YUM upgrade. I'd like to know what changed, or find a workaround.

Hi Christopher,

In your specific case, maybe this is the explanation...

On RHEL 7, telinit(8) and init(1) commands are just symbolic links:

$ stat /sbin/telinit            
  File: /sbin/telinit -> ../bin/systemctl
  Size: 16              Blocks: 0          IO Block: 4096   symbolic link
Device: fd02h/64770d    Inode: 137392      Links: 1
Access: (0777/lrwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:bin_t:s0
Access: 2019-09-07 22:10:34.169557170 +1000
Modify: 2019-09-07 22:10:34.169557170 +1000
Change: 2019-09-07 22:10:34.169557170 +1000
 Birth: -

$ stat  /sbin/init
  File: /sbin/init -> ../lib/systemd/systemd
  Size: 22              Blocks: 0          IO Block: 4096   symbolic link
Device: fd02h/64770d    Inode: 137315      Links: 1
Access: (0777/lrwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:bin_t:s0
Access: 2019-09-07 22:10:34.168557149 +1000
Modify: 2019-09-07 22:10:34.168557149 +1000
Change: 2019-09-07 22:10:34.168557149 +1000
 Birth: -
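The same check can be done more tersely with readlink; this sketch prints the symlink target of each command, and just says so on hosts where they are not symlinks (or not present):

```shell
#!/bin/sh
# Print where telinit and init actually point on this system.
for cmd in /sbin/telinit /sbin/init; do
  if [ -L "$cmd" ]; then
    printf '%s -> %s\n' "$cmd" "$(readlink "$cmd")"
  else
    printf '%s: not a symlink here\n' "$cmd"
  fi
done
```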

... and then this applies (from the telinit(8) manual page):

2, 3, 4, 5

Change the SysV runlevel. This is translated into an activation request for runlevel2.target, runlevel3.target, ... and is equivalent to systemctl isolate runlevel2.target, systemctl isolate runlevel3.target, ...
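Those runlevelN.target units are themselves aliases: on a systemd host one would expect runlevel3.target to resolve to multi-user.target and runlevel5.target to graphical.target. A quick check (the paths assume RHEL's /usr/lib/systemd layout; on other hosts the loop just reports the units as absent):

```shell
#!/bin/sh
# Resolve the runlevel target aliases, if present on this host.
for n in 3 5; do
  t="/usr/lib/systemd/system/runlevel$n.target"
  if [ -L "$t" ]; then
    printf 'runlevel%s.target -> %s\n' "$n" "$(readlink "$t")"
  else
    printf 'runlevel%s.target: not found here\n' "$n"
  fi
done
```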


Dusan Baljevic (amateur radio VK2COT)

Hi Dusan, Thank you for providing the information.

I did some testing and found that the scenario I described occurs when the 'systemd' package is updated to version 219-67.el7_7.1. It is not an issue with version 219-67.el7 or prior. I wasn't able to find any release note or bug fix that would explain this change in behavior.

Downgrading the following packages to the prior version resolves the issue: systemd, systemd-libs, systemd-sysv, systemd-python, systemd-devel, libgudev1, libgudev1-devel.

BEFORE downgrade:

systemd-libs-219-67.el7_7.1.x86_64
systemd-sysv-219-67.el7_7.1.x86_64
systemd-devel-219-67.el7_7.1.x86_64
systemd-219-67.el7_7.1.x86_64
libgudev1-devel-219-67.el7_7.1.x86_64
libgudev1-219-67.el7_7.1.x86_64

AFTER downgrade:

systemd-python-219-67.el7.x86_64
systemd-libs-219-67.el7.x86_64
systemd-devel-219-67.el7.x86_64
systemd-sysv-219-67.el7.x86_64
systemd-219-67.el7.x86_64
libgudev1-devel-219-67.el7.x86_64
libgudev1-219-67.el7.x86_64
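The downgrade can be expressed as a single yum transaction. This sketch only assembles and prints the command (the package names and the 219-67.el7 build are taken from the post above); the printed command would then be run as root on the affected host:

```shell
#!/bin/sh
# Build the yum downgrade command from the package list in the post.
ver="219-67.el7"
pkgs="systemd systemd-libs systemd-sysv systemd-python systemd-devel libgudev1 libgudev1-devel"
cmd="yum downgrade"
for p in $pkgs; do
  cmd="$cmd $p-$ver"
done
echo "$cmd"
```

Doing all packages in one transaction matters here: yum resolves the version-locked dependencies between systemd and its subpackages together instead of failing on each one individually.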