Patching with Reboot on Satellite 6.3.1

I have 2-3 thousand servers and workstations that have to be patched at different intervals and rebooted after patching. They are in different VRFs behind firewalls. I have Capsules in each VRF to interface between the Satellite and the clients. I've created host collections to schedule the patching, but there is no option to tell it to reboot after the patches have been applied. I don't want to have to install something on each system, like cron jobs, or copy SSH keys.
Is there some way to tell the systems to reboot after patching? Satellite 5.7 had it; please tell me that hasn't been lost. That was a critical item for large companies. A command-line solution (hammer or Ansible) that can be scripted would be ideal; the WebUI will suffice.
Any help will be greatly appreciated!


Hello Ken,

Have you considered opening a support case? Satellite 6.3.1 is very new, so we as customers might not yet know all the features you require.

If your servers and workstations all ran RHEL 7.5, Cockpit would be an option to consider. It provides a WebUI that can also check whether new patches are available, but you need to hop to each server/workstation. It can use password login to connect to each one.

Ansible might be a better option.


Jan Gerrit Kootstra

Hi Ken,

Regarding your comment "I don't want to have to install something on each system, like cron jobs or copy ssh keys": that isn't realistic. You must have either foreman-proxy or some user's SSH keys with rights to patch and reboot.

I am using Ansible to patch, reboot, check that hosts come back up, check that applications are running, and so on. But to do that you either have to install Ansible on the Satellite server or buy Ansible Tower/Engine, and drive it via cron from the Satellite/Capsule server.
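As a minimal sketch of the "check that hosts come back up" step, assuming bash (for /dev/tcp) and coreutils `timeout` on whatever machine runs Ansible; the retry counts and the SSH port are placeholders to tune:

```shell
#!/bin/bash
# Sketch: poll a rebooted host until its SSH port answers again.

wait_for_host() {
    local host="$1" tries="${2:-30}" delay="${3:-10}" i=0
    while :; do
        # /dev/tcp is a bash-only path; `timeout 2` caps a hung connect
        if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/22" 2>/dev/null; then
            return 0    # SSH port is open; host is back
        fi
        i=$((i + 1))
        [ "$i" -ge "$tries" ] && return 1
        sleep "$delay"
    done
}
```

Once `wait_for_host` returns 0, a follow-up Ansible play or ad-hoc command can verify the applications, as described above.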


Ansible can be a quick and simple solution, and if you already have e.g. jump servers from which you can SSH to the servers, you can install it there; it is just a "yum install ansible" away (with the Ansible repo enabled; it used to come from EPEL but that changed recently).

Then you can run playbooks, or even ad-hoc commands. You need a list of hostnames, and that can be obtained with hammer commands.
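For example, something along these lines. This is a sketch, not a drop-in script: the host collection name "Patching" is an assumption, the hammer CSV column layout may differ by version, and the connecting user is assumed to have sudo rights on the targets:

```shell
#!/bin/sh
# Sketch: build an ad-hoc Ansible inventory from hammer, then patch
# and reboot the hosts in the (assumed) "Patching" host collection.

inventory=/tmp/patch-hosts

update_cmd() {
    # compose the ad-hoc update command so the logic is visible up front;
    # -b escalates with sudo, the yum module brings every package to latest
    printf 'ansible all -i %s -b -m yum -a "name=* state=latest"' "$1"
}

if command -v hammer >/dev/null 2>&1 && command -v ansible >/dev/null 2>&1; then
    # CSV output is roughly Id,Name,... -> keep the Name column, drop the header
    hammer --csv host list --search 'host_collection="Patching"' \
        | tail -n +2 | cut -d, -f2 > "$inventory"
    sh -c "$(update_cmd "$inventory")"
    ansible all -i "$inventory" -b -m command -a "/sbin/shutdown -r +1"
fi
```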

I've used remote execution within satellite/foreman in the past to do this. This still means you have to roll out ssh keys for remote execution though.

Greetings Klaas

Perhaps "Job Templates" could help. Create a job template that:

1. counts the number of patches to apply (no need to reboot if there are no patches, right?)
2. if the count > 0, applies the patches (yum update) and reboots
3. otherwise does nothing
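A rough sketch of such a template body in plain shell (hedged: the exact wrapping depends on the template type in your Satellite version):

```shell
#!/bin/sh
# Sketch of the three steps above. `yum check-update` exits 100 when
# updates are pending, 0 when there are none, and 1 on error.

pending_action() {
    case "$1" in
        100) echo "update-and-reboot" ;;
        0)   echo "nothing-to-do" ;;
        *)   echo "error" ;;
    esac
}

if command -v yum >/dev/null 2>&1; then
    yum -q check-update >/dev/null 2>&1
    rc=$?
    case "$(pending_action "$rc")" in
        update-and-reboot) yum -y update && /sbin/shutdown -r +1 ;;
        nothing-to-do)     echo "No patches to apply; skipping reboot." ;;
        *)                 echo "check-update failed (rc=$rc)" >&2; exit 1 ;;
    esac
fi
```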


I have opened a case with Red Hat, 2 to be exact, and got 2 different answers. We are under DFARS, and putting the root keys on each server is definitely a no-go with security. We have Satellite 5.7 that we've been using, and it had no problem doing the patches and a reboot, but from what Red Hat Support told me, they removed that function. That makes no sense to me, since that was, in my opinion, one of the main functions of Satellite. Having to copy the root SSH keys to each server is a really amateur idea. No one in their right mind is going to give one location unlimited passwordless root access to every server. If that server gets hacked, your whole enterprise is completely compromised. There has to be a reasonable way to do this, or Satellite 6.x is useless.

If someone compromises your Satellite 5 server, you'll have a hard time recovering from that as well. Central management means a central point where an attacker can do a lot of harm. By the way: if you use remote commands in Satellite 5, you already have that same functionality running in your environment right now (without SSH keys, though; it goes through the RHN client) :)

But if you really don't want remote execution: use yum-cron with an automated reboot afterwards :)
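On RHEL 7 that would look roughly like this in /etc/yum/yum-cron.conf (a sketch; the key names are from the stock RHEL 7 yum-cron package):

```
# /etc/yum/yum-cron.conf -- apply updates unattended
[commands]
update_cmd = default
download_updates = yes
apply_updates = yes
```

yum-cron itself has no reboot step, so you would pair it with a cron entry of your own, e.g. one that runs "needs-restarting -r || /sbin/shutdown -r +5" (needs-restarting is in yum-utils and exits non-zero when a reboot is required) after the update window.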

As Klaas mentioned, this is no different from Satellite 5.

Satellite 5 offered the ability to run arbitrary commands on a client being managed via Satellite. A malicious user could run rm -rf / on any subset of your systems.

The only arguable difference that you can possibly make is that Satellite 5 used the machine's identity (the system ID in /etc/sysconfig/rhn/systemid) as the only authentication method.

In Satellite 6, you have a ton more options to limit the potential damage that a user can do (either intentionally or not). These include, but are not limited to:

  • With an SSH based transport, any/all methods for securing SSH can be used.
  • You can limit the commands that can be run via sudo/su. (Pro-tip: use centralized sudo such as via Red Hat IdM)
  • You can connect as an unprivileged user; connecting as root via SSH is not required.
  • Each Satellite / Capsule (if you are using them) maintains distinct SSH keys, meaning that the scope of affected systems is limited in the event of key compromise.
  • You have built in roles & RBAC permissions so that you can limit who has the ability to issue jobs via remote execution
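For the sudo route, a sketch of what that restriction could look like (the user name "rex" and the exact command paths are assumptions; validate with "visudo -cf" before installing):

```
# /etc/sudoers.d/rex-patching
# Allow the remote-execution user to patch and reboot -- nothing else.
Cmnd_Alias PATCHING = /usr/bin/yum -y update, /sbin/shutdown -r *
rex ALL=(root) NOPASSWD: PATCHING
```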

Why not just reboot after the patching anyway?

My method is as follows:

Apply the errata to the systems that are in a particular host collection:

hammer job-invocation create --job-template-id 107 --inputs "action=update" --search-query host_collection="$host_collection"

Then I have a job template with "/sbin/shutdown -r +1" in it. I call that job template, using the host collection name as the search, in a hammer command inside a bash script:

hammer job-invocation create --job-template-id 132 --search-query "$host_collection" 
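Put together, the wrapper script could look something like this. A sketch only: the template IDs 107 and 132 come from the post above and will differ in your Satellite, both invocations here use an explicit host_collection search, and the fixed sleep stands in for real job polling via "hammer job-invocation info":

```shell
#!/bin/bash
# Sketch: run the patch job, pause, then run the reboot job.

host_collection="$1"

search_for() {
    # quote the collection name the way hammer's search syntax expects
    printf 'host_collection="%s"' "$1"
}

if command -v hammer >/dev/null 2>&1; then
    hammer job-invocation create --job-template-id 107 \
        --inputs "action=update" \
        --search-query "$(search_for "$host_collection")"
    # crude pause; a real script would poll the job status instead
    sleep 600
    hammer job-invocation create --job-template-id 132 \
        --search-query "$(search_for "$host_collection")"
fi
```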

Doesn't that require having the root SSH keys on the client? Or at least some user keys that have shutdown privilege? I was looking at this, but I don't see a way to do it without the keys being on the client.

If this thread is too old to reactivate, I understand, but I do have a couple of questions here.

Over at GitHub, the documentation for Foreman's remote execution feature describes SSH support as a "provider", and a couple of other providers are mentioned, including MCollective and Ansible.

EDIT: here's a link to the GitHub page for remote execution design:

It describes conceptually how the SSH, MCollective and Ansible providers are intended to work, and mentions that SSH is the only one currently provided but that the plan is to provide more.

I really would like to avoid having to justify another pipeline between the Satellite server and its managed hosts, given the existence already of the puppet-agent and katello-agent. Is there any way to do remote execution within these already-trusted connections?