Scheduling patching

We've migrated to Satellite 6.1 from 5.6 (as you would) and I'm attempting to update documentation for our next Dev patch cycle.
Previously, we could schedule a reboot, deploy patches, and another reboot.
I am looking for a way to do this with Satellite 6.1 that doesn't involve touching each server individually, but I can't seem to find it.

I understand that Puppet (open source) is integrated with Satellite 6.1.

Is the assumption here that Puppet or MCollective should be used in place of action chaining?


I was searching for the same thing last week and it doesn't appear to exist in the current versions. The only option I found was "Apply Errata" on the errata page, and that happened instantly with no schedule option. I don't know of a reboot option in this version. I haven't seen anyone recommend a Puppet process to accomplish it, but I would be interested to hear if it can be done that way.

Yes - I've headed in the same direction with Bulk Actions > Install Errata, BUT (and it's a big BUT) it just goes off and does it once you click the button - no scheduling. I appreciate a lot of the new methods in Satellite 6.1, but there are big gaps in terms of workflow compared to 5.6. I did come across a Puppet module, scheduled_runonce, that provides similar functionality - but getting Puppet up and functioning (which we are doing) is not trivial.

I have hit this issue as well. You will find that Red Hat's response will likely be to use the Hammer CLI to trigger the patch deployment. In my eyes this defeats the purpose of having the UI (which has unfortunately lost some workflow features). My issue is that the patching workflow is often carried out by administrators who aren't necessarily happy to use a CLI, so the end result is scripting something bespoke/in-house around the CLI (i.e. writing your own UI).

Can I suggest that you raise a Red Hat ticket for it as an issue? These items will hopefully get more focus in subsequent Satellite 6 builds if more people raise them formally.

Rumor has it that scheduling is due out as part of Satellite 6.2, and I believe there are already several RFEs open for it.

Dave Caplan and Rich Jerrido had a slide at the end of their presentation at Summit with proposed upcoming features in 6.2. What will actually be there remains to be seen.

When I read "scheduling" I think of "cron" and "hammer", not of an administrator using a CLI or a UI.

For generic and predictable scheduling against Hammer I would agree (e.g. everything gets patched nightly/weekly), but it depends greatly on your target environment.

One of the benefits (and selling points) of Satellite is that it makes the administration of patching and lifecycle management of multiple nodes straightforward. It also provides an interface for administrators (who may not be proficient in patching the OS) to accomplish this task without too much technical buy-in.

If you are using cron + Hammer the question would need to be asked, why use/pay for Satellite at all? Surely an enterprise that is paying for an enterprise class tool isn't going to discard the advertised benefit/features of the UI in favour of using cron + hammer to schedule their patching?

In addition, I am yet to work in an environment that has had a patching cycle so consistent that it can be scheduled with cron (when separate environments/approvals etc. are taken into account). If anything, 'at' in combination with the Hammer CLI would provide the capability to schedule a patching execution at a point in time in the future (to match Satellite 5 behaviour).

That's the beauty of Satellite: there are multiple ways to accomplish the same task. "cron" and "hammer" may be your preference, but others may prefer the point-click-shoot simplicity of a GUI, or the incredible flexibility of the API, which is far more flexible than cron/hammer when properly implemented.

I could go and rewrite all of our patching code using cron and hammer, but to be honest, that is the least elegant and least enterprise-like use of Satellite IMO.

If your phone were suddenly no longer able to make calls because of an upgrade, and that was by design, you might take issue with the software creator. The same is true here. These "features" have been long-standing tenets of the Satellite product that need to be extended to future products, because many customers already have well-established internal policies and practices that require them.

To each their own.

+1 to Will's comment. And one of the design goals of Satellite 6 is to have all of its capabilities exposed via the API, CLI, and UI, and to let you, the end user, use whichever is best for your deployment.

Also, what Satellite gives you above and beyond just 'cron+hammer' is simplified scheduling and a cleaner audit trail. cron+hammer means that you'd have to check in two places (syslog and Satellite) when something goes wrong.

Not that there is anything wrong with cron+hammer. Note: Satellite didn't have a decently featured CLI until Satellite 5.7 (and Satellite 6, of course).

Here, for the time being, Satellite is merely used as an internal repository server (environment 'Library' and the 'Default Organization View').

Hello. So what is the status of scheduling in the Satellite UI now? Is it in place in 6.2? We are very keen to be able to use this automatic scheduling of errata updates in the UI.

Cheers /Anders

Indeed. Satellite 6.2's Remote Execution feature includes the scheduling component.
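For the CLI side of that: scheduled invocations can also be created from hammer. A rough sketch only - the template name below is the stock one shipped in 6.2, but the host collection name, command, and timestamp are placeholders, not values from this thread:

```shell
# Sketch: queue a remote-execution job for a future time with --start-at.
# "Run Command - SSH Default" is the default 6.2 job template; adjust the
# search query and date for your own environment.
hammer job-invocation create \
  --job-template "Run Command - SSH Default" \
  --inputs "command=yum -y update" \
  --search-query "host_collection = Dev-Servers" \
  --start-at "2017-02-01 03:00:00"
```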

Do you know if servers in different timezones will execute the schedule on the local time? Or will they use the Capsule server time? Thanks!

As Rich mentioned Remote Execution works great in 6.2.x. It works just like the "old days" and better IMO.

However, errata updates are having a bit of an issue.

I'd suggest waiting for 6.2.5 if this BZ is still scheduled to be resolved then; otherwise, you can't patch by errata type.

Okidoki, thanks! :-)

Has anyone achieved automated patch management? If so, can you share how?

I struggled with this for a while. The actions and output in the API aren't identical to the Hammer CLI. I wanted to write everything in Python, but that ended up being an EPIC fail. What I ended up doing was making Host Collections for different levels of patching in my Satellite Organization: NP-Group[1-4], PRE-Group[1-4] and lastly PRD-Group[1-4]. On the first Mon, Tue, Wed and Thu of the month, the NP groups (Host Collections) get updated. I wrote scripts for them all and call them via cron. Here's the code to update a host collection.

#!/usr/bin/env bash

for server in $(hammer --output csv host-collection hosts --organization CS --name NP-Group01 | grep -E '[1-9]' | sed 's/.*,//g'); do
        echo "Updating $server"
        hammer host package upgrade-all --async --host "$server"
done

The above script, and its NP counterparts, are called via the following cron entries:

0 3 1-7 * * [ "$(date '+\%a')" = "Mon" ] && /opt/sat6-mgmt/cs/hostcollection-mgmt/
0 3 1-7 * * [ "$(date '+\%a')" = "Tue" ] && /opt/sat6-mgmt/cs/hostcollection-mgmt/
0 3 1-7 * * [ "$(date '+\%a')" = "Wed" ] && /opt/sat6-mgmt/cs/hostcollection-mgmt/
0 3 1-7 * * [ "$(date '+\%a')" = "Thu" ] && /opt/sat6-mgmt/cs/hostcollection-mgmt/
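As an aside on those entries: the 1-7 day-of-month range combined with the date '+\%a' guard is what restricts each line to the first Monday/Tuesday/etc. of the month. The same condition can be written as a small shell helper (the function name and its optional date argument are mine, for illustration; requires GNU date):

```shell
# Hypothetical helper (not from the thread): succeeds only when the given
# date (default: today) is the first <weekday> of its month - the same
# condition the cron guard expresses with "1-7" plus the day-name test.
is_first_weekday() {
    local want="$1" when="${2:-now}"
    local dom day
    dom=$(date -d "$when" '+%d')           # day of month, 01-31 (GNU date)
    day=$(LC_ALL=C date -d "$when" '+%a')  # abbreviated weekday, e.g. "Mon"
    [ "$day" = "$want" ] && [ "$dom" -le 7 ]
}
```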

I'm sure it could be more elegant but it works. I didn't go the remote execution route because we didn't want to put ssh keys on the hosts. Yes, we could have used Puppet to do it, as we have a working Puppet setup with r10k, but it wasn't the route we wanted to go down. When Remote Execution can utilize the Katello agent, we will revisit this area. I check the tasks daily in the UI to see if any of the updates failed. If they do, I resolve the issue manually.

I use a similar bash script that calls hammer commands to reboot and patch my systems based on their host collection names, passed in as arguments, run from a cron job. I also added a check that watches the job to see if a task fails and, if it does, pages out to my team.

hammer job-invocation create --job-template-id 91 --inputs "action=update" --search-query "$hc"
hammer job-invocation create --job-template-id 92 --inputs "action=restart" --search-query "$hc" 
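The "watch the job" part could be sketched roughly like this - illustrative only, since the id capture and output fields vary between hammer versions, and the paging step is whatever your team uses:

```shell
# Sketch: capture the invocation id from hammer's CSV output, then inspect
# the job afterwards. Verify the field positions against your own hammer
# version before relying on this.
job_id=$(hammer --output csv job-invocation create \
    --job-template-id 91 --inputs "action=update" \
    --search-query "$hc" | tail -n1 | cut -d, -f1)

# The job summary includes succeeded/failed host counts.
hammer job-invocation info --id "$job_id"
```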

Surprised no one has mentioned using Ansible (and by extension Tower or Rundeck) to achieve this, as it's Red Hat's flavour of the month. We have moved to this for some application stacks that need to have the application layer gracefully shut down/restarted and brought up in specific sequences after patching.

Hi PixelDrift,

Can you please point to any documentation, URL, or code you have used for automating patching with Ansible? We are also looking at different options and Red Hat suggested we try it.

This is overly simplistic but works; I'm sure there are some roles on Ansible Galaxy as well.

- hosts: all
  gather_facts: true
  # serial: set to the number of servers to patch at one time, or leave
  # commented out to do them all at once.
  # serial: 7
  tasks:

    - name: apply all patches
      yum: name=* state=latest
      become: yes

    - name: restart machine
      shell: sleep 2 && shutdown -r now "Ansible updates triggered"
      async: 1
      poll: 0
      become: yes
      ignore_errors: true

    - name: waiting for server to come back
      local_action: wait_for host={{ inventory_hostname }} state=started port=22 delay=15 timeout=300 connect_timeout=15
      become: no
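Assuming the play above is saved as patch.yml (filenames here are mine, not from the thread) and you have an inventory of the target hosts, it runs in the usual way:

```shell
# Placeholder filenames; --limit narrows the run to a subset of hosts,
# e.g. a single host collection exported into the inventory.
ansible-playbook -i inventory.ini patch.yml --limit 'NP-Group01'
```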

Can we run this playbook from within the Satellite Server, as snippets, for remote patch management? If yes, how?

I'm not sure it would work in 6.2, but to be fair I've never tried either. I believe 6.3 will have some Ansible integration in it, so you may have better luck when that version becomes available.

Does anyone have input on how to integrate Ansible playbooks with Remote Execution? As far as I understand, Ansible integration in 6.3 is (a) dynamic inventory for Ansible Tower and (b) a provisioning callback to Ansible Tower.

We are especially looking at patching hosts which are part of an OpenShift cluster, so we need orchestration to unschedule, upgrade, and reschedule OpenShift nodes, and only a few at a time, to ensure the platform keeps running. Best regards, Johannes

Coming back to this topic after almost 2 years. We are having some success with remote execution jobs now. It took a bit to set up SSH access from Satellite, deploy the key, and enable sudo access for the user, but despite a glitch or two, it looks like the right solution.


Can you please clarify what you mean by "Took a bit to ensure ssh access from satellite..."? Do you mean it was difficult to achieve, or that it took some time? In other words, was it simple enough to do, but required effort per server, or was it complex and difficult to achieve? I work on the Satellite documentation, so any input you can provide would help me better understand your situation.

I prefer to use Ansible Tower with Satellite 6. Our Ansible playbooks include all the custom tasks for patching, and Ansible Tower's dynamic inventory syncs hosts from host collections in Satellite 6. I recommend using Ansible since you might have specific patching requirements based on dev/QA/prod, like the time of reboot after patching, disabling or enabling paging for prod, or the reboot order for systems.

Here is what I would like to try. I would like the flexibility to run a job from Ansible Tower that patches a host collection/host group (with the ability to exclude hosts on the fly). First I would like to run a pre-patch validation or planning step, then patch, then reboot (excluding specific hosts if need be), then run a post-validation, sending notifications of successes and failures along the way. I am looking for the easiest, most efficient way of doing this; I have come up with a few plans but nothing rock solid.