A naïve approach to two-phase patching: What is the better way?


Hi, folks,

What I'm doing right now is a weekly, three-part, two-phase process:

2) Run a yum update on all non-production servers.
3) Run a yum update on all production servers, tell it no, capture the transaction, and put it in the root directory.
1) Run a yum load-transaction on the previous week's captured transaction.

I've put the steps in the wrong order to help explain the process, which should give you an idea of how convoluted it seems.
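For the record, here is roughly what the capture-and-replay part looks like on the command line. This is a sketch of the workflow described above, not a polished script: the paths are examples, and it relies on yum's behavior (at least on RHEL 6/7) of saving an aborted transaction under /tmp as a yum_save_tx*.yumtx file and printing the path when you answer no.

```shell
# Week N, on a production server: compute the update but answer "no".
# yum saves the aborted transaction as /tmp/yum_save_tx*.yumtx and
# prints the path; stash a copy in /root for next week.
yum update --assumeno
cp /tmp/yum_save_tx*.yumtx /root/

# Week N+1, on the same server: replay exactly the saved package set,
# ignoring anything Red Hat has released in the meantime.
yum load-transaction /root/yum_save_tx*.yumtx
```

The fragility is visible here: the replay fails if any package in the saved transaction has since been obsoleted or pulled from the repository.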

This works fine for now--it's better than nothing, which is what it replaces--but it's potentially fragile, and I don't want to put effort into bullet-proofing it that I could better spend learning the better way to do this. So: What is that better way? And how do I do it?


John A


This is what "Content Views" and "Life Cycle Environments" in Red Hat Satellite 6.x were invented for. Satellite 6 is very flexible, and there are entirely different models that can be used to achieve your goal, but the way I've used it in my organization is this:

1) Define a "Life Cycle" with three stages: "Library", "Test", and "Production".

2) Define a Content View, say "Example-dot-com-RHEL 7".

3) Assign the "Production" systems to the "Production" Environment of the "Example-dot-com-RHEL 7" Content View. Assign the "Non-Production" systems to the "Test" Environment. Assign your own sysadmin-test system(s) to the "Library" Environment.

4) Each week, "Publish" a new Version of the Content View to the "Library" Environment, then patch & test your sysadmin-test systems. Assuming the patches work, "Promote" the Content View to the Test Environment. Run a "yum update" on all of the test machines.

5) Each following week, "Promote" the previous week's Content View version from "Test" to "Production", then start a new cycle of publishing to "Library", test, promote to "Test", test again, etc. The Production servers are now guaranteed to have exactly the same patch set that was tested the previous week, regardless of any changes Red Hat may have released in the 7 days since then.
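If it helps, the weekly publish/promote cycle can also be driven from the hammer CLI instead of the web UI. This is a hedged sketch: the organization and Content View names are placeholders matching the example above, and exact option names can vary a bit between Satellite releases, so check `hammer content-view --help` on your version.

```shell
ORG="Example Organization"   # placeholder organization name
CV="Example-dot-com-RHEL 7"  # the Content View from step 2

# One-time setup: the lifecycle path Library -> Test -> Production.
hammer lifecycle-environment create --organization "$ORG" \
  --name "Test" --prior "Library"
hammer lifecycle-environment create --organization "$ORG" \
  --name "Production" --prior "Test"

# Each week: publish a new Content View version into Library...
hammer content-view publish --organization "$ORG" --name "$CV"

# ...and, once it has been tested, promote that version one stage
# forward (here version 2 stands in for the version just published).
hammer content-view version promote --organization "$ORG" \
  --content-view "$CV" --version 2 --to-lifecycle-environment "Test"
```

The same `promote` command, with `--to-lifecycle-environment "Production"`, is what moves last week's tested version forward in step 5.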

Thanks! I hadn't anticipated doing this the right way being so simple. The documents make it sound like a bigger deal. I spent a few minutes creating two lifecycle environments, Test and Production, and then one content view that contains everything current. That took a long time to build, and again, I think I might be doing it suboptimally.

James, if you don't mind, could you say a little more about how you partition your content into Content Views?

And just to be certain: A Content View is a snapshot, right? I've been thinking about it in complicated ways. Now I think I've overcomplicated things.

A Content View is a customized clone of selected repositories. When a Content View is published and promoted to a higher environment, it gives you the option of excluding or filtering out unneeded or faulty patches/packages that were noticed in a lower environment. That is the basic purpose of a Content View. James has explained the flow very well; it is the most commonly practiced approach in most environments.
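The excluding/filtering mentioned above is done with Content View filters. As a rough hammer sketch, assuming the same placeholder names as before (the filter and package names here are invented for illustration, and options may differ by Satellite version):

```shell
ORG="Example Organization"   # placeholder
CV="Example-dot-com-RHEL 7"  # placeholder

# Create an exclude filter on the Content View...
hammer content-view filter create --organization "$ORG" \
  --content-view "$CV" --name "block-faulty-kernel" \
  --type rpm --inclusion false

# ...and add a rule naming the package to keep out of the next publish.
hammer content-view filter rule create --organization "$ORG" \
  --content-view "$CV" --content-view-filter "block-faulty-kernel" \
  --name "kernel"
```

Filters take effect on the next publish, so an excluded package disappears from the environments only as the new version is promoted through them.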

Satellite would also be my first choice. If you want to give it a try, you can also use the upstream project (https://theforeman.org/ with the Katello plugin). If you are looking for a smaller footprint, you can use Pulp standalone (https://pulpproject.org/) or build on something like this: https://github.com/Tronde/poor-man-s-rhel-mirror

In any case, what you are doing solves the problem client-side, which is a fine idea if you don't want to solve it server-side--but solving it server-side is far more common.

Using Satellite is my strong preference! But I had a deadline to meet, and I did learn a bit about Ansible in the process. I'm going to call that a small victory and move on.

Also, I think the client-side approach only works on older RHEL releases: with the switch to dnf, RHEL 8 seems to have dropped yum's load-transaction option.