Patching and how to set it up in Satellite 6.1.8

I have read the docs on setting up content views to allow patching of my machines. I have set up the groups of development, test, support and production.

The docs are not complete in what they tell you to do. Is there a cookbook style of instructions written such that I can be successful in this endeavor? I also want to get this info included in the documentation that Red Hat delivers. It seems that every time I want to do something more with my Satellite system that is JUST BASIC to what the system should do, I end up spending an unusual amount of time trying to figure it out. It is really tiring doing this.

So does anyone have some input on Content Views, Environments and such that will help.

Bob Teeter

Responses

Take a look at the Content Management Guide, which is a new guide for Satellite 6.2. In particular, look at Chapters 6 and 7 (Creating an Application Life Cycle and Creating Content Views).

Note: while this guide is new for Satellite 6.2 and has some 6.2-only features, Chapters 6 and 7 are applicable to 6.1.

Patching can be done in several ways, depending on whether you're using Puppet and whether your machines are correctly registered to Satellite.

I assume you have set up Satellite 6.1.8 (and earlier) correctly, meaning that you have imported the manifest file, synchronized the RHEL content for one or more RHEL releases, created and published a content view, and created activation keys and host groups.

I also assume you have built and/or registered machines to Satellite so they appear in the host groups and content hosts sections. The content hosts should be put into the correct host collections. If you're using VMware or other virtual instances, you will need to have virt-who correctly configured.

  • Note: in Satellite 6.2 the host collections/content hosts and host groups should be merged for simplicity.

Now you do the following:

1) From Satellite, you can automate the sync of content views so they always bring in the latest errata for the release you're using. You do this using Sync plans.

  • Note that old versions of content views are not automatically culled/deleted, so you may want to automate this via cron. You may also want to think about automating the promotion of content views across life cycle environments, though perhaps not into production. A rough sketch of a sync plan and a promotion is shown below.
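
For illustration, creating a sync plan and promoting a content view version from hammer might look something like this (a rough sketch only; the names, date, version and IDs are placeholders, and option names may vary slightly between 6.1 and 6.2):

hammer sync-plan create --name "Daily RHEL Sync" --interval daily --sync-date "2016-08-01 02:00:00" --enabled true --organization-id 1
hammer product set-sync-plan --name "Red Hat Enterprise Linux Server" --sync-plan "Daily RHEL Sync" --organization-id 1
hammer content-view version promote --content-view "RHEL7-Base" --version 2.0 --to-lifecycle-environment Test --organization-id 1

The promote command is the sort of thing you could wrap in a cron job for the non-production environments.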

2) There are a couple of hammer commands which can be used to patch servers. First, list the servers:

hammer content-host list --organization-id <ORG_ID>

  • This will list the content hosts; you should see some IDs there for available errata. Note that <ORG_ID> needs to be replaced with the ID of your organization.

hammer content-host errata apply --organization-id <ORG_ID> --content-host-id <HOST_ID> --errata-ids <ERRATA_IDS>

  • This command will force errata to be applied to a server, assuming that it meets the previously mentioned criteria. With the right script, you can automate this process; a sketch of such a script follows. You can also use the REST API to perform these actions (which is easier).
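
As a rough illustration of such a script (untested; the organization ID and erratum ID are placeholders, and it assumes hammer's global --csv option so the content host IDs can be parsed from the first column):

#!/bin/bash
# Apply a single erratum to every content host in the organization.
ORG_ID=1
ERRATUM="RHSA-2016:0008"

# Skip the CSV header row and take the first column (the content host ID).
for HOST_ID in $(hammer --csv content-host list --organization-id "$ORG_ID" | awk -F, 'NR>1 {print $1}'); do
  hammer content-host errata apply --organization-id "$ORG_ID" --content-host-id "$HOST_ID" --errata-ids "$ERRATUM"
done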

3) You can also enforce patching from the client using Puppet (to install and configure yum-cron) or just by installing yum-cron via Ansible. There is a good public Puppet module out there to do this.
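
For comparison, if you just want the same effect without Puppet or Ansible, the manual steps on each client are small. A minimal sketch, assuming RHEL 7 clients (RHEL 6 uses chkconfig/service rather than systemctl):

yum install -y yum-cron
# Optional: limit automatic updates to security errata by setting
# update_cmd = security in /etc/yum/yum-cron.conf
systemctl enable yum-cron
systemctl start yum-cron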

I think a good patching strategy involves utilising the steps in both 2 and 3, which makes changes at both the client and the server. Note that some of this functionality and some of the commands may have changed in Satellite 6.2. The Red Hat Insights plugin may also prove pretty useful for checking patch states.

Finally, the Errata view (https://satellite-fqdn.com/errata) and the Content Host Errata view (Satellite > Hosts > Content Hosts > Particular Host > Errata) are very useful for checking patch levels.

Thank you for the input - repos are auto-synced every night. All machines are registered to the Satellite system correctly. The machines have been patching from the Library entries. What I have finally figured out is how to create a content view for my 2nd-quarter patching and how to extend those to my development, test and production views. I can now use the GUI to access each machine, assign it to an environment and then assign it to the 2nd-quarter content view.
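
(For reference, the rough hammer equivalent of those GUI steps is something like the following; a sketch only, with placeholder names, versions and repository IDs:)

hammer lifecycle-environment create --name "Development" --prior "Library" --organization-id 1
hammer content-view create --name "Q2-Patching" --organization-id 1
hammer content-view add-repository --name "Q2-Patching" --repository-id 12 --organization-id 1
hammer content-view publish --name "Q2-Patching" --organization-id 1
hammer content-view version promote --content-view "Q2-Patching" --version 1.0 --to-lifecycle-environment "Development" --organization-id 1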

/rant just so you know, BUT I still do not have a cookbook set of instructions. All I am asking for is an appendix in the user manual that lists the steps to achieve what I have done, without having to read 50 pages of less than literate (IT based) descriptions. I am sorry if this hurts the documentation people's feelings, but every time I get to these problems I seem to have to invest major time in figuring out what needs to be done.

Solution - how about asking the customer to be on a review committee to pre-verify the documentation before it is sent out to unsuspecting customers? I know that we spend $10,000 a year on our installation, and I would hope that Red Hat would have enough pride of product to fix these issues up front. /off rant

So back to fixing 110 machines in Satellite 6.1.8

Bob Teeter

Why doesn't the documentation team have public repositories for documentation and accept pull requests from the public (or at least customers)? The current process for raising issues/bugs against documentation isn't worth the effort (especially for minor fixes in documentation).

This is not a bad suggestion at all - you could try opening an RFE for this in Bugzilla. This would actually make it easier for internal staff to contribute back as well. Some consultants have their own books on GitBook for Satellite 6.

A great idea. Doc changes always seem to take AAAAGES to resolve when they are filed via RH Support.

Hey Bob,

Are there some commands in particular you are looking to find? I know in 6.1 the hammer and API documentation was pretty awful. The 6.2 documentation is looking much better.

Was my strategy/solution useful at all?

To be fair, just 'patching' means different things to different people, and there isn't a one-size-fits-all solution. Just 'patching' can range from 'just register the boxes and let me install errata via a centralized UI' to 'I need to ensure that I have a patch cadence (monthly|quarterly|whatever) based on a fixed set of content that I've curated, whilst maintaining the ability to release a fix for a critical zero-day vulnerability'.

The former is trivial to do (register your systems to Satellite, install katello-agent, and install errata to your heart's content via the UI or hammer). It requires you to know nothing about content views, content view filters, or lifecycle environments, nor does it require you to have an opinion on how to model those items. It is basically a glorified repository server, but with a UI.
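
To be concrete, that 'former' path amounts to something like this on each client (a minimal sketch; the Satellite hostname, organization and activation key are placeholders):

rpm -Uvh http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm
subscription-manager register --org="MyOrg" --activationkey="rhel7-prod"
yum install -y katello-agent

After that, errata can be installed per host from the UI or with hammer as described earlier in the thread.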

The latter is a bit more challenging as it requires you to understand such things as 'what exactly /is/ a content view', and what happens when I press the 'promote' button as an example. It is my belief that the aforementioned Content Management Guide covers this. (However, I work on Satellite /EVERY/ day, so I suffer from the 'curse of knowledge').

What I'd like your feedback on is the following. For Satellite 5, there is the Red Hat Satellite Channel Lifecycle Management with spacewalk-clone-by-date guide, which covers setting up patching in a structured manner for Satellite 5. Would an updated version of this guide (with the 6.x particulars) satisfy your needs?

Robert,

As Rich Jerrido has highlighted, we (the Satellite documentation team) have put a lot of effort into the Satellite 6.2 documentation to cover common use cases. In your instance, the Content Management Guide is likely to be the most useful. However, I acknowledge that it's a rather lengthy guide. If you're willing to clarify just what documentation you think would meet your requirements, as per this discussion, we could try to put together such a document in collaboration.

Russell,

As a representative of the Satellite documentation team (and Red Hat documentation in general), can you comment on my suggestion higher in this thread regarding making documentation repositories public and accepting pull requests from a wider audience?

PixelDrift.Net Support,

I can't give an official answer at this point, but there is a Red Hat precedent in the OpenShift documentation, all of which is contained in publicly accessible repositories at [1]. I would hope we could move to make all documentation repositories public, but I am not sure if or when that might happen.

So that we can judge public interest in doing so, perhaps create a new discussion on the topic? This would make it more visible to everyone who monitors the Customer Portal discussions. Having a separate discussion thread would allow all supporters to vote up the idea.

[1] https://github.com/openshift/openshift-docs

Hi Russell,

Just a comment that I'd also welcome the ability to submit requests for documentation updates. If I'm following a guide and find something that isn't quite right or a missing tip that others might find useful, it can be a bit cumbersome to raise a support ticket, reference where in the documentation the suggestion should go, what the updated text should be, etc. If I were able to easily submit an update for Red Hat to review, it would be a quick win for all.

Richard.

Richard,

Thanks for your reply. We're actively working to make it easier for customers to provide feedback on documentation. We need to be sure, however, that feedback goes to the right person and is acted upon appropriately.

Right now you're welcome to send feedback to the documentation department's general email address: ccs@redhat.com. Many people track email coming in to that address and will assign it for action as needed, which usually means a ticket will be raised then acted upon.

Russell,

Surely that 'right person (people)' would be the owner of the repository and would approve merge/pull requests?

Sending comments to an email gives Red Hat visibility, but there is close to zero visibility for customers and other users of the documentation. I have found Bugzilla is slowly becoming the same, with more and more bugs being closed to those outside Red Hat.

I think that for an open source company, a response of "send it to this email and we'll deal with it" doesn't really sound like a particularly open process.

PixelDrift.NET Support,

Thanks for your feedback. If we made our documentation available in public repositories then the approver of merge requests would be notified. No decision has yet been made as to whether or not we will be making this change.

Right now there are several methods of providing feedback on documentation, with hopefully one to suit everyone. You can comment on documentation at the bottom of the page, raise a discussion here in the Discussions area, or send an email to ccs@redhat.com. We may introduce more methods in the future, but these are available now. As with the documentation, we always welcome feedback about the feedback methods we have available.

Tickets opened for documentation issues should be publicly accessible by default, only being private if they contain company confidential information, either Red Hat's or a customer's.

Russell,

Thanks for the update. I have a real-world example here (although this isn't Red Hat documentation, it is for a man page, so I appreciate it isn't exactly the same issue as raised).

There was a minor issue in the psacct man page, so I went through the process of raising it in the Red Hat bugzilla here: https://bugzilla.redhat.com/show_bug.cgi?id=1240179

After roughly 6 months a patch is created and it looks likely to be merged. Then, 3 days later (as part of an internal process?), my bug is marked as a duplicate of another bug that I don't have access to view. Why is this? Can you advise why the linked bug 1233049 is private?

-edit-

Just on this point

You can comment on documentation at the bottom of the page

This isn't consistent. Some documentation appears to have an 'announcement' page which you can comment on, but other documentation, when found through search, doesn't have a section for comments.

Perhaps I am missing something obvious, but if I find the following link through searching the portal, how do I find the associated comments for this document? I can't see anything at the bottom of the document page, regardless of which format I choose to read it in. https://access.redhat.com/documentation/en-US/Red_Hat_Satellite/6.1/html/Puppet_Guide/index.html

PixelDrift.NET,

Thanks for that. First - thank you for raising that BZ ticket. That ticket was marked as a duplicate of another as you correctly guessed, because of a Red Hat internal process. What I can't yet explain is just why the other Bugzilla ticket was marked as private.

Regarding the option of commenting on documentation, I've just confirmed that this is being implemented progressively. To see an example of the feature, view the Red Hat Satellite 6.2 Puppet Guide here... https://access.redhat.com/documentation/en/red-hat-satellite/version-6.2-beta/puppet-guide/

At the very end of the document you should see a "Comments" section just like that used in the Discussions area. Until that is added to all documentation, I would suggest either raising a Bugzilla ticket or, even easier, starting a new discussion noting the issue.

Russell,

I have previously raised multiple issues with the example Puppet module used in this document. I am surprised that the latest version of the document persists with the example Puppet configuration. In short, the 'exec' commands that are used in the module subscribe to the httpd package, so they won't trigger unless the package state changes (e.g. from not installed to installed). Throughout the documentation the module is built progressively and re-run (with noop) at each step with the expectation that this event will be fired, but the package will already be installed in some instances.

The iptables and semanage items should be configured to run with an 'unless' that determines if they have already run, i.e. an 'unless' that checks for the firewall rule. Currently they will always trigger when the package is installed (or upgraded), and worse, if the httpd package is already installed on the host the firewall rule and semanage execs won't fire at all. They shouldn't be tied to the httpd package installation; they should be configured independently, with their own determination of whether they need to be executed.

The documentation now includes this statement:

It is inadvisable to use executable resources to constantly chain many Bash commands. 

If this is inadvisable, why is the primary example in this document, used to introduce a new user to the technology, written using this method? The first question a new user would ask is "well, if I don't do it that way, how should I do it? There's no example".

In short, Puppet module application should be idempotent. This module isn't, because it assumes the httpd package will always need to be installed and hangs 'run once' style scripts off that event. This should be fixed.
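
To make that concrete, an idempotent version of the firewall exec would look something like this (a minimal sketch of my own, not taken from the guide; the rule and resource names are illustrative):

exec { 'open-http-port':
  # Only insert the rule when the check below says it is missing.
  command  => '/sbin/iptables -I INPUT -p tcp --dport 80 -j ACCEPT',
  unless   => '/sbin/iptables -S INPUT | /bin/grep -q "dport 80"',
  provider => shell,
}

With the 'unless', the rule is added only when it is actually missing, so the resource no longer depends on the httpd package changing state and the module stays idempotent. The semanage exec could get the same treatment with a check against the output of 'semanage port -l'.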

OK guys - I have posted 2 examples of the cookbook approach for an appendix to any manual. They are:

https://access.redhat.com/discussions/1199583

https://access.redhat.com/discussions/1275493

In each of them I have tried to put the commands in sequence, with an explanation where the commands are not clear about what needs to be done. Each case is the bare minimum required to get a basic example implemented. Not a multiple-country/multiple-site example, but just the basics, so that any admin can use this as a starting point on the journey down the twisted path that we call IT.

My suggestion is that you have a subject specialist, such as a senior engineer who knows all about the material, supply a list of commands that implement a specific operation, and then help fill in the verbiage about what the commands mean, so that a person who has only worked with Red Hat software since the 4.2 release in 1994 can implement what needs to be done.

I am not picking on any specific person or group. I just want the things that I have to do to be easier to do, and the time it takes not measured in furlongs per fortnight.
When I started dealing with Satellite on Red Hat, I started with Satellite 6.0 on Red Hat 7.0. Boy, was that a disaster. It took over 2 months of talking to the back-line engineers, and the most senior of them, to resolve the basic install and start to get the system operational. 3 times I had to wipe out the machine and start over, as what little info was available did not work.

So this is the reason for my request for a cookbook appendix to the documentation.

Bob Teeter

Robert,

Thanks for your feedback. It's great to hear just what you're expecting from documentation as it helps us in planning.

With Satellite 6.2 we have been working to deliver what I believe is the style of documentation you call the "cookbook" approach. Please look over the following examples and see if they're of the style you're seeking. Note that the "Managing Errata" chapter covers what I believe you refer to as "patching".

As Satellite is designed to be flexible in its deployment, to meet different customers' requirements, it is difficult to provide cookbook style documentation which provides a linear path for customers to follow. The following examples go some way to meeting those needs.

Quick Start Guide - Installing, configuring, and provisioning physical and virtual hosts from Red Hat Satellite Servers https://access.redhat.com/documentation/en/red-hat-satellite/version-6.2-beta/quick-start-guide/

Managing Errata https://access.redhat.com/documentation/en/red-hat-satellite/version-6.2-beta/content-management-guide/#Managing_Errata

Russell,

In 'Managing Errata', both sections 9.2 and 9.3 'For Web UI users' need attention. These steps should be provided in a structured list with some screenshots provided to give context to statements such as "in this example, it is RHSA-2016:0008".

In 9.3, the following "Let’s apply a single OpenSSL errata to our test system through this tool." is obviously a copy and paste from 9.2 because it is referring to patching a single host, but 9.3 is meant to be discussing applying errata to multiple hosts.

Also in 9.3 you have the following in the hammer CLI section:

"List a set of registered systems using the UUID of the errata as a filter:"

It's not adequately explained why I need to use the UUID in this instance, when in the previous instance, patching a single host, the errata ID was sufficient. The output of the command before ("hammer erratum info --id RHSA-2016:0008") should also be included so that it's clear where the UUID value is being taken from.

"You need to run this command for each client system and replace --content-host with the name of the system for each execution. "

So what has really been provided in this section is the hammer command to patch a single system, not multiple systems. It's then left as an exercise for the reader to run this against multiple hosts. Surely this section should include a full hammer based example for patching multiple hosts?

Hi PixelDrift.NET Support,

Addressing each of your points:

1) In 'Managing Errata', both sections 9.2 and 9.3 'For Web UI users' need attention. These steps should be provided in a structured list with some screenshots provided to give context to statements such as "in this example, it is RHSA-2016:0008".

I'll see what I can do about adding some extra context around this item.

2) In 9.3, the following "Let’s apply a single OpenSSL errata to our test system through this tool." is obviously a copy and paste from 9.2 because it is referring to patching a single host, but 9.3 is meant to be discussing applying errata to multiple hosts.

Thanks for spotting this. I have implemented a docs fix.

3) Also in 9.3 you have the following in the hammer CLI section: "List a set of registered systems using the UUID of the errata as a filter:" It's not adequately explained why I need to use the UUID in this instance, when in the previous instance, patching a single host, the errata ID was sufficient. The output of the command before ("hammer erratum info --id RHSA-2016:0008") should also be included so that it's clear where the UUID value is being taken from.

I've rectified this and it should be public in the next beta release. You should be able to use the erratum ID instead of the UUID.

4) "You need to run this command for each client system and replace --content-host with the name of the system for each execution. " So what has really been provided in this section is the hammer command to patch a single system, not multiple systems. It's then left as an exercise for the reader to run this against multiple hosts. Surely this section should include a full hammer based example for patching multiple hosts?"

This one is a little tricky because there doesn't seem to be a hammer command to apply an erratum to multiple hosts. I'll see if I can come up with a script to accomplish this.
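
As a rough starting point, a simple loop like this would probably work (untested; the host names and the erratum ID are placeholders):

for HOST in client01.example.com client02.example.com client03.example.com; do
  hammer content-host errata apply --organization-id 1 --content-host "$HOST" --errata-ids "RHSA-2016:0008"
done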

Thanks for taking the time to look at this Daniel.

As mentioned above, if this documentation was in public version control these could have very easily been pull requests! :D

PixelDrift.NET Support,

Thanks very much for that feedback. With that we can better align the documentation to customers' expectations. I'll ensure the person working on the Content Management Guide is aware of your comments and see if we can get them incorporated soon.
