Best practices to patch Linux servers

Latest response

I know this is not the first time someone has asked this question, but I have never found a satisfactory solution that I could implement. See my requirements below.
Out of this discussion I want to identify one (of course standard) best way to implement patching in my environment. Currently I automate patching using a standard shell script combined with the RHN API. My current setup is as follows.

  • I manage 2000+ RHEL servers (versions 5, 6, and 7)
  • All servers are connected to RHN Satellite
  • Patching policy is quarterly

Does anyone have a better approach than the one I follow? Please help me out.


You seem to have the actual patching pretty well managed. However, the "best" patching policy for you is the one that suits your requirements. So what are those? What kind of SLAs (Service Level Agreements) are you committed to?

Patching is generally motivated by system security. Remember that IT security in general has three aspects:

  • Confidentiality - wrong people must not get access to certain things (at least password files and other authentication infrastructure; possibly entire systems and the information in them)
  • Integrity - the information processed must not be damaged in the process, neither by bugs nor by malice
  • Availability - the systems and the information in them must be available for use by the right people; that's why they exist in the first place.

Are you an ISP/webhosting facility where customers bring all sorts of PHP stuff and WordPress installations of various quality and patch levels, and then expose all of it to the internet? If that is the case, and you patch only quarterly, you are probably hosting some malware at this very moment. In this kind of environment, you'll want the ability to fast-track any relevant security patches, even if it costs you some uptime.

Or are you a compute farm for researchers who run simulations that can take days or weeks? Inbound internet access only with key-authenticated SSH or secure VPN? Then you're pretty well protected from external threats and should think mostly in terms of internal ones. If a possibility of different researchers spying on each other is not a real concern, environment reliability and uptime will be your main concerns. In this situation, your patching decisions can be very different.

In any case, you might want to maintain a smaller group of "testing" servers, which will get the quarterly patch set some time before the rest of the servers, so that if there turns out to be a bad patch, or a bad interaction between new patches and commonly-used applications, you have a chance to know it before it affects all your systems.

What if we have servers hosted on AWS whose OS is Amazon Linux? Can we patch Amazon Linux with an RHN Satellite server?

Some other considerations...

**ADDED 1/11/2021:** It's a good idea to disable EPEL on a Satellite server (or any other server) if you had it temporarily enabled. Keeping EPEL active on a Satellite server can pull in EPEL-related "upgrades" that cause severe issues; I spent hours doing "surgery" on a previous Satellite because of this, and now I only activate EPEL for specific binaries (EPEL's iostat or htop, for example), then disable it again. Scrub your system for out-of-date repositories in /etc/yum.repos.d, because these can cause you big issues. Also make sure your /root directory is not getting full. I helped one customer out of a serious issue where /root hit 100% during a kernel upgrade: after years of numerous upgrades there was not sufficient space left, /root filled up, and the kernel apparently didn't finish writing.
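As a minimal sketch (not an official Red Hat tool), a pre-patch sanity check along these lines can flag nearly-full filesystems and stale `.repo` files before yum runs. The 90% threshold and the 365-day age cutoff are illustrative assumptions:

```shell
#!/usr/bin/env bash
# Pre-patch sanity check (sketch): warn on nearly-full filesystems
# and on .repo files that have not been touched in a long time.
set -euo pipefail

# Print the use% of a mount point as a bare number, e.g. "42".
disk_use_pct() {
    df -P "$1" | awk 'NR==2 {sub(/%/, "", $5); print $5}'
}

# List .repo files under $1 not modified in more than $2 days.
stale_repos() {
    find "$1" -maxdepth 1 -name '*.repo' -mtime +"$2"
}

pct="$(disk_use_pct /)"
if [ "$pct" -ge 90 ]; then
    echo "WARNING: / is ${pct}% full - clean up before patching"
fi

# The directory and 365-day threshold are illustrative choices.
if [ -d /etc/yum.repos.d ]; then
    stale_repos /etc/yum.repos.d 365 | while read -r f; do
        echo "WARNING: stale repo file: $f"
    done
fi
```

Running this as a pre-step on each host (or via your automation) gives you a written warning trail instead of a surprise mid-upgrade.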

We have one customer who uses Ansible (not Tower) to patch their large network in rational stages. The patching has pre-steps, such as looking for non-standard, atypical yum repositories in /etc/yum.repos.d/ and either moving them to a known location on that system or disabling them; some repos will "get in the way" of normal updates. Of course, some will say they require [fill in the blank] repository, and if so, it ought to be regularly maintained (especially if it is a non-Red Hat repository). Some systems that legitimately require EPEL have the repository "shut off" by a cron job before patching (and each night at midnight), changing "enabled=1" to "enabled=0" in the repo file, so that it doesn't install conflicting rpms. Yes, this can easily happen; even if it is rare, it is enough to take this action in at least that environment. NOTE: "EPEL" here is just one example of many possibilities. The issue is not necessarily EPEL, but any atypical or unexpected repository that will throw yum update issues. Be aware of stray or antique yum repositories that suffer from a lack of upkeep or are simply outdated from never being updated (third party especially); they often cause terrible conflicts.
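The "shut off the repo before patching" step above amounts to flipping the enabled flag in the repo file. Here is a minimal sketch, demonstrated on a throwaway copy rather than the live /etc/yum.repos.d (the sed is the same either way; on systems with yum-utils, `yum-config-manager --disable epel` is the tidier equivalent):

```shell
#!/usr/bin/env bash
# Sketch: disable a yum repo by rewriting enabled=1 to enabled=0.
# Demonstrated on a scratch file; point REPO_FILE at
# /etc/yum.repos.d/epel.repo (as root) to do it for real.
set -euo pipefail

REPO_FILE="$(mktemp)"
cat > "$REPO_FILE" <<'EOF'
[epel]
name=Extra Packages for Enterprise Linux
enabled=1
gpgcheck=1
EOF

# Flip the flag in place; a cron entry running this nightly is
# what the environment described above uses.
sed -i 's/^enabled=1$/enabled=0/' "$REPO_FILE"

grep '^enabled=' "$REPO_FILE"   # -> enabled=0
```

The same one-liner works for any atypical repo, not just EPEL; keep a list of which files you flipped so you can re-enable them afterwards.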

There are some servers for one customer that work together and must be rebooted in a specific order, so the Ansible playbook/script ensures that happens.

Another stage in our patching is a yum clean all and yum repolist, so that when new patches are published through a content view, the cached information doesn't "get in the way" of the new patch set. While not the rule, there are enough exceptions where this is an issue to warrant doing it in that environment. Your mileage may vary.

Of course, as I think everyone has said, start small and widen the scope in a rational, sensible way. Start with VMware and take snapshots, especially where risk is involved. Evaluate the results, then take the next progressive step. Some patch the "test", "dev", and "production" segments of their networks, and even within that stair-step progression take additional steps to approach patching in a sane manner. We patched the Oracle servers with the Oracle DBAs involved, because they wanted to test things before and after; maybe that would work for you, maybe not. In any case, pre-coordination and pre-established agreements on timeframes help avoid surprises.

We have some other thoughts we can share; the list above is not exhaustive, and what is listed may or may not work for you.

Take lots of notes during your patching events, and look for what works sanely.

I'll post more in the near future.



Hey Ramesh,
The most important things were already written by Matti and RJ. I would like to point you to A Patchmanagement for RHEL, just to show you another example of how you could accomplish patch management even without a Satellite server.

You sort the servers into one of four phases, and the servers of each phase are patched at a specific due time. Between the phases is a week in which to discover problems that may have come with the most recent patches. Please feel free to look into it. Comments and questions are welcome.
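The four-phase, week-apart scheme can be sketched in a few lines: assign each host a phase and compute its due date one week after the previous phase's. The hostnames and base date below are made up for illustration; in practice you would assign phases by role and criticality (test before dev before production) rather than round-robin:

```shell
#!/usr/bin/env bash
# Sketch: four patch phases, one week apart.
set -euo pipefail

BASE_DATE="2021-02-01"          # illustrative start of the cycle
HOSTS=(web01 web02 db01 db02 app01 app02 batch01 batch02)

i=0
for h in "${HOSTS[@]}"; do
    phase=$(( i % 4 ))                              # phases 0..3
    due="$(date -d "$BASE_DATE + $((phase * 7)) days" +%F)"
    printf '%s\tphase %d\tdue %s\n' "$h" "$phase" "$due"
    i=$(( i + 1 ))
done
```

Feeding the resulting phase lists to your automation (Satellite host collections, Ansible inventory groups, or plain at/cron jobs) keeps the weekly stagger mechanical instead of manual. Note that `date -d` is GNU coreutils syntax.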

Best regards,

Hi Ramesh,

As mentioned before, you should take advantage of automating your patching process using Ansible Core. You can execute on-demand commands across your VM inventory to check health, status, and available resources on each system. You can also download pre-built Ansible roles from Ansible Galaxy and make any needed changes (why reinvent the wheel?).
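A hedged sketch of the kind of ad-hoc checks meant here (`ping`, `command`, and `yum` are real Ansible modules; the `inventory.ini` file and `rhel_servers` group are assumptions for illustration). With DRY_RUN=1 the wrapper only prints the commands, so you can review them before pointing them at real hosts:

```shell
#!/usr/bin/env bash
# Sketch: ad-hoc Ansible health/patch checks across an inventory.
# With DRY_RUN=1 (the default here) commands are printed, not run.
set -euo pipefail

DRY_RUN="${DRY_RUN:-1}"
INVENTORY="inventory.ini"   # assumed inventory file
GROUP="rhel_servers"        # assumed host group

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Reachability, disk status, and pending updates, respectively.
run ansible "$GROUP" -i "$INVENTORY" -m ping
run ansible "$GROUP" -i "$INVENTORY" -m command -a "df -h /"
run ansible "$GROUP" -i "$INVENTORY" -m yum -a "list=updates"
```

Set DRY_RUN=0 on a control node with Ansible installed to execute for real; the same pattern extends to whatever pre-checks your environment needs.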

Also, if you apply patches in multiple environments, you may want to freeze your available patch list in a Content View in Satellite to keep updates at the same level. Hope this helps. Thanks.


Hi. Where is the step-by-step process?

Hi all, we do quarterly patching each year. I now need the first-quarter patching information (which packages were updated or installed). Please let us know.

Hello. A question regarding the latest patch-level listings: I saw somewhere in RHEL a list of all of the RHEL patch levels and the associated architectures, but now I can't find it. Would anyone happen to know?

Ed Clarke,

I can't think of a "patch level" as such. If you've downloaded content from Red Hat to a Red Hat Satellite server you have proper access to and then distributed those patches (or if you patch directly from Red Hat), then you've patched to the "level" your systems see as "current" as of the date you downloaded and distributed the content to your own Satellite servers, or the date you patched systems directly connected to Red Hat.

There are also ways to present a custom content view pinned to a specific date or a specific release of RHEL, but that takes some work depending on your environment and scenario.

Please share some details behind your two-line question. Do you need to patch to a specific date, a specific kernel, a specific RHEL release? Please provide what is driving your question so we can tailor an answer that fits better.

Kind Regards,

Hello RJ, thank you for your quick reply. Of course, it's typical that right after I post a question, I find the answer :) So let me share what caused me to post it. I am a sysadmin for our IBM LinuxONE servers (z13 mainframe running Linux) here at AT&T. Our security office was checking patch levels and said that I was not current, but they were looking at the x86_64 architecture; my servers are s390x, so different kernel releases. Here are the kernel lists that I was looking for, for each architecture: s390x RHEL 7.x - x86_64 RHEL 7.x - Thanks, Ed

Hi Ed,

I am glad to hear you found the answer - that happens to me at times, or I find something I wrote previously here that I forgot about. Is there anything else we can assist?

Good luck/Regards,