How can we update a disconnected or air-gapped system (a system without an internet connection)?
Environment
- Red Hat Enterprise Linux (without internet connectivity)
- Red Hat Network (RHN)
- Red Hat Subscription Manager (RHSM)
- Red Hat Network Satellite
Issue
- How can I update a disconnected system?
- How can I update an air-gapped system?
- How can a system without an internet connection be updated regularly?
- How can offline installation or upgrade of packages be performed without connecting to RHN?
- How can I patch a system without an internet connection?
Resolution
Depending on the environment and circumstances, there are different approaches for updating an offline system.
Approach 1: Red Hat Satellite
For this approach, a Red Hat Satellite server is deployed. The Satellite receives the latest packages from the Red Hat repositories; client systems connect to the Satellite and install updates from it. More details on Red Hat Satellite are available here: The best way to manage your Red Hat infrastructure.
- Pros:
- Installation of updates can be automated.
- Completely supported solution.
- Provides selective granularity regarding which updates are made available and installed.
- Satellite can provide repositories for different major versions of Red Hat products.
- Cons:
- Requires purchase of a Satellite subscription, plus setup and maintenance of the Satellite server.
Approach 2: Download the updates on a connected system
If a second, similar system exists
- which is of the same product variant (Workstation for Workstation) and major release (RHEL 7 for RHEL 7)
- and which can be activated/connected to RHN,
then the second system can download the applicable errata packages. After downloading, the errata packages can be applied to the other systems. More documentation: How to update offline RHEL server without network connection to Red Hat Network/Proxy/Satellite?
- Pros:
- No additional server required.
- Cons:
- The update procedure is hard to automate and will probably be performed manually each time.
- A separate system is required for each architecture and major version of RHEL (such as 6.x).
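As a minimal sketch of this approach (assuming a connected RHEL 7 system of the same variant; the target directory is just an example, and on RHEL 6 the --downloadonly option requires the yum-plugin-downloadonly package), the download and apply steps could look like this. The run helper only prints each command, so the sketch is safe to execute as-is:

```shell
#!/bin/sh
# Dry-run sketch: download errata packages on a connected system, then
# apply them on the disconnected one. run() only echoes each command;
# remove it to execute the commands for real.
run() { echo "+ $*"; }

# On the connected system (same product variant and major release):
run yum update --downloadonly --downloaddir=/tmp/errata-rpms

# Transfer /tmp/errata-rpms to the offline system (USB media, etc.), then:
run yum update /tmp/errata-rpms/*.rpm
```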
Approach 3: Update with new minor release media
DVD media for new RHEL minor releases (e.g. RHEL 6.1) are available from RHN. These media images can be used directly on the system for updating, or served (e.g. via HTTP) and used by other systems as a yum repository for updating. For details, please refer to the kbase solutions with detailed instructions specific to the various RHEL major versions: for RHEL5, for RHEL6, for RHEL7, for RHEL8 and for RHEL9.
- Pros:
- No additional server required.
- Cons:
- Updates are restricted to the updated packages that are part of the minor release. Errata released after the minor release becomes available will only be contained in the next minor release.
- Fetching the update media and updating the systems is difficult to automate.
- The media only contain the base RHEL packages. They do not contain packages from the optional repository. This prevents the bundled download of packages from these channels as a media image.
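As a sketch of the repository setup for this approach (the mount point, file name and repo id are just examples), the mounted DVD can be exposed to yum with a definition like the following in /etc/yum.repos.d/. Note that on RHEL 8/9 the DVD contains separate BaseOS and AppStream directories, each of which needs its own repository section:

```ini
# /etc/yum.repos.d/rhel-dvd.repo  (example file name)
# First mount the media, e.g.: mount /dev/cdrom /mnt/rhel-dvd
# (or mount -o loop the ISO file)
[rhel-dvd]
name=RHEL minor release DVD
baseurl=file:///mnt/rhel-dvd
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
```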
Approach 4: Manually downloading and installing or updating packages
It is possible to manually download and install errata packages. For details, refer to this document: How do I download security RPMs using the Red Hat Errata Website?
- Pros:
- No additional server required.
- Cons:
- Consumes a lot of time.
- Difficult to automate.
- Dependency resolution can become very complicated and time consuming.
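A minimal sketch of the install step once the RPMs have been fetched from the errata website and copied to the offline system (the package file name and directory are placeholders; the run helper only prints the commands, so the sketch is safe to execute as-is):

```shell
#!/bin/sh
# Dry-run sketch: run() only echoes each command instead of executing it.
run() { echo "+ $*"; }

# Verify the signature of a downloaded package first
# (file name is a placeholder):
run rpm -K /tmp/downloads/example-package-1.0-1.el7.x86_64.rpm

# Install/update via yum so that dependencies among the local RPMs
# are still resolved:
run yum update /tmp/downloads/*.rpm
```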
Approach 5: Create a Local Repository
This approach is applicable to RHEL 5/6/7/8/9. It requires a registered server that is connected to the Red Hat repositories and is of the same major version. The connected system can use reposync to download all the RPMs from a specified repository into a local directory. Then, using http, nfs, ftp, or a local directory (file://), this can be configured as a repository which yum can use to install packages and resolve dependencies.
- Pros:
- Automation is possible.
- For development and testing environments, this allows a static (unchanging) repository which the dev systems can use to verify updates before the production systems are updated.
- Cons:
- Advanced features that Satellite provides are not available in this approach.
- Does not provide selective granularity as to which errata get made available and installed.
- A distinct system is required for each architecture and major version of RHEL (such as 6.x).
- The clients cannot version-lock to a minor version using a local repository. The repository server must version-lock before running reposync so that only the packages of the specified version are collected.
- The clients will not see any new updates until the local repository runs reposync -n to download new packages and, for RHEL 6 & 7, createrepo --update to create new metadata. The createrepo command should normally be avoided on RHEL 8 with createrepo_c versions lower than 0.16.2-1.el8; the known issues it causes can be resolved as described in How to add the modules information after cloning the RHEL8 repository.
- The clients must run yum clean all to clear out old metadata and collect the new repository metadata after any changes in the reposync'd metadata.
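The server-side sync and client-side refresh described above could be sketched as follows (the repository id and export path are examples, and --download-metadata requires a sufficiently recent yum-utils; the run helper only prints each command, so the sketch is safe to execute as-is):

```shell
#!/bin/sh
# Dry-run sketch: run() only echoes each command instead of executing it.
run() { echo "+ $*"; }

# On the connected server: mirror the repository, newest packages only,
# including its metadata.
run reposync -n --download-metadata --repoid=rhel-7-server-rpms -p /srv/repos
# RHEL 6/7 only: regenerate the repo metadata after each sync.
run createrepo --update /srv/repos/rhel-7-server-rpms

# On each client, after the repository content has changed:
run yum clean all
run yum update
```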
Checking the security errata
To check the security errata on a system that is not connected to the internet, download a copy of the updateinfo.xml.gz file from an identically registered system. The detailed steps can be checked in the link shared in Approach 5.
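A sketch of wiring the copied updateinfo into a local repository so that yum can report security errata offline (the paths are examples; modifyrepo comes with the createrepo package, and on RHEL 5/6 the yum updateinfo subcommand requires yum-plugin-security; the run helper only prints the commands, so the sketch is safe to execute as-is):

```shell
#!/bin/sh
# Dry-run sketch: run() only echoes each command instead of executing it.
run() { echo "+ $*"; }

# Insert the updateinfo copied from the registered system into the
# local repository's metadata:
run gunzip /tmp/updateinfo.xml.gz
run modifyrepo /tmp/updateinfo.xml /srv/repos/rhel-7-server-rpms/repodata

# Clients can then list pending security errata without internet access:
run yum updateinfo list security
```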
Root Cause
Without a connection to RHN/RHSM, the updates have to be transferred over other paths, some of which are hard to implement and automate.
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.
23 Comments
"Approach 3: update with new minor release media" will not work with RHEL 6. Many packages (over 1500) in the "optional" channels simply are not present on any iso images. There is an open case, but the issue will not be addressed before RHEL 6 Update 4 (and possibly never).
I agree with Randy Zager: "optional" packages should be available offline along with the other channels which are not available in ISOs.
Can Approach 5 "additional server, reposync fetching" be applied with RHEL 7 servers?
However, won't I need to stand up another RHEL 7 server in addition to the RHEL 6 server?
Correct. When using an external server to reposync updates, you will need one system for each Major Version of RHEL that you want to sync packages from.
RHEL 7 does not have access to RHEL 6 repositories just as RHEL 6 can't access RHEL 7 repositories
What I am looking for is the instructions on the reposync install AND how to update offline clients.
Do I have to manually install apache?
You will need: a RH Satellite or RH Proxy server, an internal yum server, and a RHN client for each OS variant (and architecture) you intend to support "offline". E.g. supporting 6Server, 6Client, and 6Workstation for i686 and x86_64 would normally require 6 RHN clients, but only three RHN clients would be necessary for RHEL7, as there's no support for i686 architecture
Yum clients can (according to the docs) use nfs resource paths in the baseurl statement, so apache is not strictly necessary on your yum server, but most people do it that way...
Each RHN client will need: local storage to store packages downloaded via reposync (e.g. "reposync -d -g -l -n -t -p /my/storage --repoid=rhel-i686-workstation-optional-6"). You'll need to run "createrepo" on each repository that gets updated, and you'll need to create an rsync service that provides access to each clients' /my/storage volume
Your internal yum server will need a cron script to run rsync against your RHN clients so you can collect all these software channels in one spot.
You'll also need to create custom yum repo files for your client systems (e.g. redhat-6Workstation.repo) that will point to the correct repositories on your yum server.
I'd recommend you NOT run these cron scripts during normal business hours... your sys-admins will want a stable copy so they can clone things for other offline networks.
If you're clever, you can convince one RHN client system to impersonate the different OS variants, reducing the number of systems you need to deploy.
You'll also most likely want to run "hardlink" on your yum server pretty regularly as there's lots of redundant packages across each OS variant.
While logged in, I can't access https://access.redhat.com/knowledge/node/33279 (Update a Disconnected Red Hat Network Satellite). I get "access denied".
Could you add a link to https://access.redhat.com/solutions/1443553 to Approaches 2 and 5? Using containers makes it possible to consolidate the RHEL 6, 7 and 8 reposync processes onto one server.
Kazuo, this is a good detail. Thanks.
I'm curious why number 5 from above, "Create a Local Repository" isn't applicable to RHEL8? Does AppStream somehow break the ability to reposync or createrepo?
It actually does work for RHEL 8. I had not noticed this and will have that updated shortly. The steps for RHEL 8 are slightly different but are noted in the 23016 solution.
What type of automation do you mean when it comes to Approach 5? Could you give some examples?
This is especially in contrast to Approach 4: "manual download" requires much work by hand, downloading to a desktop with a browser and then distributing to the system(s) to be patched. Approach 5 can, for example, run reposync via a daily cron job to fetch the newest errata. rsync or similar could also be used to distribute a synced repository to multiple other systems on the intranet, which also allows choosing transfer times when the network is not loaded with other traffic.
These solutions are worthless because the only ones that will not involve a lot of manual effort require an instance on the outside world. The key point of this question involves systems that are DISCONNECTED from the internet. It should not require a redhat instance to access the update repositories. There should be a way to authenticate and pull down the repositories without placing my redhat systems on the internet.
Thank you for the feedback, but I fear that it will not be read by the specialists who can influence our solutions in this regard. I recommend opening a case and bringing up the environment/requirements for which you are looking for a solution; then, if nothing fits that situation, a request for enhancement might be opened.
I have been using a variation of Approach 2 for a few years now to pull packages for my offline system and it works flawlessly. Stand up a RHEL system with online access only for downloads and you can use "yum install --downloadonly --downloaddir= " to pull the packages down locally. Put all the RPMs in a tarball, copy it to some physical media, and take it to your offline systems for the update. You can create your own local repo and put those RPMs in it. If you are really creative, you can put this into a bash script or an Ansible ad-hoc script to automate the process.
Read this for more details: https://access.redhat.com/solutions/10154
Would it be possible to run a local »rpm -qa« and, with that list in hand, download the needed packages?
When running »dnf upgrade« it must be doing something like that in the background. There might be several tasks involved, like first downloading »repomd.xml«, then doing some calculation locally, and so on.
This would only work if you had no modules installed.
Other exceptions would be if a package was obsoleted; rpm would not give you that package. You would also not be able to use "--security" or any groups, unless you took the extra steps to make a local repository. In which case, if you're already downloading a lot of packages and building repodata, you might as well use a local repo and reposync -n --download-metadata to ensure you're not missing updates or modularity.
Thanks, but yum module list --installed requires internet access. I think I have to stick with Approach 5. It would also be much easier for a newbie to do the upgrade in the future.
Even offline, if dnf has ever installed a modular package, it should keep track of that and display it under @modulefailsafe. This would in fact be the problem: if you try to update a modular package without knowing the new module definitions, it would just filter out any updates, because the old modular data doesn't know about the new packages. Approach 5 is the best in most cases as it gives you all the updates, helps other systems, and lets you install packages you may need later. It just takes a little more space to hold all the packages.
To get from "rpm -qa" output to something you can feed into "dnf install" needs many modifications to the package names and is hard to accomplish. It would be easier to have just a list of packages+versions and then use that on all other systems, i.e. not use one already installed system as the template but this "template list". The snapshot method of creating a copy of the repo (Approach 5) is also great, indeed. Maybe we should consider hinting at methods which detect duplicate files on the file system and create hardlinks for them; that would greatly help with the storage requirements in such situations.