Satellite 6.2 Airgap Installation

I have been trying to decipher the documentation on airgapped Satellite, with little success.

I have a Satellite Server (RHEL 7, Satellite 6.2) connected to the Internet, and another Satellite Server (RHEL 7, Satellite 6.2) on the disconnected network.

Following the instructions in Appendix C of the Content Management Guide, I have a drive with 110GB of exported content from the connected server, but the next steps are less than clear.

I attempt to enable a repository on the disconnected server, but the error message complains "/content/dist/rhel/server/7/7.3/x86_64/os does not seem to be a valid repository. Please try refreshing your manifest"

I have imported the manifest on the downstream server, but none of the repositories can be enabled through the web interface.

Please help.

Responses

Hello Greg

First thought: are the Organization and Location set the same on both Satellites? I have not tested this entire procedure myself, but I am willing to do so if no one else can help solve this.
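
If it helps, a minimal check (assuming hammer is configured on both servers) is to list them on each side and compare the output:

hammer organization list
hammer location list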

Greg,

There are a lot of options available for an airgap content sync. The method that has worked best for me in the past is creating a staging directory to perform a sync against. I found the best location is the web public directory /var/www/html/pub/ of the satellite server itself.

Once you have staged that content where the server can readily access it, you need to point the Satellite server to the public directory to act as the CDN. This is under Content > Red Hat Subscriptions, then clicking the Manage Manifests button. Once there, change the 'Red Hat CDN URL' to the Satellite server's public directory http://(satellite_server_FQDN)/pub and begin your import.

I have seen situations where people create a specific sub-directory for the Satellite import staging directory, such as /var/www/html/sat_import/, and set the CDN URL to match: http://(satellite_server_FQDN)/sat_import

Once that is set correctly, you should be able to perform a content sync.
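
As a minimal sketch of the mechanics (the paths are illustrative; the exact layout you stage under pub matters and is covered further down this thread):

# Stage the exported content somewhere Apache already serves, e.g. the pub directory
rsync -a /mnt/export_drive/ /var/www/html/pub/
# Then, in the web UI under Content > Red Hat Subscriptions > Manage Manifests,
# set 'Red Hat CDN URL' to http://(satellite_server_FQDN)/pub and run the import/sync.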

I understand that is a loose explanation, but hopefully it can get you started down a working path.

Have you looked at setting the CDN URL as described in C.7.2. Importing a Content View as a Red Hat Repository of https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html/content_management_guide/using_iss ?

What should the directory tree structure look like under the CDN URL?

I have 10 directories on my export drive that all look something like "Default_Organization-Red_Hat_Enterprise_Linux_Server-Red_Hat_Enterprise_Linux_7_Server_-_RPMs_x86_64_7_3" with great ugly directory trees under each of them.

What you should be looking for is the directory 'content' alongside the file 'listing'. If you reviewed my previous comment, your staging directory should have that directory and file available. You have two options to accomplish this:

  1. Copy the "Default_Organization....." directory into the /var/www/html/pub directory for export, and set the CDN URL to match the location of the 'content' directory and 'listing' file (i.e. http://(satellite_FQDN)/Default_Organization.../Library).

  2. Copy only the 'content' directory and 'listing' file into the /var/www/html/pub directory, and set the CDN URL to the pub directory (i.e. https://(satellite_FQDN)/pub).

You would have to repeat this for each of the 10 exports you have created.

A more advanced option - You can (usually) safely merge all 10 directories into one directory. What you may have to do is manually update the 'listing' file within each sub-directory to include the newly combined directories. This will allow you to synchronize all 10 directories previously noted at once.
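
For that advanced option, here is a rough sketch of the merge, assuming the ten exports sit under a hypothetical /mnt/export_drive and that each 'listing' file is simply a plain-text list of the directory names beside it (check one of yours to confirm the format before editing anything):

# Merge every export's 'content' tree (and top-level 'listing' file) into pub
for dir in /mnt/export_drive/Default_Organization-*; do
    rsync -a "${dir}"/Library/content "${dir}"/Library/listing /var/www/html/pub/
done
# Then spot-check the 'listing' files inside the merged tree; each one should
# name every sub-directory that now exists alongside it, e.g.:
ls /var/www/html/pub/content/dist/rhel/server
cat /var/www/html/pub/content/dist/rhel/server/listing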

QUICK SUMMARY

hammer repository export ... creates ISOs whose contents begin with the path /*ORGANIZATION*/Library/content/.... After reconfiguring your CDN to point to http://your-satellite/pub, you must copy only the path STARTING from the /content/ directory into the /var/www/html/pub directory.

CDN value: "http://your-satellite/pub"

Correct Path: /var/www/html/pub/content/path/to/rpms

Incorrect Path: /var/www/html/pub/*ORGANIZATION*/Library/content/path/to/rpms
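
So for a single export ISO, mounted at a hypothetical /mnt/iso (replace Your-Organization with your organization name), the copy boils down to:

mkdir -p /mnt/iso
mount -o loop /path/to/export.iso /mnt/iso
# Copy the tree STARTING at content/, skipping the ORGANIZATION/Library prefix
rsync -a /mnt/iso/Your-Organization/Library/content /var/www/html/pub/
umount /mnt/iso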

PAINFUL DETAILS

I have had the same issue for a long time, and I revisited this problem, did some deep digging, and FINALLY got this working. I can confirm the documentation is NOT correct, and even if it were, it is lacking in detail at best.
The remainder of this post is going to go into painful detail, because I have not seen anything anywhere that explains thoroughly how the export/import process actually works.
All documentation references will be from the content management guide.

Chances are, everyone reading this discussion is working in a secure environment, where having any server connected to the internet is simply not allowed.

All competent admins have come to the same conclusion: we need two Satellite servers. One connected to the internet that will download all the content, and a disconnected Satellite server that will consume this content. We will export each repository to a content ISO, and import that ISO to our disconnected Satellite server.

I previously had a long ticket open with Red Hat, where they basically recommended that you NOT export content views, but instead export each individual repository, import the repositories, and build your content views on the disconnected Satellite. A bit inconvenient, but still manageable.

Here is the code to export your repositories to DVD-sized ISOs (because I'm shocked you can't export all repositories):

#!/bin/bash
# Ensure you have first followed appendix C.5. Configuring ISS
# WARNING /var/cache/pulp will grow significantly
ORGANIZATION="Your-organization"
REPOSITORIES=$(hammer repository list --organization "${ORGANIZATION}" | cut -d '|' -f 1 | grep '^[[:digit:]]' | sort -n)
INCREMENTAL=0 # Set to 1 if you are exporting incrementals
INCREMENTAL_DAYS=7 # how many days since your last export
SINCE=$(date -d "-$INCREMENTAL_DAYS days" +%Y-%m-%dT%H:%M:%SZ)

for id in ${REPOSITORIES[@]}; do
    if [[ "${INCREMENTAL}" -eq 1 ]]; then
        hammer repository export --id "${id}" --export-to-iso 1 --iso-mb-size 4500 --organization "${ORGANIZATION}" --since "${SINCE}"
    else
        hammer repository export --id "${id}" --export-to-iso 1 --iso-mb-size 4500 --organization "${ORGANIZATION}"
    fi
done

Once this is complete, you will see several directories which contain your ISO exports in /var/lib/pulp/katello-export.

You will then burn these ISOs to DVD, Blu-ray, or an external/internal drive if you're lucky. After copying all this content to your Satellite server, you will have to mount each ISO and copy the contents to /var/www/html/pub.

Here is where most people (myself included) make the 'mistake'. Following what the documentation tells us, we copy the content to that location, so the directory ends up as something like /var/www/html/pub/*ORGANIZATION*/Library/content/whole/lot/of/directories/*.rpm

The problem is that Red Hat has default repository sets, and once you run the hammer command on them to find out where they actually expect to get their content, you can see why this won't work:

hammer product info --name "Red Hat Satellite" --organization "Your-organization" # Satellite product randomly chosen

**OUTPUT SNIPPET**
15)Repo Name:    Red Hat Satellite 6.2 (for RHEL 7 server) (RPMs)
   URL:          /content/dist/rhel/server/7/7server/$basearch/satellite/6.2/os
   Content Type: yum

This command will output the URL where each repository expects its content to be located. You will notice that the root directory it begins with is "/content". If you have configured your CDN to point locally at http://your-satellite/pub, then you can see why you need to have your content stored in the 'content' directory immediately under pub.
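
A quick way to verify that mapping (a sketch only, assuming your CDN URL is http://your-satellite/pub, that you exported the repository shown above, and that $basearch resolves to x86_64) is to request the repository's metadata index through the same URL Satellite will use:

# Should return HTTP 200 if the content landed in the right place
curl -I http://your-satellite/pub/content/dist/rhel/server/7/7server/x86_64/satellite/6.2/os/repodata/repomd.xml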

The following code is the CORRECT way to load your exported content into your disconnected Satellite server:

#!/bin/bash

ISO_DIR=/path/to/export/isos
MOUNT_DIR=/mnt/tmp
ORGANIZATION="Your-Organization"
WEB_DIR=/var/www/html/pub/

if [[ "${EUID}" != "0" ]]; then
    echo "You must be root to run this script."
    exit 1
fi

if [[ ! -d "${MOUNT_DIR}" ]]; then
    mkdir -p "${MOUNT_DIR}"
fi

for iso in $(find "${ISO_DIR}" -type f -name "*.iso"); do
    echo "Mounting ${iso}"
    if ! mount -o loop "${iso}" "${MOUNT_DIR}" > /dev/null 2>&1; then
        echo "Failed to mount ${iso}; skipping" >&2
        continue
    fi
    echo "Syncing ${iso}"
    rsync -a "${MOUNT_DIR}"/"${ORGANIZATION}"/Library/* "${WEB_DIR}"
    echo "Unmounting ${iso}"
    umount "${MOUNT_DIR}"
done

rmdir "${MOUNT_DIR}"

This should put your content ISOs into the directory structure that Satellite expects.

The code examples used here were hand-typed from my existing solutions that work; however, I have not validated them for typos. If this doesn't make much sense, I'm sorry, I typed it pretty quickly. I just don't want anyone else going through something that is so easy to fix, but becomes a nightmare due to insufficient documentation.

Thanks for the very informative post. Note that you can export the special Default Organization View (that's the 'all the repos' view). And with regard to how your export should be structured, see Subscription-manager for the former Red Hat Network User: Part 7 - understanding the Red Hat Content Delivery Network

Rich,

Reading that blog series is almost as exciting as the first time I learned how to properly pipe find to xargs. I wish I had started off by reading those posts first. Might've saved me one or two headaches.

Using Hammer you can do an incremental export of the Default Organization View. I'm struggling to find an easy way to import the view on the command line, without going down the path of keeping a separate copy in the web server.

Thoughts?

With an export of the Default Org View, you basically have a full export of the CDN, so you can sync it just as if you were internet connected. Your import process (the first time) will be as follows (a hammer-based sketch is shown after the list):

  • stage the content somewhere the disconnected Satellite can access.
  • change your CDN URL to http://destination/pub/cdn-import/ (or whatever it is called)
  • Import Manifest
  • Select repos (from the Content->Red Hat Repositories page) and sync.
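
A rough hammer equivalent of those steps, as a sketch only: it assumes your hammer version supports --redhat-repository-url on 'organization update', and every organization, product, and repository-set name below is a placeholder you would replace with your own:

# Point the organization's CDN at the staged content
hammer organization update --name "Your-Organization" --redhat-repository-url "http://destination/pub/cdn-import/"

# Import the manifest generated for the disconnected Satellite
hammer subscription upload --organization "Your-Organization" --file /path/to/manifest.zip

# Enable a repository set and sync it (repeat per repository you need)
hammer repository-set enable --organization "Your-Organization" \
  --product "Red Hat Enterprise Linux Server" \
  --name "Red Hat Enterprise Linux 7 Server (RPMs)" \
  --releasever "7Server" --basearch "x86_64"
hammer repository synchronize --organization "Your-Organization" \
  --product "Red Hat Enterprise Linux Server" \
  --name "Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server"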

When you do your incremental export, the import process becomes (see the sketch after this list):

  • delete the original import contents (from http://destination/pub/cdn-import/).
  • copy the incremental export to the sync directory (which contains only the new RPMs and new repodata).
  • Sync your repos.
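
A minimal sketch of that refresh, assuming the incremental export has already been carried over to a hypothetical /path/to/incremental-export and that /var/www/html/pub/cdn-import/ is the staging directory from the first import:

# Replace the staged content with the incremental export
rm -rf /var/www/html/pub/cdn-import/*
rsync -a /path/to/incremental-export/ /var/www/html/pub/cdn-import/

# Re-sync the repositories that are already enabled (repeat per repo)
hammer repository synchronize --organization "Your-Organization" \
  --product "Red Hat Enterprise Linux Server" \
  --name "Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server"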

Do you have any suggestions on the best way to keep the CDN whole, as opposed to just the new packages? That way, when we have problems, we have all the updates available. I'm just a little gun-shy since we've had to rebuild and reinstall the Satellite a few times.

Thanks.

Hello, I have raised a bug[1] against the Content Management Guide to get this subject reviewed. If anything is incorrect, rather than missing or vague, we should fix that ASAP, even if it needs a separate bug. Feel free to comment in the bug and follow the progress.

Thank you

[1] Bug 1455785 - Review Inter-Satellite Sync procedures

Hello

An update to Exporting Content View Version to a Directory has been made.

Thank you

Luke Pafford -- THANK YOU

(I found out about Rich Jerrido's method later, after my public-facing Satellite developed an issue, so I couldn't use it immediately; I had a channel-by-channel dump instead.)

Your post here was pure gold. I made the mistake of taking the Red Hat official documentation at face value, a sad, sore mistake. In principle I used your method, but did not use ISO files (I used Appendix D in the guide, exporting to one directory and carrying it on a massive hard drive).

On the RECEIVING Satellite where I imported the content, I had rsync'd it to a directory, only to discover what Luke mentioned, so I did a "cp -alf" of everything under each "Library" directory in the manner Luke specified (hat tip to my TAM, who told me about that method of using the "cp" command because it's quick, nearly immediate, as it uses hard links).

So I did a rapid cp using hard links of all content under Default_Organization*/*/*/content to /var/www/html/pub/, which did just as Luke mentioned (using yes | cp -alf source target, the arguments being "archive", "hard link", and "force"). After that, I pointed my CDN, just as Luke described, to http://redactedsatelliteserver.fqdn/pub/ and every channel appeared.
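
For anyone reproducing that, here is a rough sketch of the hard-link copy, assuming the export has already been rsync'd onto the same filesystem that holds /var/www/html/pub (hard links cannot cross filesystems), under a hypothetical /srv/sat_dump, and that the content directories sit at the depth shown in the glob (adjust it to your own dump):

# -a archive, -l hard-link instead of copy, -f force: near-instant and uses
# no extra space, because only new links are created, not new file data.
for c in /srv/sat_dump/Default_Organization*/*/*/content; do
    yes | cp -alf "${c}" /var/www/html/pub/
done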

The documentation has not improved at all. They should take Rich Jerrido's method and push that as the official method. Matthew Sawyer was correct in principle, yet Luke's method added the necessary details and gave sufficient material to actually be useful on a tight deadline, when I didn't have the luxury of redoing the export using the method Rich Jerrido explained (looking forward to that method).

I'm hoping the documentation or Bugzilla folks take the Bugzilla Stephen Wadeley submitted seriously, because those with disconnected Satellites who carry their content exports over from their public-facing Satellite SERIOUSLY need documentation they can rely on.

Thank you Luke for your response. I followed the steps indicated in your post, moving from pub/organization/content to pub/content. However, I encountered a new issue: when I try to enable the Red Hat Enterprise repository whose contents I uploaded to the offline Satellite, I get the error in the GUI "no valid metadata files found for /content/dist/rhel/server/7/7Server/x86_64/os". Have you encountered a similar error, or do you have a suggestion on how to resolve it?

Thank you.

Carlo,

I haven't looked at Satellite in a while, but based off the error message you described, it appears as if there is an issue with your repositories, and not Satellite.

Before I go any further, the best thing you can do when working on repo management is to install the 'yum-utils' and 'createrepo' packages. It will be in your best interest to inspect the contents of these packages and learn all the commands, as they are incredibly useful. To inspect the contents of the packages:

yum -y install createrepo yum-utils
rpm -ql createrepo
rpm -ql yum-utils

Every single yum repository needs to have a 'repodata' directory within it. So for your repo 'content/dist/rhel/server/7/7Server/x86_64/os' there should be a directory called 'repodata' under 'os'.

Within the repodata directories are compressed xml files that contain all the metadata for the repository. 'Regular' repo metadata should exist in the 'filelists.sqlite', 'primary.sqlite', and 'other.sqlite' files.

Group metadata ( yum grouplist works because of this metadata) is contained in the 'comps.xml' file.

When working with Red Hat content, they also include security errata, which is contained in the 'updateinfo' file.

With all that said, if you're importing/exporting repos according to the Satellite documentation, they should take care of all this management for you. Ensure that the repodata directory exists, and it contains all the correct files.
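
A quick check along those lines (assuming your content is staged under /var/www/html/pub as discussed earlier in the thread):

# Every repo you plan to enable should have a repodata/ directory containing a
# repomd.xml index; list them and inspect one.
find /var/www/html/pub/content -type d -name repodata
cat /var/www/html/pub/content/dist/rhel/server/7/7Server/x86_64/os/repodata/repomd.xml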

You may also have something as simple as incorrect permissions on the files:

# The metadata has to be readable by the web server; on disk the repo lives under pub
repo=/var/www/html/pub/content/dist/rhel/server/7/7Server/x86_64/os
chmod 755 "${repo}/repodata" && chmod 644 "${repo}"/repodata/*

Thank you again, Luke. Based on what you mentioned, I figured out that the last ISO from the repo I imported from the connected Satellite has the repodata files. I was trying to test with the first ISO only. With the information you provided, I noticed repodata is missing from the first ISO, and I guess the last one is the one that has all of them. I will test and post later whether that worked. Thanks.

Luke, once again I followed your recommendation and looked for the repodata/ directory; it was in the last .iso of the repo I was trying to synchronize to the downstream Satellite. What I did was copy all RPMs from each ISO into /var/www/html/pub/content/dist/rhel/server/7/7Server/x86_64/os/, and when I mounted the last .iso of the repo I was importing from the connected Satellite, I also copied the /repodata/ directory. I then went to enable the repo in the GUI and it worked. Problem solved. Thanks again

Has anyone ever made this work? There are so many different ways of doing this and so far nothing is concrete and working. I need to get this to work. Using the default content view as suggested in places resulted in only 6.9 and 6.10 being exported. We also have 7.4 and 7.5 but they didn't make it. I made an export content view that includes everything and that didn't work either.

I made it work, Matthew. The way I upload content to the disconnected Satellite is one repo at a time, and that means getting rid of the content directory prior to uploading the next repository. As of right now I only manage 2 versions. Hope my explanation clicks. I know the struggle.

63 repositories at the moment. I will need to learn how to export 1 first.

You can export the repositories from the connected Satellite to ISO files, then manually transfer those ISOs to the offline Satellite. Once you have transferred the ISOs, the next step is to mount each ISO and copy its content/* directory to /var/www/html/pub/ on the offline Satellite; that is the content I was talking about in my previous comment. You only copy the full directory tree from the first ISO you mount; for the next ISOs, you traverse to the last directory and copy it into /var/www/html/pub/content/'last directory where the RPMs are found'. Once you are done transferring the contents, synchronize the repo on the GUI side; it will take some time if it is the first time you do it. When that finishes, repeat the steps for each of the repos. Caveat: the last ISO of each repo has the database of all the RPMs, and that is something I found documented nowhere except in this post thread. Hopefully it helps.

I made some edits to the last comment. It might look confusing, but once you do it a couple of times it will become natural, and you can write a bash script to automate it (a rough sketch follows).
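
Building on Carlo's description, here is a rough sketch of such a script for ONE repository's ISOs, assuming they sit in a single directory and are named so that they sort in order, and that each ISO has the same Your-Organization/Library/content layout as the export script earlier in the thread (all paths are placeholders):

#!/bin/bash
# Import every ISO belonging to one exported repository into the pub tree.
ISO_DIR=/path/to/one/repo/isos
MOUNT_DIR=/mnt/tmp
ORGANIZATION="Your-Organization"
WEB_DIR=/var/www/html/pub

mkdir -p "${MOUNT_DIR}"
for iso in "${ISO_DIR}"/*.iso; do    # relies on the ISOs sorting in order
    mount -o loop "${iso}" "${MOUNT_DIR}"
    # Copy the tree starting at content/; the last ISO also supplies repodata.
    rsync -a "${MOUNT_DIR}/${ORGANIZATION}"/Library/content "${WEB_DIR}"/
    umount "${MOUNT_DIR}"
done
# Fix ownership as noted below, then sync the repo from the GUI (or hammer).
chown -R foreman:foreman "${WEB_DIR}"/content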

Minor detail: once you have transferred all RPMs from each of the ISOs for each repo, do a chown -R foreman:foreman /var/www/html/pub/*

By the way, the instructions in the Red Hat Satellite (rol.redhat.com) instructional material made more sense than the official documentation for a disconnected (or, as you call it, airgapped) install. I installed several "disconnected" Satellites this way, and they all worked fine. That being said, I'm not sure if the instructions for 6.3 are different.

The link is only for people enrolled in the training material; do you know any other place where I could get the same info?

I was able to make the instructions in Appendix D work.

That is the appendix for content management, right?

Has anyone tried doing a custom SSL certificate in a Capsule Satellite installation?
