Local CDN Server

3.1. Configuring Satellite to Synchronize Content with a Local CDN Server

Sat 6.10
Connected Sat
Disconnected Sat

Do you really need a CDN setup on the disconnected server if you are moving the content updates manually anyway?

Responses

Hi Dennis,

We use a Satellite server connected to the Red Hat CDN to acquire content and do a content-view export, which is tremendously easier than doing a number of reposyncs.
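
In rough terms, the export step is a single hammer command on recent versions (6.9 and later, if I recall); the organization, content view, and version below are only placeholders, so adjust to your environment:

# run on the CONNECTED Satellite
hammer content-export complete version \
  --organization="Example Org" \
  --content-view="rhel8-server" \
  --version="5.0"
# the resulting archive lands under /var/lib/pulp/exports/ by default;
# the command output shows the exact path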

We bring our content-view export over as I've described in previous discussions and ingest it using the process we've also described there. Doing this presents the repositories once you complete those steps.

The reason to take the updates from the connected Satellite, bring them to the isolated Satellite, and import them using the methods we've previously discussed is that you can then create the specific content views for server, workstation, etc.

When we bring the exported content view in for ingest, we bring it to the DISCONNECTED Satellite (per the documentation/procedure we've previously discussed) and run the ingest process described there. In our case, we park it on the Satellite under /pub/content and tell the Satellite (as mentioned in the video referenced in previous discussions) to look there. We then do a synchronization to ingest the exported content view.
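
In rough terms, that park-and-ingest step looks something like this on the disconnected Satellite (the paths, hostname, organization, and product names are placeholders, and the CDN setting can also be changed from the web UI instead of hammer):

# copy the exported content to a web-accessible location on the disconnected Satellite
cp -a /media/transfer/export/* /var/www/html/pub/content/
restorecon -R /var/www/html/pub/content/
# tell the Satellite to look there instead of at the Red Hat CDN
hammer organization update --name "Example Org" \
  --redhat-repository-url "http://satellite.example.com/pub/content"
# then synchronize to ingest the exported content view
hammer product synchronize --organization "Example Org" \
  --name "Red Hat Enterprise Linux for x86_64"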

Not sure if this is clear, but basically, to have the patches available, park the data at a web-accessible location your Satellite can see (we do it on the Satellite itself, at the location mentioned above). We have a huge amount of storage to accommodate this.

Regards,
RJ

RJ Hinton,

so back in the 6.3 days, we ran an export on the connected Sat server, then carried it over on an external drive to a web server that was connected to a number of the disconnected networks we support. We created a folder to store the exports and softlinked that folder into each disconnected network. We then configured the CDN settings on each disconnected Sat server to point to the web server, and the disconnected Sat servers were able to pull the content from there. The idea was to do ONE export/incremental export copy to the web server and let each disconnected Sat server pull the content, versus making a separate copy for each network.
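
In rough terms, the web-server side of that old setup was something like this (hostnames and paths are made up for illustration); each disconnected Sat server's CDN URL was then pointed at its network's path on that web server:

# on the web server: ONE copy of the export, softlinked into each
# disconnected network's document root
mkdir -p /var/www/exports/current
ln -s /var/www/exports/current /var/www/html/network-a/cdn
ln -s /var/www/exports/current /var/www/html/network-b/cdn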

In 6.11, it doesn't appear we can do it this way anymore. From what I see, it expects an "export" stored in /pub/content, or another Sat server on the network, or the Red Hat CDN (connected). With these options, I think I can still store the content on the web server, but I'll need a cron job to pull the content into /pub/content and then another cron job to do the import. Or am I totally out to lunch?
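
What I have in mind is roughly this (hostnames, paths, and the organization name are placeholders, and I still need to confirm the exact directory the import expects):

#!/bin/bash
# cron job: pull the latest export from the web server, then import it
set -euo pipefail
DEST=/var/lib/pulp/imports/latest
rsync -a --delete webuser@webserver.example.com:/var/www/exports/current/ "$DEST"/
chown -R pulp:pulp "$DEST"
hammer content-import version --organization "Example Org" --path "$DEST"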

Interesting. I'm on 6.10 and export a content view, manually move it over to the disconnected satellite, then import the content view there. I do this for each content view individually, around 12 content views. After a successful import, I delete the exported/imported file to save disk space. Not sure if that's the best practice.

I'm pretty sure we've been placing our exports under pub/content/ on our disconnected satellite servers ever since 6.x was released. We do this on the satellite itself; I think I may have included space sizes in other discussions.

We have so much space that I do not immediately delete the content view at /var/www/html/pub/content/; instead I do a cp -alf source target (same file system) and then clobber the source upon the next update. The "l" in cp -alf creates hardlinks. Not doing "mv" and instead using "cp" retains the SELinux contexts and averts the need to relabel the destination.

/dev/mapper/san1-var                  xfs       2.0T  1.1T  915G   55%    /var
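
Concretely, the rotation looks something like this (the paths are just examples):

# keep a hardlinked snapshot of the current export before the next update;
# the -l in cp -alf makes hardlinks, so no extra space is used on the same filesystem
cp -alf /var/www/html/pub/content/current /var/www/html/pub/content/previous
# then clobber the source with the next export, copying with cp (not mv)
# so the files get the right SELinux contexts without a relabel
cp -a /media/transfer/new-export/* /var/www/html/pub/content/current/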

This below would be our CDN satellite, where we export the content view - we can probably remove some things. Yup, excessive storage overkill.

/dev/mapper/san1-export            xfs       5.0T  2.7T  2.4T   53%   /var/export

Regards,
RJ