Can I use an NFS mount as the /tmp/export directory on Satellite 6.2 for import purposes?


I am ready to run hammer import on Satellite 6.2 now that I have finished the export from Satellite 5. However, instead of keeping spacewalk_export.tar.gz in /tmp on Satellite 6 as Red Hat suggests, I have created an NFS mount at /mnt/Sat_export/, under which I have spacewalk_export.tar.gz. My question is: can I use this NFS mount as an equivalent to /tmp after changing the Apache permissions? If so, what needs to be done on the SELinux side?
Also, can I use a directory under /var instead of /tmp for the import on Satellite 6? Thanks.

Responses

I have tested this and NFS works. You just need to make sure the Apache permissions on the directory and the SELinux settings are correct.
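For what it's worth, a quick way to check both on the asker's mount (assuming the audit tools are installed; the path and filename come from the question above):

# Confirm the Apache user can actually read the archive over NFS
sudo -u apache ls -l /mnt/Sat_export/spacewalk_export.tar.gz

# If that fails, look for SELinux denials
ausearch -m avc -ts recent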

I'd highly recommend against mounting anything on /tmp (in particular) or /var itself, but you could mount to a subdirectory such as /var/www/html/pub/content.
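A rough sketch of that approach (the NFS server name and export path are placeholders, not from this thread):

# Mount the NFS export at a web-facing subdirectory instead of /tmp or /var itself
mkdir -p /var/www/html/pub/content
mount -t nfs nfsserver:/Sat_export /var/www/html/pub/content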

When I do a Satellite export from my server and go to import it, I place it in a web-facing location with enough storage to hold the updates.

One thing I do to reduce the size/footprint of the 800+ GB export is run hardlink -cv /path/to/import. Be careful: only run this against the import directory, and ideally before you have copied it to, say, an external drive (if you copy it afterwards, you have to rsync -Hau, where the "-H" carries the hardlinks). This reduces my export from over 800 GB to just under 200 GB. The hardlink command deduplicates the RPMs in this case (my Red Hat Technical Account Manager, or "TAM", told me about the hardlink command in this scenario).
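Put together, hedged as a sketch (the import path is the placeholder used above; the external-drive path is likewise hypothetical):

# Deduplicate identical RPMs; run this ONLY against the import directory
hardlink -cv /path/to/import

# Copy to an external drive: -H preserves hardlinks, -a is archive mode,
# -u skips files that are newer at the destination
rsync -Hau /path/to/import/ /mnt/external_drive/import/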

In order to import the channels, you will need the content to be visible from the web server, so perhaps /var/www/html/pub/content (with proper permissions, of course, and verified SELinux contexts).
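For a local directory like that, the permission and context check might look like this (a sketch; restorecon alone is usually enough, since /var/www/html already defaults to the httpd_sys_content_t type):

# Let the Apache user own the content and restore the default SELinux labels
chown -R apache:apache /var/www/html/pub/content
restorecon -Rv /var/www/html/pub/content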

I have found that when I do a content-view export (thank you, Rich Jerrido of Red Hat), I end up with a number of subdirectories that lead to a "content" folder, and that's where everything is kept.

In my case, the rsync carried the SELinux contexts because I placed it under the existing web directory.

Are you really taking an export from a Satellite 5 to a Satellite 6? I've only done an export from one Satellite 6 to another Satellite 6 system (or a v5 to another v5, in the past).

I know this is very, very old - lol - but I've been looking for a good, solid export/import from connected to disconnected, as the size is ridiculously large in our environment. But to answer the SELinux question on the NFS mount: it's not an issue. We still use NFSv3 for a lot of things, and the NetApp will not work with fcontext labeling, but on the server you mount it on you can use the context= option on the mounting side. My example is something we use to have Oracle send its large log files to a volume on the NetApp:

Mystorage:/remotelogs/DBserver4 /var/log/mylogs nfs defaults,context="system_u:object_r:var_log_t:s0",nodev,nosuid,noexec,acl 0 0

For httpd, it needs to be context="system_u:object_r:httpd_sys_content_t:s0"; the rest of your mount options depend on your environment.

Hi Frank,

For years, we have done a content-view export to /var/export, where we have SAN-connected storage. We reduce the size of the content-view export from 1.3 TB to 350 GB or so with the hardlink command mentioned below and in other Satellite posts in the forum. It is vital to use "-Hau" with rsync to retain the deduplicated size of the content view.

You can try an NFS mount, but evaluate the write speed, and you may have to run sealert and some follow-up commands if you are denied write access on NFS.
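If writes are denied, the usual diagnostic is something like this (assuming the setroubleshoot-server package is installed):

# Analyze logged AVC denials and print suggested fixes
sealert -a /var/log/audit/audit.log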

For our collection of RPMs and repositories, after a successful export we run hardlink -cvv /var/export/name_of_organization_export.1.0, which takes our 1.3 TB content-view export down to roughly 350 GB.

IMPORTANT: when doing an rsync to the external drive, you MUST use -Hau with rsync to RETAIN the hardlinks from the previous command, or the result expands back out to the original size. This must be done for every rsync to retain the reduced size after the hardlink command mentioned earlier. For import, we have a serious amount of storage on /var on our "disconnected" network, where we IMPORT our content view.
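Concretely, every copy step would look something like this (the destination path is a placeholder):

# -H is what carries the hardlinks; without it the copy balloons back to full size
rsync -Hau /var/export/ /mnt/external_drive/export/

# Sanity-check that the deduplicated size held at the destination
du -sh /mnt/external_drive/export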

For us, we do not use NFS. If you use NFS, evaluate its ability to work in your environment. Before we used SAN storage, we had a local RAID 6, but we then migrated to VMware and SAN-attached storage, or at minimum speedy VMDK files from VMware.

Regards,
RJ

Thank you, RJ. I'm still a noob with Sat 6, and even less experienced with exporting content. We do have a guy who is our go-to for Satellite stuff, so when he's not here we kind of stumble around like toddlers :) When we run an export with hammer content-export complete version --id 81, I get the export-*.tar.gz archive and the two JSON files.
When you use hardlink, do you extract that archive and then re-tar.gz it?
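For what it's worth, the workflow being asked about would look something like this untested sketch, with placeholder filenames (GNU tar stores hardlinks as link entries by default, which is what would keep the re-created archive small):

# Unpack, deduplicate, and re-archive the export
mkdir extracted && tar xzf export-00001.tar.gz -C extracted
hardlink -cv extracted
tar czf export-dedup.tar.gz extracted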