Satellite Best Practices


Hello, we have been using Satellite for more than three years, and for some time now it has been a priority service that needs to be up and running 24x7. We use it for deployments across different departments of the company and also for patching. We have a custom software channel that we clone from the base channel; it is used with an activation key for new systems, so basically all the servers are registered to that channel, which is also cloned quarterly for patching. Last week we had an issue after the clone, and the build process started failing because of it. We are now trying to find the best way to use Satellite for builds and patching so that a failed channel clone cannot break builds.

I'm thinking of having a base channel used for builds and then cloning a new channel every quarter for patching. That would isolate builds from patching; however, I would have to move systems across channels every quarter. What do you think would be the best way of doing this?

We are also thinking of adding a second Satellite for a staging environment where we make all the changes; after we check that everything is OK, we move them to prod via an inter-satellite sync. Have any of you used this technology? Any recommendations?

Last but not least, we also need a 24x7 service. We are running Satellite in a virtual machine and are trying to evaluate moving to an HA environment, but we haven't found much information about this kind of implementation. Are any of you running Satellite with HA?

Thanks a lot for your time and help!

Responses

I use a "base channel" for my builds... then patch up to a clone, by migrating to the clone and updating. I do not, however, use Cobbler, etc., and therefore all of my kickstart files are basically static (which I will assume is different from your situation?).

In my kickstart file, I use:

url --url http://rhnsat01.company.com/ks/dist/ks-rhel-x86_64-server-6-6.4

We have not explored HA for Satellite (though I did find this, which is a bit dated now - http://www.sistina.com/f/pdf/rhn/Satellite-HA.pdf). Instead, we were considering having some sort of DR strategy, but (knock on wood) our Satellite has been very solid, and Red Hat support has resolved any issues in a very short amount of time - therefore, the initiative did not seem worth it, at the time anyhow. I'm looking forward to what other folks post!

Thanks for your response. It is a different situation, but it helps. So basically you move the machines from your base channel to the cloned channel for patching, right? How do you do that: using the GUI and changing the channel, or with an API? Regarding HA, I also found that article, but it doesn't seem to be supported on newer versions. Support has been good for us as well; however, we are in a situation where a failure could affect a build in progress, which could have a big impact on the business. Thanks again for your response.

I believe the GUI actually manages most of the migration for you, if not completely.

I have a horribly-written script that I use which utilizes spacecmd (which you can get from the EPEL repo).
NOTE: All of my clones begin with crwxYYwkWW - YY=year, WW=week

#!/bin/bash

# $Id: migrate_host.sh,v 1.8 2014/09/25 17:51:58 root Exp $
# Maintained at: 
#  rhnsat01:/root/API/Cloning/migrate_host-ENV-based.sh
#  OUTPUTDIR = /var/www/html/pub/Migrations

#######################
##
##  N  N    OO    TTTTTT  EEEEE
##  NN N   O  O     TT    E
##  N NN   O  O     TT    EEE
##  N  N   O  O     TT    E
##  N  N    OO      TT    EEEEE
## 
## THIS SCRIPT IS A COMPLETE HACK AND TRAGEDY.  SHAKESPEARE COULD NOT HAVE DONE BETTER...
##
#######################

if [ $# -ne 2 ]
then

  echo "ERROR: unexpected parameters"
  echo ""
  echo "USAGE: "
  echo "       ${0} <ENV> <hostname>"
  echo "       ${0} crwx14wk25 rhvsrv91.corp.crwx.com"
  echo "            <ENV> typically resembles: crwx14wk25"
  echo "                  <hostname> is typically a FQDN"
  echo ""
  echo "  Will migrate rhvsrv91.corp.crwx.com from it's current channel(s)"
  echo "    to the crwx14wk25 channels"
  exit 9
fi

ENV=${1}
CLIENTNAME=$2
OUTPUTDIR=/var/www/html/pub/Migrations

echo "NOTE: Migrating $CLIENTNAME to $ENV"
echo "pausing 5 seconds to allow you to CTRL-C"
sleep 5

# Gather the Base/Child Channels and store them
spacecmd -q system_listbasechannel $CLIENTNAME > ${OUTPUTDIR}/${CLIENTNAME}.BaseChannel.0
spacecmd -q system_listchildchannels $CLIENTNAME > ${OUTPUTDIR}/${CLIENTNAME}.ChildChannels.0

OLDENV=`spacecmd -q system_listbasechannel $CLIENTNAME | cut -f1 -d\-`

echo ""
echo "NOTE: Checking Satellite for $CLIENTNAME"
#echo "spacecmd -q system_details $CLIENTNAME | grep Name && echo \"Host found: $CLIENTNAME\" || echo \"Host not found: $CLIENTNAME\"  "

#  CHECK SATELLITE TO MAKE SURE CLIENT EXISTS 
#  NOTE: if you have more than 1 entry in Satellite with the clientname provided, this will fail
NUMRETURN=`spacecmd -q system_details $CLIENTNAME | grep Name | wc -l`
if [ $NUMRETURN -ne 1 ]
then
  echo "Host not found (or too many host entries): $CLIENTNAME" 
  echo "Check the WebUI for Satellite for $CLIENTNAME"
  exit 9 
else 
  echo "Host found: $CLIENTNAME" 
  echo ""
fi

###################################################################################################################
# Make sure that the BaseChannel is only 1 line of output
# then set the client to use the new BaseChannel
if [ `cat ${OUTPUTDIR}/${CLIENTNAME}.BaseChannel.0 |wc -l` -ne 1 ]
then
  echo "ERROR: Something went wrong.  There are too few/many lines in ${OUTPUTDIR}/${CLIENTNAME}.BaseChannel.0 "
  exit 9
else
  echo
  echo "yes | spacecmd -q system_setbasechannel $CLIENTNAME ${ENV}-`cat ${OUTPUTDIR}/${CLIENTNAME}.BaseChannel.0 | sed "s/${OLDENV}-//g"`"
  yes | spacecmd -q system_setbasechannel $CLIENTNAME ${ENV}-`cat ${OUTPUTDIR}/${CLIENTNAME}.BaseChannel.0 | sed "s/${OLDENV}-//g"` && echo "We're cool" || exit 9
fi

# Add the Child Channels (pre-migration)
for CHILD in `cat ${OUTPUTDIR}/${CLIENTNAME}.ChildChannels.0 | sed "s/${OLDENV}-//g"`
do 
  echo "yes | spacecmd -q system_addchildchannels ${CLIENTNAME} ${ENV}-${CHILD}; sleep 1"
  yes | spacecmd -q system_addchildchannels ${CLIENTNAME} ${ENV}-${CHILD} && sleep 1
done

###################################################################################################################
#   HOUSEKEEPING - CHECKING WHETHER THE CHANNEL COUNT AFTER IS THE SAME AS BEFORE
echo "spacecmd -q system_listbasechannel $CLIENTNAME > ${OUTPUTDIR}/${CLIENTNAME}.BaseChannel.1"
spacecmd -q system_listbasechannel $CLIENTNAME > ${OUTPUTDIR}/${CLIENTNAME}.BaseChannel.1
echo "spacecmd -q system_listchildchannels $CLIENTNAME > ${OUTPUTDIR}/${CLIENTNAME}.ChildChannels.1"
spacecmd -q system_listchildchannels $CLIENTNAME > ${OUTPUTDIR}/${CLIENTNAME}.ChildChannels.1

if [ `cat ${OUTPUTDIR}/${CLIENTNAME}.ChildChannels.0 |wc -l` -ne `cat ${OUTPUTDIR}/${CLIENTNAME}.ChildChannels.1 |wc -l` ]; 
then 
  echo "Something does not look right.  There are an incorrect number of child channels now"
fi
exit 0

I plan on rewriting this whole migration script using Python though.

Also - I have two "types" of repos: ones that are managed by Satellite, and ones that are simply reposyncs (daily). My script only impacts the channels which are Satellite-managed.
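For reference, the reposync side can be as simple as a daily cron job like this (a sketch only; the repo ID and target path are made up, and you would serve the resulting directory however suits you):

# Hypothetical daily sync of an externally-managed repo (requires yum-utils)
reposync --repoid=epel -p /var/www/html/pub/repos -n
createrepo /var/www/html/pub/repos/epel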

Love the bit about Shakespeare

Ivan,

This link may provide some useful information
RHN Satellite Channel Lifecycle Management with spacewalk-clone-by-date

I have set up Satellite/Spacewalk in several different ways, but what I have generally settled on is:
1. A clone of the update channel is made monthly (business requirement); see the sketch after this list.
2. The activation key is associated with the latest clone after a successful clone, so all new server builds get the latest patch level.
3. Different groups of servers (dev/test/prod) are moved from their current clone channel to the latest clone channel on a schedule (e.g. 1 week apart), in line with formal change procedures.
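For step 1, spacewalk-clone-by-date (linked above) can do the date-frozen clone in one command. A minimal sketch, with placeholder credentials, channel labels, and date - check the man page for your version:

# Hypothetical monthly clone: freeze the RHEL 6 channel as of 1 Oct 2014
spacewalk-clone-by-date --username=admin --password=REDACTED \
  --channels=rhel-x86_64-server-6 rhel6_x86_64_latest-20141001 \
  --to_date=2014-10-01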

For me, this introduces a problem that I haven't been able to solve gracefully (and as this thread has James' attention, I will ask it here!)

When you have a channel that was cloned at the start of the month, e.g. 20141001, how are you determining which errata released since the first of the month (e.g. a patch released on 20141015) are relevant to that clone?

It's to answer the age old question "what patches are outstanding on host X?"

You can:
1. Clone the latest errata and packages into the clone channel, but then those packages become available and could accidentally get upgraded on hosts in the clone channel.
2. List errata from the main update channel from date X, but this won't determine which are relevant to hosts in the clone channel.
3. List errata that haven't yet been cloned (same issue as 2).
4. Move the host group temporarily into the 'latest' channel, run the report, and move them back to their original clone channel.

Can someone provide a simple solution to this one?

Hey Pixel - I don't know if I have asked this previously, but... do you use spacecmd? I think there are several possibilities using spacecmd to do this (easily).. which means there are absolutely some not-as-easy ways to do it with the API ;-)

system_comparewithchannel

spacecmd {SSM:0}> help system_comparewithchannel
system_comparewithchannel: Compare the installed packages on a
                           system with those in the channels it is
                           registerd to, or optionally some other
                           channel
usage: system_comparewithchannel <SYSTEMS> [options]
options:
         -c/--channel : Specific channel to compare against,
                        default is those subscribed to, including
                        child channels

So - this does not directly answer "how does client X compare to date Y?". But you can compare the current state of the host to your base channel, a clone, etc...
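For example, to compare a host against a specific clone rather than its subscribed channels (the hostname and channel label here are just placeholders):

spacecmd -q system_comparewithchannel rhvsrv91.corp.crwx.com -c rhel6_x86_64_latest-20141001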

There is also a way to compare the diff between a clone and another Channel.

spacecmd {SSM:0}> help softwarechannel_diff
softwarechannel_diff: diff softwarechannel files

usage: softwarechannel_diff SOURCE_CHANNEL TARGET_CHANNEL

Do either of those recommendations seem like they would suffice?

EDIT: I, too, have considered moving my host between channels and running "yum check-update" just to see the delta ;-)

James,

Cheers for this.. it is extremely close to meeting my requirement, and it wasn't a feature I was aware of in spacecmd, so I appreciate the response!

My only remaining issue is that the comparison is essentially a 1 to 1 with

system version -> channel version

What I am trying to achieve (it will help with errata processing too): if I have package-v0.1 installed and the channel has package-v0.4, I would like to know about package-v0.2 and package-v0.3 as well (i.e. the packages in between). That would make it easier to create a summary of all missing errata by processing the errata for each package, e.g.

package-v0.2 errata
package-v0.3 errata
package-v0.4 errata

= all missing errata

Will any amount of spacecmd foo get me there, or should I start digging up the API docs?

Thanks again.

PREFACE: this response is almost entirely speculation and assumption ;-)

I'm not saying that it's impossible to display all versions of a package between your client's current version and the channel's current one... but I have a suspicion as to why it might not be possible (and my example does not apply to Red Hat channels).

If you were to build a base channel of CentOS on 2014-10-22 (I believe it simply grabs the most current packages that day), then say you had a client built on 2013-06-01 and subscribed it to your 10-22 base channel. The delta it would see would be client = X, channel = Z... but the channel would have no way of knowing about the packages in between X and Z (Y, in this case), because it never saw them.

Now - when you do a satellite-sync against the Red Hat channels, I'm fairly certain it pulls down the entire base channel... every single package. In that case it would know that there was a package between X and Z.

So - given the two examples (and contingent on my assumptions being accurate), I wonder if the "safe bet" is to assume you cannot accurately reconstruct the upgrade path, and therefore not even try?

I need to think about this some more ;-)

EDIT: Another thing I just thought about... an incremental upgrade is not necessary. That is, you don't have to patch X-to-Y, then Y-to-Z (not that I have seen, anyway). You can just update from X-to-Z, and most often yum can figure out the delta without even needing the entire Z package for the update.

James,

Cheers for the response.

I understand the update process itself doesn't follow X -> Y -> Z, but instead X -> Z.

If I boil it down to the basic requirement:
- Which errata / CVEs present in the current/latest channel are still outstanding for the snapshot channel?

My plan of attack was to get a list of all versions of the package (i.e. Y and Z) and then consolidate the errata information for them (package_listerrata) to determine all the errata that are outstanding for a system/package.
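In spacecmd terms, that plan might look something like this (a rough sketch; package_search's query syntax and output format are assumptions worth verifying on your version):

# Hypothetical: collect errata for every available version of a package
PKG=bash
for NVR in `spacecmd -q package_search "name:${PKG}"`
do
  echo "== ${NVR} =="
  spacecmd -q package_listerrata ${NVR}
done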

Why am I so concerned about getting Y and Z?
Consider this scenario:

X - Currently installed version
Y - High importance security update
Z - Feature enhancement 

If I have version X installed and then retrieve the list of package differences, only Z is returned. I then extract the errata for Z, which tells me that I have a feature enhancement outstanding; I don't learn that Y, a high-importance security update, is also still outstanding.

Perhaps I am overthinking / overcomplicating this, so I am open to any other suggestions that achieve the same... I almost need something along the lines of errata_comparewithchannel.

As I was reading your response... it started to become clearer and then... I understood ;-)

I can think of a perfect (different but similar) recent example... we had our clone snapped in July... then the first bash update was released... then another... and rather than simply update my clone to the newest bash, I wanted to update to the "in-between" bash (so that I could test impact and then move forward with the most recent).

I have to admit that although I think I understand the question better.. this is a bit over my head.

If I were to mangle the process, I think I would do something along the lines of... (client-side)

yum --showduplicates list bash | awk '/Available/,/EOF/'

I realllly should learn Perl I bet...

# Hack: list the available versions of $PACKAGE from the installed one onward
PACKAGE="bash"
# Installed version-release, e.g. 4.1.2-15.el6_5.2
VERSION=`rpm -qa $PACKAGE | awk -F ".x86" '{ print $1 }' | sed "s/${PACKAGE}-//"`
# Note: use $PACKAGE here rather than hard-coding the name
yum --showduplicates list $PACKAGE | awk '/Available/,/EOF/' | awk "/${VERSION}/,/EOF/"
bash.x86_64                4.1.2-15.el6_5.2                rhel-x86_64-server-6 
bash.x86_64                4.1.2-29.el6                    rhel-x86_64-server-6 

I also wonder if...

spacecmd package_listerrata bash

Could do the trick? - I only have access to a recently built Spacewalk machine at the moment, but I'll try listerrata on my Sat server tomorrow.

I ended up going back to my old idea of moving the servers into the latest channel, checking the errata and moving them back. To achieve this with spacecmd I do the following:

spacecmd system_lock hostname.local
spacecmd system_listbasechannel hostname.local
spacecmd -y system_setbasechannel hostname.local rhel6_x86_64_latest 
spacecmd system_listerrata hostname.local
spacecmd -y system_setbasechannel hostname.local rhel6_x86_64_latest-20141001
spacecmd system_unlock hostname.local

The 'lock' doesn't quite achieve what I was hoping for (stopping users from updating via yum while the server is associated with the 'latest' channel), but I have left it in. The chances of a yum update occurring while the server is switched are extremely slim in my configuration, and I will be running the reports in the early AM.

Interestingly, the commands listed can be applied to groups, so if you keep groups associated with a single base channel it should be pretty easy to do in bulk.

Single host takes about 5-6 seconds.
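Wrapped in a loop over a group, the whole report might look like this (a sketch; the group name and channel labels are placeholders):

# Hypothetical bulk report: swap each group member to 'latest', capture errata, swap back
GROUP=prod-web
LATEST=rhel6_x86_64_latest
CLONE=rhel6_x86_64_latest-20141001
for HOST in `spacecmd -q group_listsystems ${GROUP}`
do
  spacecmd -q system_lock ${HOST}
  spacecmd -q -y system_setbasechannel ${HOST} ${LATEST}
  spacecmd -q system_listerrata ${HOST} > /tmp/${HOST}.errata
  spacecmd -q -y system_setbasechannel ${HOST} ${CLONE}
  spacecmd -q system_unlock ${HOST}
done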

I have ended up cloning errata through all environments at the moment. This allows me to accurately report on outstanding errata at all times (assuming the cloning is up to date), with the caveat that scheduling the application of errata is slightly more difficult, since you can't just select all errata and be done with it. However, it's not difficult to select all the errata up to a certain point and apply only them.
Not ideal - but the best we could come up with.
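For the "up to a certain point" selection, spacecmd appears to have a date-bounded variant that could replace manual picking (hedged: the channel labels are placeholders, and the exact usage and date format should be confirmed with 'spacecmd help softwarechannel_adderratabydate'):

# Hypothetical: publish only errata issued in a given window into the clone
spacecmd -y softwarechannel_adderratabydate rhel-x86_64-server-6 rhel6_x86_64_latest-20141001 20141001 20141015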

I really appreciate your input here; it helped me a lot in deciding on a cloned-channel solution for builds and patching. We are running Satellite on a VM with 16 GB of RAM, and sometimes cloning channels or doing massive channel operations takes a long time. What do you think about upgrading the current VM with more resources plus tuning, versus building a dedicated VM for the DB (Postgres in this case)?

Also, do any of you use the bidirectional configuration? I'm thinking that if I'm able to sync content between two Satellites, I could provide a 24x7 service using a load balancer: when one Satellite has issues, if all the channels are synced, builds can be done from the one that is up and running. I'm still reading the documentation, but it doesn't seem to say much about what content can be synchronized.

Thanks again.

Ivan,

I have found that regardless of what efforts I have made to speed up clone channel operations, they always seem slow. From my memory of investigating the slowness, it was CPU-bound, and the clone operation appears to consume only a single core.

If you are getting failures during the clone operation it would be worth looking at this link: https://access.redhat.com/solutions/43122

I modify the maxmemory value as described there, but bump it up to 4G (the server currently has 16G), and then don't have any issues with broken repos.
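For anyone who can't reach the link, my recollection is that the change boils down to a one-line rhn.conf setting on Satellite 5 (the parameter name and units are from memory, so verify against the linked solution):

# /etc/rhn/rhn.conf - taskomatic heap size in MB (assumed parameter name)
taskomatic.java.maxmemory=4096

# restart taskomatic to apply
service taskomatic restart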

I have seen the slowness on both Oracle and Postgres, and have moved the clone operations to scripted/scheduled tasks that run at night.
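Ours is literally just a cron entry along these lines (the script path is a placeholder):

# clone at 02:30 on the 1st of each month
30 2 1 * * /usr/local/sbin/monthly_clone.sh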

Perhaps someone has a quick fix for slow clones? Looking at you again, James!

Now that you mention it (slow performance)... my Satellite "feels" much quicker after the Postgres migration (the box is physical with 96GB of memory, though). I do recall that when I was building out POC/test Satellite environments on virtual machines, adding a few virtual CPUs helped the performance issues quite a bit. Tuned also seems to help, but the right profile will depend quite a bit on what type of host it is (physical vs virtual, SAN vs local disk, etc.).
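On the tuned point, picking a profile is a one-liner on RHEL 6 (virtual-guest is only an example; 'tuned-adm list' shows what's available):

# e.g. on a virtual machine
tuned-adm profile virtual-guest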

So - I think you already nailed it. Perhaps Remmele has some input ;-)

Hello guys, adding resources to Satellite plus some tuning worked perfectly. I am now trying to implement a base channel migration during the build and evaluating the best method. I thought running rhnreg_ks with a new activation key was going to be the best solution; however, this creates a duplicate profile in Satellite, which is really annoying. I have a cloned build channel (it is cloned every night) with an activation key associated with it and all the cloned child channels. I also have cloned base channels for prod and non-prod. The idea (in my head) is that the build registers the server to the build channel and patches it there, and afterwards I move it to the prod or non-prod patch channel and its respective child channels. I honestly can't find a way to do this from the kickstart itself.

This is the idea (it's a one-time process, since the server will remain in the patching channel until it is decommissioned):
server gets built ---> registered to BUILD channel ---> gets patched ---> I would like to move it to the PATCH channel

Input will be appreciated.

Ivan,

Why can't you add the node directly to the correct activation key at build time, based on some information from the host (i.e. IP address / hostname)? Rather than registering twice, have logic that places the node in the right base channel at build time based on local configuration.

If you do go ahead with moving the node between base channels, there are two real options: use the Spacewalk API (the Python tools are pretty good), or look into using spacecmd.
https://fedorahosted.org/spacewalk/wiki/spacecmd

I have moved most of my bespoke scripts from Python to straightforward spacecmd scripts now; it is extremely versatile and excellent for manipulating existing node configurations.
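For the specific move you describe, a small post-patch step run on the Satellite might be all you need (a sketch with hypothetical hostname conventions and channel labels; the prod/non-prod test is just one example of the "local configuration" logic above):

# Hypothetical post-build move: pick the patch channel by hostname convention
HOST=$1
case "${HOST}" in
  *prd*) TARGET=patch-prod-rhel6_x86_64 ;;
  *)     TARGET=patch-nonprod-rhel6_x86_64 ;;
esac
spacecmd -q -y system_setbasechannel ${HOST} ${TARGET}
# then add the matching child channels, e.g.:
# spacecmd -q -y system_addchildchannels ${HOST} ${TARGET}-custom-tools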

