API Solutions in your environment

Hi everyone,

I wanted to start a discussion around the use of the Satellite API. Do you have any solutions in place involving its use that you wouldn't mind sharing? I used to use it in a previous role, and found it difficult to find resources with sample scripts other than the few that are provided. I would love to hear how it is being used and in what role.

 

Thanks!

 

Jim Lyle

Technical Account Manager (TAM)

Responses

We have only scratched the surface of the API at our org. 

 

Some of the things we use the API for:

  • Dump out all the RPMs in a specific channel for creating a standard yum repo (see the sketch after this list)
  • Cross-reference RHSAs with CVEs and find out which systems are affected
  • Ensure system profile names are standardized
  • Ensure all systems have an updated package profile
  • Generate specifically formatted custom reports or lists of information in CSV or HTML format
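For the first item, a minimal sketch in Python 2 with xmlrpclib (the style used elsewhere in this thread); the URL, credentials, and channel label are placeholders:

import xmlrpclib

SATELLITE_URL = "https://satellite.example.com/rpc/api"   # placeholder
client = xmlrpclib.Server(SATELLITE_URL, verbose=0)
key = client.auth.login("username", "password")           # placeholder credentials

# Print the filename of every RPM in the channel; these can then be
# fetched into a directory and fed to createrepo.
for pkg in client.channel.software.listAllPackages(key, "rhel-x86_64-server-5"):
    print "%s-%s-%s.%s.rpm" % (pkg['name'], pkg['version'],
                               pkg['release'], pkg['arch_label'])

client.auth.logout(key)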

All useful applications for sure. Thanks!

I mainly use the API to integrate VMware Orchestrator (yes, I know, I need to switch to RHEV) to deploy machines.

I also use the API to sync the main channels to our test, dev, and prod environments according to our lifecycle.

Thanks for sharing!  Interesting use case to integrate VMware Orchestrator.  Mind sharing what methods you use?

 

Thanks!

 

Jim

Jim,

We create lists of available updates for our customers and "out of date" installations using the Satellite API.

Kind regards,

Jan Gerrit Kootstra

Hi Jan, yes, I can see that being handy and easy to automate.  Do you manually kick off the queries when needed, or have you automated them?  I could see an automated query followed by an email of the results being quite handy.

 

Thanks again!

 

Jim

  

Hi,

 

I'm using APIs more and more these days. Here are some examples of API usage at build time:

 

  • Add servers to groups based on location and hardware details (e.g. if a server has fiber cards, it's added to the SAN group; if a server has an IP that belongs to a subnet in Europe, it gets added to the EU group, and so on). See the sketch after this list.
  • Subscribe servers to channels based on hardware profile (e.g., again, if a server has fiber cards, it gets subscribed to the san channel, where the NetApp, PowerPath, and other EMC agents are)
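A minimal sketch of the subnet-to-group idea; the group name, subnet test, and connection details are illustrative placeholders:

import xmlrpclib

client = xmlrpclib.Server("https://satellite.example.com/rpc/api")
key = client.auth.login("username", "password")

# Collect the IDs of systems whose primary IP sits in a (made-up) EU subnet
eu_ids = []
for system in client.system.listSystems(key):
    net = client.system.getNetwork(key, system['id'])
    if net['ip'].startswith('10.20.'):      # crude subnet test, for illustration
        eu_ids.append(system['id'])

# The final True means "add these systems to the group" (False would remove)
client.systemgroup.addOrRemoveSystems(key, 'eu-servers', eu_ids, True)
client.auth.logout(key)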

 

Some other scripts of mine run weekly or monthly and collect information for support or auditing purposes:

 

  • Weekly update of support entitlements from Dell: for each physical server registered to the Satellite, it gets the service tag from the hardware details, queries Dell, gets the support information, and saves it as a note on the system (see the sketch after this list).
  • Monthly report of systems with PowerPath installed: for audit purposes, it creates a list of servers with the EMCpower.LINUX package installed, gets some basic hardware details, puts everything in a spreadsheet, and sends it by email.
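Assuming a client/key session as in the earlier sketches, the core loop of the Dell script might look like this; fetch_dell_support() is a hypothetical stand-in for the Dell lookup, and the service tag may need extracting from the DMI asset string:

for system in client.system.listSystems(key):
    dmi = client.system.getDmi(key, system['id'])
    tag = dmi.get('asset', '')              # service tag lives in the DMI asset data
    if tag:
        info = fetch_dell_support(tag)      # hypothetical helper that queries Dell
        client.system.addNote(key, system['id'],
                              'Dell support entitlement', info)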

 

I've also created a web form that allows people to patch servers by checking/unchecking tasks. Besides the obvious package update task, there are other tasks such as forcing a filesystem check, sending email before and after, updating firmware, disabling monitoring while patching, etc. The form then translates those tasks into an actual script that is scheduled on the selected system at a specific time and date. You could do this with the "run command" feature alone, but I wanted to make it easier for people and save them from writing different huge scripts.
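The translation step might reduce to something like this sketch (client/key as before; the task snippets, server_id, and timestamp are illustrative):

import xmlrpclib

# Each checked task on the form contributes a snippet to one combined script
tasks = {
    'patch': 'yum -y update\n',
    'fsck':  'touch /forcefsck\n',
}
script = '#!/bin/sh\n' + tasks['patch'] + tasks['fsck']

# Schedule the combined script as root on the selected system at the chosen time
when = xmlrpclib.DateTime('20130101T02:00:00')
client.system.scheduleScriptRun(key, server_id, 'root', 'root',
                                600, script, when)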

 

I'm now also working on using it to authenticate users so that they can do bare-metal server builds entirely through the command line, using Dell's vmcli tool. Vmcli works on its own, but wrapping it in Satellite authentication allows me to record who did it and when. It's also useful for extracting the list of kickstarts available to choose from. Still in progress, though.

 

Thanks,

Eric.

Hi Eric,

 

Thank you very much for that detailed response.  I especially love the top two use cases.  I am actually working with a customer to implement that very thing at this time.

 

The web form you created is very impressive; again, I can think of a real-life scenario where it would assist another customer. They have remote sites across the US, each with an SA responsible for patching the servers at their assigned location. A nice website to let them do that would work well for them. I will mention it to them; we hadn't considered that.

 

One thing I have seen being done is using the API to add users to the Satellite and give them the appropriate permissions. They had AD at their site and, based on group membership, would automatically add users, grant permissions, and send an email. It was pretty slick.

 

I would much appreciate it if you let the group know how your bare-metal server builds project works out; very interesting indeed.

Thanks again!

 

Jim Lyle, RHCE

Technical Account Manager (TAM)

There is a very handy little tool that I use, "spacecmd", which can be found in EPEL. It can be run remotely from your desktop or on the Satellite server itself.

I have had mixed results using spacecmd; it doesn't seem to perform as well as it should, so I tend to use it for one-off requirements. If you are serious about getting the most out of Satellite within your Unix estate, then I'd recommend making use of the API.

Some of the bespoke CLI tools I have written in Python that make use of the API include:

  • Reporting an overview of all software channel statistics, with the option of reporting the actual file-system usage for a list of given channels (taking into account duplicate packages by comparing package md5 checksums between channels). Useful for audit; see the sketch after this list.
  • Reporting a list of all Red Hat Proxy servers in our environment and the number of client systems each proxy is serving.
  • An interactive TUI channel subscription management tool for subscribing/unsubscribing hosts from child software channels (faster than logging into the Satellite WebUI :-) ) for admins.
  • Satellite entitlements monitoring.
  • An automated RPM package channel assignment tool (fulfils a bespoke need that rhnpush does not).
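For the first tool, the duplicate-aware usage calculation might boil down to the following (channel labels are examples; client/key as in earlier sketches):

seen = set()
total_bytes = 0
for label in ['rhel-x86_64-server-5', 'clone-rhel-x86_64-server-5']:
    for pkg in client.channel.software.listAllPackages(key, label):
        details = client.packages.getDetails(key, pkg['id'])
        if details['checksum'] not in seen:   # count duplicated packages only once
            seen.add(details['checksum'])
            total_bytes += int(details['size'])
print "%.1f GiB on disk" % (total_bytes / 1024.0 ** 3)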

We use it to...

  • Update the system description with: internal baseline version; RHEL version; product configuration (database, file server, etc.); hardware platform (dmidecode | grep -i "product name"); and who installed the server (authentication during %PRE or in an external deployment portal, re-using the session key in %POST). A sketch follows this list.
  • Move the server from cloned channels to Red Hat channels, in case it's a server configuration that needs all the newest updates. Due to how my baseline uses activation keys, it wasn't straightforward to join a new base channel using an activation key.
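A sketch of the description update as it might run from %POST, re-using the session key (values and server_id are placeholders):

# Stamp the collected build metadata onto the system record
details = {'description': 'Baseline 3.2 / RHEL 5.8 / Database / '
                          'ProLiant DL380 G7 / built by jdoe'}
client.system.setDetails(key, server_id, details)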

We use spacecmd to...

  • Generate reports on which errata/CVEs affect which systems
  • Generate a baseline; that way, disaster recovery for the RHN Satellite is simplified even if we suffer data corruption. We just install an empty RHN Satellite and run the "create-our-RHEL-baseline" script.
  • Pretty much all the features of spacecmd, including cloning activation keys and exporting/importing configuration channels, activation keys, and kickstart profiles.

I just started working with the API, but currently I use a Python script to schedule remote commands for patching. Here, we use F5's BIG-IP for load balancing web apps, so in the same script I am able to disable servers with F5's API, then patch/boot them using Satellite's API. After reading this thread I can see that I am drastically underutilizing this tool!

I would love to see a repository set up for people like us to share our scripts. (I commented on adding this feature in this thread: https://access.redhat.com/discussion/looking-feedback-what-can-we-do-improve-satellite-documentation so speak up if you're interested.)

Jim,

The listing is started by hand; the list is emailed to the Satellite user who activated the listing.

Kind regards,

Jan Gerrit

Hi all,

I have a question: has anyone written a script to get the hypervisors (ESX, KVM, or Xen nodes) that are used by the Satellite clients?

Kind regards,

Jan Gerrit Kootstra

We began using the API around 2 years ago, and have a few dozen scripts.  Many of them simply allow us to do something that's much faster through the CLI than through a GUI.  Others integrate with other processes (making a new host to monitor in Zabbix, adding a new system to Cobbler, etc.).  Here are a few things we have API scripts for:

  • show us the errata/package differences between a source channel (e.g. rhel-x86_64-server-5) and a cloned channel (sandbox, devel, beta); a sketch of this diff follows the list
  • bring said updates into the cloned channel
  • clone a channel
  • clone a kickstart tree
  • upload custom values (read from a spreadsheet or MySQL database) to a system
  • show which version(s) of our custom packages are installed on a given system
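The first script's diff is essentially the following sketch (labels are examples; the advisory key name differs between Satellite versions, hence the helper):

def adv(e):
    # 'advisory' on older Satellites, 'advisory_name' on newer ones
    return e.get('advisory') or e.get('advisory_name')

src = client.channel.software.listErrata(key, 'rhel-x86_64-server-5')
dst = client.channel.software.listErrata(key, 'devel-rhel-x86_64-server-5')
missing = set(adv(e) for e in src) - set(adv(e) for e in dst)
for advisory in sorted(missing):
    print advisory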

On the client/customer side, we use it to:

  • occasionally upload customer-specific versions of config files (e.g. /etc/oratab, /etc/crontab) to a locally-managed configuration channel
  • download custom info for a server to a local file, which we use to custom-build LVMs

We also created a Python library of commonly-used functions, so we just import the ones we need in a particular script.

Like others, I am only scratching the surface of the API, but one helpful use for us right now is issuing remote commands through the API.

For example, I have a daily script that runs on the Satellite server and sends a remote command to ALL servers through the Satellite API, in order to get NFS filesystem usage from each of our servers. The remote command includes sending the data file back to the Satellite server, where a later script compiles the data and sends the report to our NFS storage admins (sketched below).

It's convenient to have all of this done from the central Satellite server alone, rather than having the script located on every server (~200) with a crontab entry for it.
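The scheduling loop is roughly the following (client/key as in earlier sketches; the df command and paths are illustrative):

import datetime
import xmlrpclib

# Schedule the same remote command on every registered system
script = '#!/bin/sh\ndf -P -t nfs > /var/tmp/nfs_usage.$(hostname)\n'
now = xmlrpclib.DateTime(datetime.datetime.now())
for system in client.system.listSystems(key):
    client.system.scheduleScriptRun(key, system['id'], 'root', 'root',
                                    300, script, now)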

I'm only just getting the hang of the API and so far have found it extremely useful. We're primarily using it to run a bunch of regular reporting scripts:

  • a script collecting the number of errata available for systems, then collating that into a bunch of pie charts on a web page for the status screen in the ops room (see the sketch after this list)
  • run a command on the systems to collect CPU information (number of cores, populated sockets, and type), wait for the script to run across (most) systems, then collect the results via the API and add them to custom values for each system; this information can then be reported on a regular basis for software licensing
  • run a command to figure out which z/VM LPAR our machines run on and upload that to a custom value, so we have a quick reference even if the machine is stopped
  • run ad hoc reports on the number of systems with a particular package, systems that are not members of a group they probably should be in, etc.
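A sketch of the collection side of the first two items (client/key as before; the custom-value key must already be defined on the Satellite, and some_server_id is a placeholder):

from collections import defaultdict

# Bucket each system's outstanding errata by advisory type for the pie charts
counts = defaultdict(lambda: defaultdict(int))
for system in client.system.listSystems(key):
    for e in client.system.getRelevantErrata(key, system['id']):
        counts[system['name']][e['advisory_type']] += 1

# Store collected CPU data against a system as custom values
client.system.setCustomValues(key, some_server_id, {'cpu_cores': '8'})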

I have one issue with the automated reporting scripts, though: there is no 'read only' user account available, and I can't store the password anywhere as it grants root access across all the subscribed systems. Any suggestions to fix that would be very gratefully received; running API scripts via cron would make my life a lot easier.

I do a lot of different things with it:

1. reporting for troubleshooting issues (base_channel settings, activation keys, etc)

2. check entitlements

3. mass-update settings in KS profiles (add/remove packages, update partition layouts, set options)

4. true vulnerability reports (checking for vulnerabilities against the RH channels, not the ones a system is currently associated with).

The closest you'll get, AFAIK, is just unchecking all the 'admin' roles on the account. Not nearly the same thing, but this apparently needs to be a feature request. In your scripts you could do something like 'wget' does, by including a file that has the credentials, or better yet, run an external command (e.g. gpg --decrypt) and parse out the values, which are then supplied to the API login routine.

An example Perl snippet might be (note that backticks, not system(), capture the command's output):

my @rhncreds = split /\n/, `gpg -d ~/.rhncreds`;

my $session = $client->call('auth.login', $rhncreds[0], $rhncreds[1]);

Better yet, use an associative array...

 

Interesting stuff David,

What's your approach to getting true vulnerability reports?

I gave it a try some time ago, but ended up with a script that temporarily moved systems to Red Hat channels to extract a list of RHSAs that hadn't been applied.

Sorry if the formatting is off.  I can't get leading whitespace to persist in this editor.  Anyone know how to do that?

Basically what I'm doing is identifying for each system what arch they are, then grab all the packages from the RHEL channels for that release and arch:

rhel5 = client.channel.software.listAllPackages(authKey,'rhel-x86_64-server-5')
rhel5_tools = client.channel.software.listAllPackages(authKey,'rhn-tools-rhel-x86_64-server-5')
rhel5_cluster_storage = client.channel.software.listAllPackages(authKey,'rhel-x86_64-server-cluster-storage-5')
rhel5_cluster = client.channel.software.listAllPackages(authKey,'rhel-x86_64-server-cluster-5')
rhel5_prod = client.channel.software.listAllPackages(authKey,'rhel-x86_64-server-productivity-5')
rhel5_supp = client.channel.software.listAllPackages(authKey,'rhel-x86_64-server-supplementary-5')
rhel5_virt = client.channel.software.listAllPackages(authKey,'rhel-x86_64-server-vt-5')
rhel5_x64_all = rhel5 + rhel5_tools + rhel5_cluster_storage + rhel5_cluster + rhel5_prod + rhel5_supp + rhel5_virt

Once I have that in place, I then grab the system's current packages:

system_package_list = client.system.listPackages(authKey,system['id'])

and loop through all the packages in the system_package_list and compare that to the base package list (rhel5_x64_all, in this case), looking for matches:

for package in system_package_list:
    arch = getArch(package['arch'])
    matched = False
    for base_package in base_package_list:
        pname = package['name'] == base_package['name']
        parch = arch == base_package['arch_label']
        release = package['release'] == base_package['release']
        version = package['version'] == base_package['version']
        if pname:
            matched = True
            pkey = base_package['name']+base_package['arch_label']
            package_dict.setdefault(pkey,{})
            package_dict[pkey][base_package['id']] = {}
            package_dict[pkey][base_package['id']]['data'] = base_package
            if release and version:
                package_dict[pkey][base_package['id']]['current'] = True
            else:
                package_dict[pkey][base_package['id']]['current'] = False
    if not matched:
        unmatched_packages.append(package['name']+'.'+package['arch'])

Then I loop through the package_dict and grab all the advisory info, to figure out bugfix, security, and enhancement levels.
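A sketch of that last step, assuming packages.listProvidingErrata returns the advisory name under 'advisory' (RHSA = security, RHBA = bug fix, RHEA = enhancement):

counts = {'security': 0, 'bugfix': 0, 'enhancement': 0}
for pkey in package_dict:
    for pid, entry in package_dict[pkey].items():
        if entry['current']:
            continue                    # already at the newest version
        for erratum in client.packages.listProvidingErrata(authKey, pid):
            advisory = erratum['advisory']
            if advisory.startswith('RHSA'):
                counts['security'] += 1
            elif advisory.startswith('RHBA'):
                counts['bugfix'] += 1
            elif advisory.startswith('RHEA'):
                counts['enhancement'] += 1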

It's also important to note a couple things:

1. I had to map the arch information, because how it's referenced differs between system.listPackages (which returns AMD64) and packages.findByNvrea (which expects x86_64). A sketch of this mapping follows these notes.

2. While looping over the systems, I re-auth each time to avoid timing out when checking large numbers of systems.
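For note 1, the mapping can be as simple as this hypothetical sketch of the getArch helper used above:

ARCH_MAP = {
    'AMD64': 'x86_64',   # the one difference noted above; extend as more turn up
}

def getArch(arch):
    # pass through any arch label that already matches
    return ARCH_MAP.get(arch, arch)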

Fantastic stuff David,

Many thanks. That'll make things so much easier for us.

Not a problem.  Here's the whole thing:

http://pastebin.com/33RjL1tY

Couple things to note:

1. There's nothing in here for RHEL6. We don't have any in our env yet, so I just haven't gotten to it.  It should be pretty obvious how to add it, though.

2. I have it built to accept a config file with login credentials, so you don't have to pass them on the command line if you don't want to.  The format of the config file is:

[config]

login=yourid

password=yourpassword

3. It's not fast (that's why I re-auth after each system). This is really something to use as a daily or weekly report.  IIRC, the initial time cost is about 2 minutes to load the package data, and about 30 seconds per system (anecdotal observations only).

 

Typical usage is:

./vulnerability_report.py -c ~/configfile server1 server2 ...

or

./vulnerability_report.py -c ~/configfile

or 

./vulnerability_report.py -c -l userid -p password

 

Typical output:

System: server1, Arch: x86_64, Release: 5Server, Bugfixes: 307, Security Fixes: 139, Enhancements: 45

System: server2, Arch: x86_64, Release: 5Server, Bugfixes: 90, Security Fixes: 27, Enhancements: 10

 

I'm sure there are still some warts in here, but it's generally pretty clean.  I'm probably going to see if I can find ways to speed it up, too, most likely by only grabbing package lists once and only if necessary (i.e., figure out which arches need to be checked and grab packages only for those, but that means looping over the systems twice, blah blah blah).

That is an issue for me as well, as I'm not comfortable leaving the user and password anywhere in plain text, in a tarred file, or elsewhere. Since all of my scripts are Perl, what I've done to overcome this is compile them using the ActiveState Perl Dev Kit. Yes, you have to purchase an additional application (in our case we already had it), but so far so good.

Fantastic David, thank you so much.

Regarding usernames and passwords in Python, I wrote down some stuff on my blog about using AES encryption in Python scripts the last time I struggled with getting rid of plain-text passwords:

http://blog.hacka.net/#post74

I'll make sure to post here if I make any enhancements or changes.

Hey David,

Here's my adapted version with RHEL6 support added, but with RHEL3, RHEL4, and i386/i686 support removed. I've also added the Red Hat Directory Server channel.

http://pastebin.com/cib1CQda

Here's an adapted version that reads the AES-encrypted username and password from a file. Please note that the secret key path is hard-coded (/root/.linuxtskey).

http://pastebin.com/d1U5FpPK

Here's my handy AES encryption script as well, you can use that to create the encrypted entries for the username and password:

http://pastebin.com/wd756jjr

AES encryption support requires the python-crypto package, available from EPEL:

RHEL5: http://dl.fedoraproject.org/pub/epel/5/x86_64/repoview/letter_p.group.html

RHEL6: http://dl.fedoraproject.org/pub/epel/6/x86_64/repoview/letter_p.group.html

Has anyone created a package/module to create the session and connect to the server?

The different scripts in the spacewalk repo all use different methods and every script basically needs the same session management.  It would be great to be able to reuse a generic module.

I'm thinking about a module like http://git.fedorahosted.org/cgit/spacewalk.git/tree/scripts/ncsu-rhntools/rhnapi.py but with the session management like the one in http://git.fedorahosted.org/cgit/spacewalk.git/tree/utils/spacewalk-manage-channel-lifecycle.

Maybe even with a cache like the one used in http://git.fedorahosted.org/cgit/spacewalk.git/tree/utils/cloneByDate.py.
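Something along these lines is what I have in mind (a minimal sketch, hypothetical names, no caching):

import xmlrpclib

class RhnSession(object):
    """Reusable session wrapper: logs in on creation, logs out on exit."""

    def __init__(self, url, login, password):
        self.server = xmlrpclib.Server(url, verbose=0)
        self.key = self.server.auth.login(login, password)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.server.auth.logout(self.key)

# usage:
# with RhnSession('https://satellite.example.com/rpc/api', 'user', 'pass') as rhn:
#     for chan in rhn.server.channel.listAllChannels(rhn.key):
#         print chan['label']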

Thanks in advance.

I want to get a list of packages that are installed on a system but that aren't available in any of the channels the system is currently subscribed to.

Either because they were installed outside of RHN or because the version is more recent than the latest version in the channels the system is currently subscribed to.

I thought I could simply go over the list of packages returned by system.listPackages() and check packages.listProvidingChannels() for each, but my results appear to indicate that the 'id' is not a unique ID for the package.

>>> installed_packages = rhn.server.system.listPackages(rhn.session,1000010105)
>>> first = installed_packages[0]
>>> first
{'name': 'HPOvAgtEx', 'epoch': ' ', 'version': '2.10.006', 'release': '1', 'arch': 'i586', 'id': 5462}

>>> rhn.server.packages.getDetails(rhn.session,5462)
{'description': 'The e2fsprogs package contains a number of utilities for creating,\nchecking, modifying, and correcting any inconsistencies in second\nand third extended (ext2/ext3) filesystems. E2fsprogs contains\ne2fsck (used to repair filesystem inconsistencies after an unclean\nshutdown), mke2fs (used to initialize a partition to contain an\nempty ext2 filesystem), debugfs (used to examine the internal\nstructure of a filesystem, to manually repair a corrupted\nfilesystem, or to create test cases for e2fsck), tune2fs (used to\nmodify filesystem parameters), and most of the other core ext2fs\nfilesystem utilities.\n\nYou should install the e2fsprogs package if you need to manage the\nperformance of an ext2 and/or ext3 filesystem.\n', 'build_date': '2007-12-14', 'file': 'e2fsprogs-1.39-10.el5_1.1.x86_64.rpm', 'arch_label': 'x86_64', 'vendor': 'Red Hat, Inc.', 'name': 'e2fsprogs', 'license': 'GPL', 'build_host': 'hs20-bc2-4.build.redhat.com', 'checksum': '72a2a4a81451c3f756fe64e516c44da6', 'payload_size': '2276844', 'last_modified_date': '2008-01-07', 'summary': 'Utilities for managing the second and third extended (ext2/ext3) filesystems\n', 'epoch': '', 'providing_channels': ['acc-rhel-x86_64-server-5', 'dev-rhel-x86_64-server-5', 'prd-rhel-x86_64-server-5', 'rhel-x86_64-server-5', 'tst-rhel-x86_64-server-5'], 'cookie': 'hs20-bc2-4.build.redhat.com 1197603331', 'version': '1.39', 'checksum_type': 'md5', 'release': '10.el5_1.1', 'path': 'redhat/NULL/72a/e2fsprogs/1.39-10.el5_1.1/x86_64/72a2a4a81451c3f756fe64e516c44da6/e2fsprogs-1.39-10.el5_1.1.x86_64.rpm', 'id': 5462, 'size': '1003796'}

So the ID returned for the first package (which happens to be one installed outside RHN) matches that of another package.

findByNvrea for this package does return an empty list:

>>> rhn.server.packages.findByNvrea(rhn.session, first['name'], first['version'], first['release'], first['epoch'], first['arch'])
[]

I'm not sure what to make of this.

Any tips?

Hello All,
 
I was hoping I could get some insight as to how you use the API to handle patch management. I see there are a lot of options, but none of them seems to be THE way to go. I used to use the scheduleScriptRun method in Satellite to remotely run a "yum update -y", but this caused issues: when the command times out, it completely screws the server, because the yum update fails in the middle of removing/updating packages. Now I use some logic (described below) to get the relevant errata for the servers indicated in the arguments, then apply each erratum one at a time per server. I then check the scheduled actions queue to see if any of the jobs contain the phrase "Errata Update". If present, the script waits. If not, the script reboots the servers one at a time.
 
Is there a better way to do this? The issue I keep running in to is finding an accurate and reliable way to determine when the servers have finished patching.
*EDIT: I couldn't get the code I use to display properly, so I removed it. If interested, let me know and I'll try to post it again.
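In the meantime, here is a rough sketch of the flow described above (not the original script; server_ids stands in for the servers passed as arguments):

import datetime
import time
import xmlrpclib

client = xmlrpclib.Server("https://satellite.example.com/rpc/api")
key = client.auth.login("username", "password")

# Apply each relevant erratum individually, per server
for sid in server_ids:
    for erratum in client.system.getRelevantErrata(key, sid):
        client.system.scheduleApplyErrata(key, sid, [erratum['id']])

# Wait until no "Errata Update" actions remain in the schedule
def errata_pending():
    return any('Errata Update' in a['name']
               for a in client.schedule.listInProgressActions(key))

while errata_pending():
    time.sleep(60)

# Then reboot the servers one at a time
for sid in server_ids:
    client.system.scheduleReboot(key, sid,
                                 xmlrpclib.DateTime(datetime.datetime.now()))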

Hey Chris. You might actually get better results posting this as a separate discussion rather than a comment on this topic, but there are quite a few patch management-related solutions here so I'd love to hear if anyone has advice for you.

Thanks David, Will do.

I'm just getting into the API. From what's been shown here, I realize that it can add a lot of additional functionality to Satellite. Is there anywhere that I can access some of the scripts mentioned here? I saw a post about a public repo or shared site about a year ago. Has one been established, and what is the URL for it?

Something like the Puppet Forge might be useful. Unfortunately, I imagine most folks' Satellite API scripts have embedded proprietary data (logins, hostnames, paths, etc.).

I'm interested if someone finds a repo out there with bits ;-)