Using r10k with Red Hat Satellite 6
Table of contents
- Background
- Disclaimer
- Version information
- Installing r10k
- Configuring r10k
- Setting up the repository
- Deploying the modules
- Importing into Satellite 6
- Assigning the environment
- Pros and cons of working with r10k
Background
From time to time, we get questions from customers already using Puppet on whether it is possible to incorporate an existing r10k-based Puppet workflow into Satellite 6.
These customers are often well accustomed to the r10k workflow and have no desire to change it, but still want to migrate to Satellite 6 as their Puppet platform, for support and integration reasons.
It turns out that getting r10k to work with Satellite 6 is by itself not very complicated, but there are a few caveats. If you work around those - by following the documentation below - you'll have a Satellite 6 server with r10k in no time!
Disclaimer
Red Hat does not officially support r10k or this workflow, but considering that r10k just manages Puppet modules that are served out by the embedded Puppet master in Satellite 6, we do not anticipate problems with this configuration.
Version information
This was tested most recently on a Satellite 6.1.6 server, running on Red Hat Enterprise Linux 7.2 with r10k 2.1.1.
Installing r10k
You can install r10k on Red Hat Enterprise Linux 7 with the following command:
gem install r10k -v 2.1.1
This will install r10k into /usr/local/share/gems/gems/r10k-2.1.1, with the r10k binary in the /usr/local/bin directory. The command will also install a series of dependencies. You can find out which gems are pulled in as dependencies by running gem dependency r10k. [1]
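To verify the installation, you can ask r10k for its version:

# Confirm the binary is installed and reports the expected version
/usr/local/bin/r10k version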
Configuring r10k
r10k expects a global configuration file in either /etc/r10k.yaml or /etc/puppetlabs/r10k/r10k.yaml. The first location is deprecated, but neither location works really well for a Satellite 6 Puppet installation in /etc/puppet. To keep all Puppet related configuration together, I have put my configuration file in /etc/puppet/r10k.yaml.
Create an /etc/puppet/r10k.yaml file containing the following, adapted to your specific requirements:
---
# The location to use for storing cached Git repos
cachedir: '/var/cache/r10k'

# A list of git repositories to create
sources:
  # This will clone the git repository and instantiate an environment per
  # branch in /etc/puppet/r10k/environments
  someorg:
    remote: 'git@github.com:someorg/somerepo.git'
    basedir: '/etc/puppet/r10k/environments'
    postrun: ['/usr/local/bin/fix_perms.sh']
The /usr/local/bin/fix_perms.sh script resets ownership and the SELinux context on the files and directories r10k checks out. Otherwise, r10k will leave ownership set to the user running the command and user_tmp_t as the SELinux label, which prevents Puppet from reading them.
Thus we'll create an executable /usr/local/bin/fix_perms.sh script with the following content:
#!/bin/bash
chown -R apache: /etc/puppet/r10k/environments
restorecon -Fr /etc/puppet/r10k/environments
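Don't forget to make the script executable, or r10k will not be able to run it as a postrun command:

chmod +x /usr/local/bin/fix_perms.sh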
Next, we'll create the /etc/puppet/r10k/environments directory:
mkdir -p /etc/puppet/r10k/environments
The /etc/puppet/puppet.conf configuration file specifies where Satellite 6 and Puppet search for Puppet environments, in the environmentpath setting. If you use the default environmentpath, all Satellite 6 managed environments will be purged when executing r10k. As this breaks Satellite 6, we will instead add a second location for r10k to create environments in, and update /etc/puppet/puppet.conf to search there as well. Hence, in /etc/puppet/puppet.conf, change:
environmentpath = /etc/puppet/environments
to
environmentpath = /etc/puppet/environments:/etc/puppet/r10k/environments
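After changing puppet.conf, restart the Puppet master so the new environmentpath takes effect. On Satellite 6 the Puppet master runs under Apache/Passenger, so restarting httpd is one way to do this, assuming the default setup; you can verify the resulting value first:

# Show the effective setting, then restart the Puppet master (runs under Passenger)
puppet config print environmentpath
systemctl restart httpd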
Setting up the repository
We need to create a git repository that contains the actual information r10k uses to create environments. This information includes what Puppet modules to download and from where, as well as information on where to look for modules on the local system. Create the base git repository by running:
mkdir ~/r10krepo
cd !$
git init .
Two files need to be present in that directory: Puppetfile and environment.conf. The Puppetfile contains a list of modules to be pulled from either git or the Puppet Forge. Below is an example Puppetfile that installs a couple of modules from the Puppet Forge and one from a GitHub repository.
forge 'forge.puppetlabs.com'
# Forge Modules
mod 'puppetlabs/ntp', '4.0.0'
mod 'puppetlabs/stdlib', '4.6.0'
mod 'puppetlabs/postgresql', '4.3.0'
mod 'puppetlabs/mysql', '3.3.0'
mod 'puppetlabs/concat', '1.2.1'
mod 'apache',
  :git => 'https://github.com/puppetlabs/puppetlabs-apache',
  :tag => '1.4.0'
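If you want to catch mistakes early, r10k can check a Puppetfile for syntax errors without deploying anything; run this from the directory containing the Puppetfile:

# Validate the Puppetfile syntax only; nothing is downloaded
r10k puppetfile check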
The environment.conf configuration file could look like the example below. In Satellite 6's default Puppet configuration, the $basemodulepath variable expands to the directory the OpenSCAP and Red Hat Insights Puppet modules are installed in. Leave this as is if you want to use those:
modulepath = modules:$basemodulepath
Now, we'll add the newly created Puppetfile and environment.conf files to the git repository. Run:
git checkout -b qa
git add .
git commit -v
In order to allow r10k to pull from this repository, we push it to the central location you configured in /etc/puppet/r10k.yaml: [2]
git remote add origin git@github.com:someorg/somerepo.git
git push origin qa
As you can see, we push the contents of the local qa branch to a remote qa branch. r10k uses the branches in the remote git repository to set up environments with the same name. In this example, the qa branch will result in a local Puppet environment of the same name.
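For example, to add a development environment later, you would create a new branch and push it, and r10k will map it to a Puppet environment of the same name:

git checkout -b development qa   # branch off the qa branch
git push origin development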
Deploying the modules
We can now deploy all modules referenced in our Puppetfile into the qa Puppet environment, by running:
r10k deploy environment qa -p -v -c /etc/puppet/r10k.yaml
This command will first check out the git repository mentioned in the /etc/puppet/r10k.yaml file. It will read the Puppetfile we stored in the qa branch and deploy all modules mentioned there into the qa environment, which will be created under /etc/puppet/r10k/environments. The /usr/local/bin/fix_perms.sh script we created earlier will take care of ownership and SELinux labeling.
If you run the command above without qa, r10k will deploy all environments at once. r10k will iterate over all branches in your git repository and deploy an environment for each branch it finds.
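In other words, omitting the environment name deploys everything:

# Deploy one environment per branch in the remote repository
r10k deploy environment -p -v -c /etc/puppet/r10k.yaml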
Importing into Satellite 6
The final step is to import the newly created environment into Satellite 6. The first phase of that is to check whether the environment already exists in Satellite 6. To figure that out, we run:
hammer environment list | awk ' /^[0-9]/ { print $3 }' | grep qa
If you don't see any output, you'll need to create the environment first. An environment needs to be created within the scope of an organization and location. Below, the 'qa' environment is created within the Primary_DC location for the ACME_Corp organization.
hammer environment create --name qa --location Primary_DC --organization ACME_Corp
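If you script this step, a small wrapper can combine the check and the create so it is safe to run repeatedly (a sketch; replace the location and organization names with your own):

#!/bin/bash
# Create the environment only if hammer does not list it yet
ENV=qa
if ! hammer environment list | awk '/^[0-9]/ { print $3 }' | grep -qx "$ENV"; then
  hammer environment create --name "$ENV" --location Primary_DC --organization ACME_Corp
fi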
Creating the environment makes it show up in Satellite 6. Next, you'll need to import the classes from this environment. You can do this through the web interface, but that is hard to automate (and relatively slow, as it iterates over all environments), so we'll use Hammer instead and import only our new qa environment:
hammer proxy import-classes --id 1 --environment qa
The id above is the identifier of our Capsule server; in my case this was my Satellite server itself, hence the 1. You can find a list of your Capsule servers with hammer proxy list.
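If you want to script this lookup, you can extract the id from that output; the hostname below is a placeholder for your Capsule's FQDN:

# Print the id of the Capsule whose name matches
hammer proxy list | awk '/satellite.example.com/ { print $1 }'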
If you want to deploy r10k managed Puppet modules onto more than one Capsule, you will need to execute the above procedure on every one of them. You will also need to keep the modules on your Capsules in sync with each other by running the r10k deploy commands on all Capsules with little time in between.
Executing these steps on all Capsules in the infrastructure will create identical environments on all Capsules, that you can then easily assign through Satellite.
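A minimal way to keep the Capsules in sync is to trigger the deploy on each of them over SSH right after deploying on the Satellite itself (a sketch, assuming key-based SSH access and an identical r10k setup on every Capsule; the hostnames are placeholders):

#!/bin/bash
# Run the same r10k deploy on every Capsule, back to back
CAPSULES="capsule1.example.com capsule2.example.com"
for capsule in $CAPSULES; do
  ssh root@"$capsule" 'r10k deploy environment -p -v -c /etc/puppet/r10k.yaml'
done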
Assigning the environment
Now that the environment is known to Satellite, you can assign it to a host or host group through the Puppet environment drop-down menu when editing that host or host group. Smart parameter assignment works the same way as with Puppet modules in content views.
As an example of how to do this with Hammer, you can assign the production environment to a host with id 40 by running:
hammer host update --id 40 --environment production
Or, similarly, to assign the production environment to a complete host group (with id 12), run:
hammer hostgroup update --id 12 --environment production
Pros and cons of working with r10k
The primary benefit of working with r10k is not having to change an existing workflow. Apart from that, r10k lets you iterate rapidly without having to publish and promote content views, while still being able to use Satellite features, like OpenSCAP, that rely on Puppet for client-side deployment.
On the downside, with r10k you need to deploy your modules to each and every Capsule by hand. It also becomes a bit more complicated to define a 'build' for a specific group of hosts, since the build is now defined in two places: r10k for the configuration, Satellite for provisioning and software management.
1. I haven't verified whether or not any of these gems conflict with the ones we ship for Satellite 6, but I haven't experienced any problems as of yet.
2. r10k assumes that you already have SSH set up for git, so if you want to use git over SSH, you need to exchange keys manually first.

Comments
After upgrading to Sat6.2 I had to reverse the order of environmentpath: environmentpath = /etc/puppet/r10k/environments:/etc/puppet/environments
The newer version of puppet will create a 'production' directory in the first environmentpath entry it comes to, if it does not already exist. If you use 'production' as one of the branch names in git for r10k, puppet will find and use the first match, which ends up being the empty directory created by the puppet agent. Reversing the order, or even removing /etc/puppet/environments from environmentpath, solves the problem.
Great remark, thanks! I'll look into this a bit later and update the post if necessary.
On Satellite 6.2, z-stream upgrades reset the puppet.conf to default, removing the r10k environment path. Bit of a 'gotcha' at the moment; hopefully an installer option to set the path will be added as a permanent feature.
Hey Geoff, I don't think there's an installer option at the moment to ignore the environment path from puppet.conf. Might make sense to open an RFE for this.
This is the /etc/foreman-installer/custom-hiera.yaml file that we use. The server_envs_dir should be what you are looking for. Unfortunately the Satellite installer puppet module doesn't allow a hiera array if you are using both r10k and katello created environments.
Hey, that's pretty cool! Having seen actual production deployments of r10k and Satellite, I have noticed that most customers use either r10k or Satellite native constructs. Taking your custom-hiera.yaml should help them a lot already!
Also, I have looked into changing the puppet module we use to manage the installation of the puppet component, and it's sadly not easy to change it to allow for an array here. You could file an RFE though...
This post was really helpful to me in setting up Satellite with r10k. Unfortunately, after upgrading to 6.2.11, all content hosts have gone out of sync. They can no longer find their Puppet environments. I did go back into the /etc/puppet/puppet.conf and /etc/puppet/r10k.yaml to restore the environment paths, but still content hosts can't find their environments. Is anyone else experiencing this?
Yup - running the Satellite installer (even when upgrading) will 'reset' the Satellite configuration, as it comes from an internal puppet installer. Some parameters can now be overridden via a local hiera file; however, the puppet environment path is not one of them. In your setup, you will need to reset the puppet.conf environmentpath parameter and restart the puppet master every time you run a satellite-installer command (and, if using capsules, do it there too). I will detail an alternative and permanent way in another reply :-)
I did initially notice that my modified configs were overwritten. Even after restoring them, my r10k setup remained broken. I've tried your recommendations below but it still seems to fail. I opened a case with Red Hat support but I'm wondering if they'll be able to resolve this.
Check my post further up about /etc/foreman-installer/custom-hiera.yaml; after adding those settings I can run the installer without puppet.conf being messed up.
Here are some changes that we have applied locally (several customers) to address some of these issues.
Firstly, to get around the problem of needing to change the puppet.conf environmentpath every time an update is performed, we have found that it is possible to create symbolic links from the r10k environments into the main puppet environments directory. This means that we only need the default /etc/puppet/environments path defined. (The r10k environments themselves live in /etc/puppet/r10k/environments.)
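In shell terms, the symlink idea looks roughly like this (a sketch based on the default paths used in this post):

# Link every r10k-managed environment into the default environmentpath
for env in /etc/puppet/r10k/environments/*; do
  ln -sfn "$env" "/etc/puppet/environments/$(basename "$env")"
done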
The other issue we addressed is that in a setup with multiple capsules operating as puppet masters, the environments need to be replicated to each one. We fixed this by implementing a postrun script from the r10k run.
For this, our r10k.yml looks like this:
For the post_deploy script to work, we need an account that has passwordless SSH access from the Satellite to each Capsule. In our customers' setups we use the 'svc-r10k' service account to run r10k on the Satellite, and this account has access to all capsules to push modules.
The magic is then in the post_deploy script:
We have found that these changes to Maxim's original setup get around this handful of barriers.
Do you have ideas on the best way to run the r10k deploy script using Git hooks? I'm currently using cron jobs and that is... not ideal :)
Göran - I may be able to help. In our setup, we have installed GitLab-CE and Jenkins-LTS. We store our puppet modules in GitLab-CE. We create projects in Jenkins for the puppet modules we have stored in GitLab-CE. We've exchanged ssh-keys between our Satellite, Jenkins and GitLab-CE servers. We also use the web hooks configuration in GitLab-CE in conjunction with the CI configuration in Jenkins. When we push changes to a puppet module on the GitLab-CE server, the configured GitLab-CE project web hook calls the Jenkins project. Jenkins then does some checking of the puppet code, then upon success runs r10k [environment] [puppet-module] on the Satellite server.
We use the puppet/r10k module to configure our webhook, but we have it pinned to version 4.2.0, as that is the last version with Puppet 3 support. You will want to review the r10k::webhook and r10k::webhook::config classes for configuring your webhook. For example, you will probably need to set use_mcollective => false and tweak the user, group, SSL settings, etc. Once you have your webhook listener configured and opened in your firewall, you will need to configure your Git hook according to your Git server. From Bitbucket we have a post-receive webhook configured with the URL pointing back to our satellite server, something like this: http://satellite.example.com:8088/payload
Thanks, just what I needed to get started.
I installed rh-ruby24-ruby since sinatra needed a Ruby version >2.0 and I didn't want to mess anything up on satellite.
Forked puppet-r10k-4.2.0
Edited /usr/lib/systemd/system/webhook.service (webhook.custom.service.erb) and changed to: ExecStart=/usr/bin/scl enable rh-ruby24 /usr/local/bin/webhook
Edited /usr/local/bin/webhook (webhook.custom.bin.erb) and changed shebang to /usr/bin/env ruby
Installed the webhook on satellite with:
EDIT: so I ended up making a lot of changes in my fork to make the webhook work as we like it with Bitbucket. It can be found here: https://github.com/gorantornqvist/puppet-r10k/tree/satellite6_webhook
That's pretty cool, Göran, thanks for sharing!
It might be worth updating this for Puppet4 with Satellite 6.3.