Migrating from Satellite 5 to 6

Brett Thurber, published 2016-12-09 (last updated 2016-12-09)

Overview

Red Hat’s Systems Engineering group recently tackled the task of migrating their lab infrastructure from Satellite 5.6 to Satellite 6.2.4. The Satellite 5.6 server managed several hundred physical and virtual machines. The migration consisted of moving over DNS, DHCP, TFTP, PXE, custom provisioning scripts, and content. For those unfamiliar with Satellite 6 and its capabilities, please refer to: https://access.redhat.com/products/red-hat-satellite

Let’s take a look at how we accomplished our migration in roughly two days, during normal business hours.

Staging Satellite 6

In preparation for the migration, a bare metal machine was deployed as the new Satellite 6 server. The internal capsule services (DNS, DHCP, PXE, TFTP, Puppet) were not configured at this time because we planned to reuse the IP address of the existing Satellite 5.6 server and did not want to conflict with its provisioning services. The purpose of the staging was to create a new manifest and begin the sync of required content from the Red Hat Content Delivery Network (CDN). For our environment, the total content synced was ~100GB. We also moved our custom provisioning scripts into custom kickstarts and snippets.

The basic installation looked something like:

# satellite-installer --scenario satellite --foreman-initial-organization "initial_organization_name" --foreman-initial-location "initial_location_name" --foreman-admin-username admin-username --foreman-admin-password admin-password
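
Uploading the manifest and syncing repositories can also be scripted with hammer. A rough sketch, assuming the organization name used above and a manifest downloaded from the Red Hat Customer Portal to /root/manifest.zip:

# import the subscription manifest
hammer subscription upload --organization "initial_organization_name" --file /root/manifest.zip

# enable and synchronize a repository (RHEL 7 Server RPMs shown as an example)
hammer repository-set enable --organization "initial_organization_name" \
  --product "Red Hat Enterprise Linux Server" \
  --name "Red Hat Enterprise Linux 7 Server (RPMs)" \
  --releasever "7Server" --basearch "x86_64"
hammer repository synchronize --organization "initial_organization_name" \
  --product "Red Hat Enterprise Linux Server" \
  --name "Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server"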

Migrate Provisioning Scripts into Snippets

The existing Satellite 5 instance provisioned systems with a common kickstart file and a number of ‘snippets’ to further customize each environment. Both Satellite 5 and Satellite 6 use Anaconda/Kickstart at their core to provision systems. Satellite 6 introduces a more robust method for creating dynamic kickstarts, backed by Ruby’s ERB templating. It also gives first-class status to partition tables, allowing an administrator to maintain them as a separate entity from a kickstart or snippet.

The common kickstart file on the Satellite 5 server included a dynamic partitioning script to detect whether a system was a VM or bare metal and select the appropriate target disk. This script was tweaked a bit and created as the standard partition table for Systems Engineering:

#Dynamic - this line tells Foreman this is a script rather than a static layout
# adapted from http://projects.theforeman.org/projects/foreman/wiki/Dynamic_disk_partitioning
# limit OS to one disk (in case of multiple disks attached)
virtual=0
#test if it's KVM or QEMU
dmidecode | egrep -i 'vendor' |grep QEMU && virtual=1
dmidecode | egrep -i 'vendor' |grep SeaBIOS && virtual=1
cat /proc/cpuinfo |grep QEMU && virtual=1
dmesg |grep "Booting paravirtualized kernel on KVM" && virtual=1
#test if it's vmware
dmidecode | grep -i 'manufacturer' |grep VMware && virtual=1
if [ "$virtual" -eq 1 ]; then
    cat <<EOF > /tmp/diskpart.cfg
bootloader --append="nofb quiet splash=quiet crashkernel=auto" --location=mbr --boot-drive=vda
autopart --type=lvm
zerombr
clearpart --all --initlabel
ignoredisk --only-use=vda
EOF
else
    cat <<EOF > /tmp/diskpart.cfg
bootloader --append="nofb quiet splash=quiet crashkernel=auto" --location=mbr --boot-drive=sda
autopart --type=lvm
zerombr
clearpart --all --initlabel
ignoredisk --only-use=sda
EOF
fi

The snippet on the original Satellite was essentially a one-liner that retrieved and executed a bash script from a git-controlled, web-accessible directory on a remote server, with some logic to run further customization scripts based on hostname, project, or other variables.
For this phase, we created a snippet on the new Satellite that runs the same command and duplicates the functionality. One of the next steps will be to modernize these scripts by converting them into Ansible playbooks.
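
In practice, the duplicated snippet amounts to something like the following; the server name and script paths here are hypothetical:

# fetch and run the common post-install script from the git-backed web directory
curl -s http://scripts.example.com/post/common.sh | bash
# run any host-specific customization, keyed off the short hostname
curl -sf http://scripts.example.com/post/hosts/$(hostname -s).sh | bash || true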

The “Satellite Kickstart Default” template provided by Satellite was cloned and trimmed of unnecessary bits to keep it clean. With no desire to run, for example, Chef or Salt clients, those checks were removed. While migrating the existing scripts over to Ansible, we wanted those scripts to run by default but also wanted the ability to disable them when deploying a host to test the new playbooks, so a check for a ‘no_post’ parameter was added to the kickstart. Host parameters are a Satellite 6 feature that lets a sysadmin define deployment-specific variables at either the host or host group level. These parameters are available to the kickstart template and can be used programmatically to customize the host.
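
As an illustration, the parameter can be toggled per host with hammer (the host name below is hypothetical):

hammer host set-parameter --host testvm.example.com --name no_post --value true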

The resulting kickstart template is shown below:

<%#
kind: provision
name: se_kickstart
oses:
- CentOS 5
- CentOS 6
- CentOS 7
- RedHat 5
- RedHat 6
- RedHat 7
- Fedora 19
- Fedora 20
%>
<%
  rhel_compatible = @host.operatingsystem.family == 'Redhat' && @host.operatingsystem.name != 'Fedora'
  os_major = @host.operatingsystem.major.to_i
  # safemode renderer does not support unary negation
  pm_set = @host.puppetmaster.empty? ? false : true
  puppet_enabled = pm_set || @host.params['force-puppet']
  section_end = (rhel_compatible && os_major <= 5) ? '' : '%end'
  no_post = @host.param_true?('no_post')
%>
install
<%= @mediapath %>
lang en_US.UTF-8
selinux --enforcing
keyboard us
skipx
<% subnet = @host.subnet -%>
<% if subnet.respond_to?(:dhcp_boot_mode?) -%>
<% dhcp = subnet.dhcp_boot_mode? && !@static -%>
<% else -%>
<% dhcp = !@static -%>
<% end -%>
network --bootproto <%= dhcp ? 'dhcp' : "static --ip=#{@host.ip} --netmask=#{subnet.mask} --gateway=#{subnet.gateway} --nameserver=#{[subnet.dns_primary, subnet.dns_secondary].select(&:present?).join(',')}" %> --hostname <%= @host %><%= os_major >= 6 ? " --device=#{@host.mac}" : '' -%>
rootpw --iscrypted <%= root_pass %>
firewall --<%= os_major >= 6 ? 'service=' : '' %>ssh
authconfig --useshadow --passalgo=sha256 --kickstart
timezone --utc <%= @host.params['time-zone'] || 'UTC' %>
<% if @dynamic -%>
%include /tmp/diskpart.cfg
<% else -%>
<%= @host.diskLayout %>
<% end -%>
text
reboot
%packages --ignoremissing
yum
dhclient
ntp
wget
@Core
<%= section_end -%>
<% if @dynamic -%>
%pre
<%= @host.diskLayout %>
<%= section_end -%>
<% end -%>
%post --nochroot
exec < /dev/tty3 > /dev/tty3
#changing to VT 3 so that we can see whats going on....
/usr/bin/chvt 3
(
cp -va /etc/resolv.conf /mnt/sysimage/etc/resolv.conf
/usr/bin/chvt 1
) 2>&1 | tee /mnt/sysimage/root/install.postnochroot.log
<%= section_end -%>
%post
logger "Starting anaconda <%= @host %> postinstall"
exec < /dev/tty3 > /dev/tty3
#changing to VT 3 so that we can see whats going on....
/usr/bin/chvt 3
(
<% if subnet.respond_to?(:dhcp_boot_mode?) -%>
<%= snippet 'kickstart_networking_setup' %>
<% end -%>
#update local time
echo "updating system time"
/usr/sbin/ntpdate -sub <%= @host.params['ntp-server'] || '0.fedora.pool.ntp.org' %>
/usr/sbin/hwclock --systohc
<%= snippet "subscription_manager_registration" %>
<% if @host.info['parameters']['realm'] && @host.realm && @host.realm.realm_type == 'Red Hat Identity Management' -%>
<%= snippet "idm_register" %>
<% end -%>
# update all the base packages from the updates repository
yum -t -y -e 0 update
<%= snippet('remote_execution_ssh_keys') %>
<% if puppet_enabled %>
# and add the puppet package
yum -t -y -e 0 install puppet
echo "Configuring puppet"
cat > /etc/puppet/puppet.conf << EOF
<%= snippet 'puppet.conf' %>
EOF
# Setup puppet to run on system reboot
/sbin/chkconfig --level 345 puppet on
/usr/bin/puppet agent --config /etc/puppet/puppet.conf -o --tags no_such_tag <%= @host.puppetmaster.blank? ? '' : "--server #{@host.puppetmaster}" %> --no-daemonize
<% end -%>
<% if no_post  -%>
echo "not running common post scripts"
<% else -%>
<%= snippet "se_common" %>
<% end -%>
sync
<% if @provisioning_type == nil || @provisioning_type == 'host' -%>
# Inform the build system that we are done.
echo "Informing Foreman that we are built"
wget -q -O /dev/null --no-check-certificate <%= foreman_url %>
<% end -%>
) 2>&1 | tee /root/install.post.log
exit 0
<%= section_end -%>

Finally, a standard host group was created in Satellite to tie this all together. The default host group provisions a Red Hat Enterprise Linux 7.3 host using the custom kickstart and partition table. Similar to Satellite 5, an activation key is set on that host group, which allows a deployed host to register and access yum content from Satellite.
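
A rough hammer equivalent is sketched below; the host group, partition table, and activation key names are hypothetical, and the activation key is attached through the kt_activation_keys parameter consumed by the subscription_manager_registration snippet:

hammer hostgroup create --name "se-default" --architecture "x86_64" \
  --operatingsystem "RedHat 7.3" --partition-table "se_dynamic"
hammer hostgroup set-parameter --hostgroup "se-default" \
  --name "kt_activation_keys" --value "se-default-key"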

Rip and Replace

After the server was staged, the time came to begin the migration. Because our Satellite 5.6 server provided DNS, DHCP, and provisioning services (Cobbler), we set a cutoff date and time for any new provisioning and then copied all of the DNS and DHCP configuration and operational files. This included the following:

  • dhcpd.conf
  • named.conf
  • All zone files under /var/named

Change IP of Sat 6 server

With all the pertinent DNS and DHCP configuration files copied, it was time to power off the Satellite 5.6 machine and change the IP address of the Satellite 6 replacement. There is a Red Hat Knowledge Base article describing the needed steps: https://access.redhat.com/solutions/1358443

For our environment, we simply changed the IP address of the Satellite server in the appropriate ifcfg file under /etc/sysconfig/network-scripts and then configured the integrated capsule services.

Configure Capsule Services

After the IP address change, the integrated capsule was configured to provide DNS, DHCP, TFTP, PXE, and Puppet services. The initial configuration looked something like:

satellite-installer --foreman-proxy-dns true --foreman-proxy-dns-interface em1 --foreman-proxy-dns-zone <primary_DNS_zone> --foreman-proxy-dns-forwarders <forwarder_IP> --foreman-proxy-dns-forwarders <forwarder_IP> --foreman-proxy-dns-reverse <reverse_lookup_zone> --foreman-proxy-dhcp true --foreman-proxy-dhcp-interface em1 --foreman-proxy-dhcp-range "<DHCP_address_range>" --foreman-proxy-dhcp-gateway <gateway_IP> --foreman-proxy-dhcp-nameservers <IP_address_of_Sat6> --foreman-proxy-tftp true --foreman-proxy-tftp-servername $(hostname)

It’s important to note that if multiple DNS zones need to be created, rerun the following for each zone:

satellite-installer --foreman-proxy-dns-zone <DNS_zone>
satellite-installer --foreman-proxy-dns-reverse <reverse_lookup_zone>

Migrating DNS/DHCP

Once all of the DNS zones were created and the DHCP address range was configured on the integrated capsule, the next step was to set dns-managed and dhcp-managed to false. This prevents Satellite from overwriting the configuration and data files and has no impact on adding new hosts to Satellite 6. To set the managed services to false, run:

satellite-installer --foreman-proxy-dns-managed false
satellite-installer --foreman-proxy-dhcp-managed false

Next, it was time to move the configuration and data files over. The tables below map the migrated files and locations between Satellite 5 (with Cobbler) and Satellite 6:

DHCP

Satellite 5           Satellite 6
*/etc/dhcp.conf       /etc/dhcp/dhcp.conf
                      /etc/dhcp/dhcp.hosts

On a new installation, the /etc/dhcp/dhcp.hosts file is empty. Copy only the previous statically assigned reservations from the /etc/dhcp.conf file into it.
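
Each reservation is a standard ISC dhcpd host entry, for example (host name, MAC address, and IP are placeholders):

host testvm.example.com {
  hardware ethernet 52:54:00:aa:bb:cc;
  fixed-address 10.0.0.50;
}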

DNS

Satellite 5           Satellite 6
/var/named/           /var/named/dynamic/
*/etc/named.conf      /etc/named.conf
                      /etc/zones.conf
                      /etc/named/options.conf

/etc/zones.conf is included from /etc/named.conf and contains all of the zones that were created through the installer before dns-managed was set to false for the integrated capsule.
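
A zone stanza in /etc/zones.conf looks roughly like the following; the zone name is a placeholder and the exact key name and file path depend on the installer defaults:

zone "example.com" {
  type master;
  file "dynamic/db.example.com";
  update-policy {
    grant "rndc-key" zonesub ANY;
  };
};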

/etc/named/options.conf is also included from /etc/named.conf and contains any needed options. For our environment, the only addition to the stock file is:

allow-recursion { any; localnets; localhost; };

(*) These files are managed by /etc/cobbler/.template

When copying the zone file content, be sure to remove or replace any old references to the Satellite 5 server if it is being directly replaced by the Satellite 6 server, for example NS and A records that must now point to the correct hostname and IP address.
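
For example (hypothetical names and addresses), an NS record and its A record pointing at the old server:

example.com.        IN NS  sat5.example.com.
sat5.example.com.   IN A   10.0.0.10

would be updated to reference the new Satellite 6 host:

example.com.        IN NS  sat6.example.com.
sat6.example.com.   IN A   10.0.0.10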

Once all of the configuration and data files are copied over, restart both the DNS and DHCP services on the Satellite 6 server.
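
On a RHEL 7-based Satellite server, for example:

systemctl restart named
systemctl restart dhcpd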

Slave DNS

This step is optional; however, for our environment we use a secondary DNS server “just in case”. Slave DNS was already configured for the previous Satellite 5 server, so the named configuration stayed the same.
To configure our slave DNS server for zone transfers from the Satellite 6 server, we first copied over the rndc key:

scp root@<sat6server>:/etc/rndc.key /etc/rndc.key
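
For reference, the unchanged slave zone stanzas on the secondary server look roughly like this (zone name and primary IP are placeholders):

zone "example.com" {
  type slave;
  file "slaves/db.example.com";
  masters { 10.0.0.10; };
};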

We deleted all of the existing zone files under /var/named. Then we reloaded the named configuration so that the new rndc key would take effect:

rndc reconfig

We then triggered a zone transfer from the primary DNS server for each zone:

rndc refresh <zone_file>

You can check /var/log/messages for zone transfer status messages. Success will appear as:

Dec  2 15:43:32 <slave_DNS_server> named[25440]: transfer of '<zone_file>/IN' from <Sat_6_server_IP>#53: Transfer completed: 1 messages, 5 records, 234 bytes, 0.085 secs (2752 bytes/sec)
Dec  2 15:43:32 <slave_DNS_server> named[25440]: zone <zone_file>/IN: sending notifies (serial 30)

As a sanity check, verify that the zone file exists under /var/named/.

Register Systems

After DNS and DHCP are migrated, it’s time to register existing servers in the environment to Satellite 6 for content and management. For our environment, we synced the required content and created an activation key. We then registered each system to the Satellite 6 machine:

yum -y --nogpgcheck install <URL_of_Sat_Server>-1.0-1.noarch.rpm
subscription-manager register --org="<organization>" \
  --activationkey="<name_of_activation_key>"

After the server is registered it can be managed by navigating to Hosts > All Hosts, selecting the registered machine, clicking Edit and then clicking Manage host in the upper right to allow control over the server’s build cycle. Additional details can be found here: https://access.redhat.com/discussions/1528223

Test!

Provisioning

With everything in place, the time came to create new hosts and verify both provisioning and discovery. For our environment, we configured a RHV environment as a compute resource, which allowed us to test VM provisioning, including creation and deletion of hosts.
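
Adding RHV as a compute resource can be done from the web UI or with hammer; a sketch of the latter, with a hypothetical RHV Manager URL and credentials (RHV uses the Ovirt provider):

hammer compute-resource create --name "rhv-lab" --provider "Ovirt" \
  --url "https://<rhv_manager>/ovirt-engine/api" --user "admin@internal" \
  --password "<password>" --organizations "initial_organization_name" \
  --locations "initial_location_name"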

We also tested bare metal provisioning and Foreman discovery for new hosts. Additionally we verified correct DNS entries were created for both forward and reverse lookup records and propagated to our slave DNS.

Updates and Upgrades

Finally, once existing hosts are registered and subscribed to the correct content via activation keys, run yum update to update systems as needed. For our environment, we also upgraded RHV from 3.6 to 4.0 using Satellite 6 for content and management.

Next Steps

In an effort to further modernize and improve how we deploy servers, the next step will be to convert the existing bash scripts into Ansible playbooks. These scripts handle things like adding ssh keys, installing packages, or tweaking a system for a specific product.

These are all great candidates for automation with Ansible. We will bring in Ansible Tower to give us a centralized place to manage the Ansible jobs. Ultimately, the kickstart file will be responsible solely for adding an ansible user ssh key and making an API call to inform Tower that the host is built; Tower will take over from there (a rough sketch of that flow is shown below).
Watch out for a future post down the road where we detail the trials and tribulations of rewriting those old bash scripts as Ansible playbooks.
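
As a rough illustration, the slimmed-down %post could look something like this; it assumes Tower's provisioning callback feature, and the public key, Tower URL, job template ID, and host config key are all placeholders:

%post
# create the ansible user and authorize a hypothetical public key
useradd -m ansible
mkdir -p /home/ansible/.ssh
echo "ssh-rsa AAAA... ansible@tower" >> /home/ansible/.ssh/authorized_keys
chmod 700 /home/ansible/.ssh
chmod 600 /home/ansible/.ssh/authorized_keys
chown -R ansible:ansible /home/ansible/.ssh
echo "ansible ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/ansible
# notify Ansible Tower that the host is built via a provisioning callback
curl -k -s --data "host_config_key=<host_config_key>" \
  https://<tower_server>/api/v1/job_templates/<job_template_id>/callback/
%end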

Contributors

David Critch, Partner Systems Engineer, Systems Design and Engineering dcritch@redhat.com
Brett Thurber, Engineering Manager, Systems Design and Engineering bthurber@redhat.com
