The Satellite Blog is a place for the Engineering team to publish tips and tricks about how to use our products, share videos that showcase new features, and point users to new content on the Customer Portal.
Latest Posts
Satellite Blog move - What you need to know
After many years of being hosted on the Customer Portal, the Satellite blog has been moved to the main Red Hat blog, alongside the majority of the official Red Hat blogs.
Along with hosting on a more modern platform, colocated with many of the other Red Hat blogs, this change will allow us to create more dynamic content, reach a broader audience, and cover a variety of management and automation topics and trends.
The new blog is part of the Automation and Management channel, which focuses on covering key topics across our management and automation portfolio. In the future, this channel will include blogs on all of the Management products, including Red Hat Satellite, Red Hat Insights, Red Hat CloudForms, and Ansible (including Ansible Engine, Ansible Tower, Ansible Networking, and Ansible Security). It will also show articles tagged with general Automation and Management topics.
If you want to filter to just the Satellite posts, you can use this link: https://satelliteblog.redhat.com
Starting with Satellite 6.4, Satellite can list new blog posts in the notification drawer as they are received. A future release of Satellite will have the URL updated by default, but in the meantime, you can change the location yourself by following these steps:
- Log in to the Satellite user interface
- Navigate to: Administer > Settings > Notifications
- In the RSS URL field click the pencil icon to edit the URL.
- Replace the URL with: https://www.redhat.com/en/rss/blog/channel/red-hat-satellite
- Click the checkmark to accept the changes.
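Alternatively, the same setting can usually be changed from the command line with hammer (a sketch; the rss_url setting name comes from upstream Foreman and may vary between releases):
hammer settings set --name rss_url --value "https://www.redhat.com/en/rss/blog/channel/red-hat-satellite"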
The settings should look similar to the screenshot below:
Note: The RSS feeds are updated twice daily, so this change will not be reflected immediately.
If you want to update the RSS feed immediately, you can ssh into your Satellite host and run the following command as a user with sudo permissions:
FOREMAN_RSS_LATEST_POSTS=3 foreman-rake rss:create_notifications
As mentioned, newer releases of Satellite will have the RSS URL changed by default, but this BZ was just submitted and is not currently assigned to a release.
As such, we suggest making this change in Satellite now so you can easily access the latest news and information about Satellite.
Posted: 2018-11-08T20:35:37+00:00
Satellite 6.3.4 is now available
Satellite 6.3.4 has just been released.
The main driver for the 6.3.4 release is ongoing performance and stability improvements.
There are 17 bugs squashed in this release - the complete list is below. There is one erratum for the server [1] and one for the hosts [2]. The install ISOs will be updated next week.
Customers who have already upgraded to Satellite 6.3 should follow the instructions in the errata.
Customers who are on older versions of Satellite should refer to the Upgrading and Updating Red Hat Satellite Guide. You may also want to consider using the Satellite Upgrade Helper if moving from Satellite 6.x to Satellite 6.3.
Customers who have received hotfixes should verify the list below to ensure their hotfix is contained in the release before upgrading. Please reach out to Red Hat Support in these cases.
This update fixes the following bugs:
- Candlepin throws 500 Internal Server Error for more than 40+ guests (BZ#1633252)
- Deprecate katello-backup, and katello-restore in Satellite 6.3 (BZ#1569672)
- Sat 6.3 does not update repositories with modified data from RCM/Entitlement Service (BZ#1570792)
- upgraded foreman-selinux has no label for 2375/tcp (BZ#1624026)
- Do not prevent host discovery for existing MAC/IP addresses (BZ#1624034)
- Cannot delete host when missing from the candlepin database (BZ#1624019)
- Content View is not updated on Content Host when change is made via Hosts / All Hosts (BZ#1624020)
- Re-add livecd-tools back to satellite repos (BZ#1624027)
- Capsule overview page errors unable to fetch logs "\xE1" from ASCII-8BIT to UTF-8 (BZ#1624035)
- Improve MonitorEventQueue performance for large workloads (BZ#1624038)
- Satellite installation fails on RHEL 7.6 Beta with /Stage[main]/Candlepin::Service/Exec[cpinit]/returns: change from notrun to 0 failed (BZ#1624022)
- clean_backend_objects does not verify managed host status prior to action (BZ#1624028)
- processing virt-who report blocks RHSM certs checks which can lead to 503 errors (BZ#1624045)
- On using UUID for content-host registration name, unexpected behavior in entitlement status (BZ#1624033)
- org_environment content access mode - authentication error while an environment is updated on the client (BZ#1624036)
- Unable to override hostgroup parameters from All hosts => edit host on WebUI (BZ#1624025)
- [RFE] Disable directory listing for /pub directory on satellite 6 using custom-hiera. (BZ#1624021)
[1] https://access.redhat.com/errata/RHBA-2018:2915
[2] https://access.redhat.com/errata/RHBA-2018:2914
Satellite Migration from RHEL 6 to RHEL 7
As a reminder, Red Hat continues to strongly recommend your Satellite and Capsule Servers only be run on RHEL 7. There are several reasons why you should move your Satellite environment from RHEL 6 to RHEL 7 including enhanced performance and long-term supportability.
Future releases of Satellite (6.3 and above) will only support RHEL 7 and above. In preparation for newer versions of Satellite, you need to start thinking about how to move from older versions of RHEL to RHEL 7.
While RHEL 6 does support an in-place migration from RHEL 6 to RHEL 7, this migration mechanism is not supported when running Satellite on the RHEL host. Instead, you will need to clone your Satellite environment from a host running RHEL 6 to another host running RHEL 7.
Review the Satellite 6.2.13 release blog for more detailed information about moving your Satellite environment from RHEL 6 to RHEL 7. 6.2.13 includes some important features for capsule backup and recovery which help to ease the movement from RHEL 6 to RHEL 7.
Posted: 2018-10-10T21:58:58+00:00
Provisioning VMware using userdata via Satellite 6.3-6.6
Will McDonald wmcdonal@redhat.com
Senior Consulting Architect
Red Hat UKI Services
Background
There are a number of different Red Hat technologies that can all be used to provision and configure VMware virtual machines.
- CloudForms
- Ansible
- Satellite
Each has its own strengths, and scenarios where one may be favoured over another, framed by technical requirements, environmental constraints and local preferences.
During a recent engagement, we wanted to use Satellite to bulk-provision relatively large numbers of VMs to VMware vCenter/ESX. Due to environmental constraints, we could not use Ansible, remote execution (REX) or SSH finish templates for post-provisioning configuration.
As a result, we ended up with a solution using the upstream Foreman Userdata plug-in to further customise provisioned hosts via userdata injection for the VMware customisation specification, and cloud-init to ‘phone home’. This plugin is currently being merged into Foreman core and will likely appear in future Satellite 6 releases. It is not currently a supported Red Hat component, so be aware of this if you plan to use it.
Prerequisites
- Red Hat Satellite 6.3 (we used 6.3.3)
- VMware vCenter Server Appliance 6.7
- VMware ESXi 6.7
It is possible to reproduce this in a lab-like environment using nested virtualisation but this requires:
- 8 CPU cores
- 32GB of RAM
- 200GB of disk, thin provisioned (> 670 GB if thick provisioned)
- VMware Workstation 14, or
- A nested ESX host with enough memory to run the VCSA in addition to VMs
Note: for those familiar with earlier releases of VCSA, the memory footprint for even a Tiny VCSA is substantial.
Setup Sequence of Events
In order to set up the initial environment, there are a few moving parts. At a high level we need to:
- Deploy a Satellite
- Deploy VCSA
- Deploy ESXi
- Satellite initial configuration
- Add manifest
- Configure and sync repositories
- Create a Lifecycle Environment
- Create a Content View and add a Repository to it
- Publish and promote the Content View
- Create Activation Keys and add Subscription
- Create a Subnet
- Create a Compute Resource
- Create a Compute Profile
- Create a Hostgroup
- Configure virt-who (optional)
- Install the Foreman Userdata Plug-in and restart Satellite
- Prepare provisioning templates (userdata and cloud-init)
- Configure the Operating System to use the userdata and cloud-init templates
- Prepare a VM template
- Create an empty VM
- Upload an ISO
- Install RHEL7
- Tweak networking (if required)
- Connect to Satellite temporarily
- Configure a public key (optional)
- Install open-vm-tools
- Install Perl
- Yum update
- Create copy (optional but recommended)
- Install cloud-init
- Configure cloud-init
- Create and run clean-up script
- Bake template
- Create Image in Satellite
- Test provisioning
Provisioning Sequence of Events
Once the environment is prepared, the provisioning workflow at a high-level is:
- User provisions VM(s) via the UI, API or hammer
- Satellite calls vCenter to clone the VM template
- Satellite userdata provisioning template injects customisation specification identity information
- Satellite cloud-init provisioning template instructs the VM to callback to Satellite when cloud-init runs post-provision
- vCenter clones the template to a VM
- vCenter applies the customisation specification (VM identity including hostname, IP and DNS)
- VM builds, cloud-init is invoked and calls back to Satellite on port 80, which then redirects to 443
Note: Even if registering the VM to a Capsule, the VM’s cloud-init phone-home always calls the Satellite, not its corresponding capsule. Make sure you take this into consideration in terms of firewall rules.
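For example, on a firewalld-based Satellite host, the relevant ports could be opened as follows (a sketch; adjust zones and source restrictions for your environment):
[root@satellite ~]# firewall-cmd --permanent --add-port=80/tcp
[root@satellite ~]# firewall-cmd --permanent --add-port=443/tcp
[root@satellite ~]# firewall-cmd --reload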
Network Configuration
The hosts were configured with the following FQDNs, IPs, operating systems and services. Forward and reverse DNS were correctly configured in DNS for all hosts.
Hostname              | IP          | OS            | Notes
dns.example.com       | 192.168.0.1 | RHEL7         | External DNS service
satellite.example.com | 192.168.0.3 | RHEL7         | Yum Content, Provisioning, Subscription Management
vcenter.example.com   | 192.168.0.4 | VMware Photon | vCenter
esxi1.example.com     | 192.168.0.5 | ESXi          | ESX Hypervisor
Management Endpoints
As this is just a demonstration lab, all SSL certificates are self-signed.
URL                               | Purpose
https://satellite.example.com     | Satellite management UI
https://vcenter.example.com:4580/ | VCSA management UI
https://vcenter.example.com/      | vCenter management UI
https://esxi1.example.com/        | ESX host UI (note: the ESX host's self-signed cert must be trusted in order to upload ISOs)
VMware Installation
Installing vCenter and an ESX host or two is not covered in-depth but isn't terribly onerous.
VMware Workstation 14 (or a suitably sized ESX host capable of running the VCSA) is recommended in order to correctly install the VMware 6.7 VCSA. In theory it can be run nested on other platforms, but the installer and the OVA have extensive customisation which would need to be worked through.
Satellite Preparation
Satellite Installation
This walkthrough assumes you have a basic Satellite deployed with an Organization and Location configured, a manifest installed but no other objects created (no products or repositories, content views, activation keys etc.).
See the following documents for further information on planning for, and installing, Red Hat Satellite:
Red Hat professional services will obviously always be more than happy to help here, too. :D
This demonstration used Satellite 6.3.3 but this should work for earlier releases too.
Satellite Initial Preparation
Hammer Setup
Hammer has already been configured with sensible defaults for organization and location.
Repositories
Configure the upstream Red Hat repositories. In the Satellite UI, under Content > Red Hat Repositories, enable the following:
- RPMS / Red Hat Enterprise Linux Server / Red Hat Enterprise Linux 7 Server (RPMs) / Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server
Sync the upstream content in the UI, under Content > Sync Status: Expand All, Select All, and Synchronize Now.
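The sync can also be kicked off from the CLI (a sketch; the repository and product names match the repo enabled above, and org-example is the organization name used later in this walkthrough):
[root@satellite ~]# hammer repository synchronize --organization org-example --product "Red Hat Enterprise Linux Server" --name "Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server"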
Lifecycle Environments
Create a lifecycle environment (LE):
[root@satellite ~]# hammer lifecycle-environment create --description le-eng-rhel7-std-server --prior Library --name le-eng-rhel7-std-server
Content View
Create a content view (CV):
[root@satellite ~]# hammer content-view create --description cv-rhel7-std-server --name cv-rhel7-std-server
List Repository IDs
List the repository IDs of the synchronised content for the subsequent step:
[root@satellite ~]# hammer repository list --order id | awk -F '|' '{ print $1, $2 }'
Add Repositories to Content View
Add the required repositories from the previous step into the Content View:
[root@satellite ~]# hammer content-view update --repository-ids 1,2 --name "cv-rhel7-std-server"
Publish Content to Library
Publish a new version of the Content View into the initial Library:
[root@satellite ~]# hammer content-view publish --name "cv-rhel7-std-server" --async
Promote Content into Content View
Now promote the newly published Content View from the Library into the first Lifecycle Environment:
[root@satellite ~]# hammer content-view version promote --content-view "cv-rhel7-std-server" --to-lifecycle-environment "le-eng-rhel7-std-server" --async
Create Activation Key
Create an Activation Key (AK) to be used subsequently to register newly provisioned hosts:
[root@satellite ~]# hammer activation-key create --content-view "cv-rhel7-std-server" --lifecycle-environment "le-eng-rhel7-std-server" --name "ak-eng-rhel7-std-server"
List Available Subscriptions
List the available subscriptions (those loaded into the Satellite through its manifest):
[root@satellite ~]# hammer subscription list | awk -F'|' '{ print $1, $3, $9, $10 }'
--- ------------------------------------ ---------- ---------
ID  NAME                                 QUANTITY   CONSUMED
--- ------------------------------------ ---------- ---------
1   Red Hat Satellite (Self-Supported)   1          0
2   Red Hat Enterprise Linux             1          0
--- ------------------------------------ ---------- ---------
Add Subscriptions to the Activation Key
Add the desired subscriptions to the Activation Key:
[root@satellite ~]# hammer activation-key add-subscription --name "ak-eng-rhel7-std-server" --subscription-id 2
Configure the Domain
A domain will already have been created during Satellite installation. Add this domain to the default location and organization through the UI.
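This can likely also be done with hammer rather than the UI (a sketch, assuming the taxonomy names used elsewhere in this walkthrough):
[root@satellite ~]# hammer domain update --name "example.com" --locations "loc-example" --organizations "org-example"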
Create a subnet
Create a new subnet for the network to be provisioned to:
[root@satellite ~]# hammer subnet create --name "sn-default-servers" --network "192.168.0.0" --mask "255.255.255.0" --gateway "192.168.0.1" --dns-primary "192.168.0.8" --domains example.com --locations loc-example --organizations org-example
Create a Compute Resource
Both the username@domain and domain\username formats work when creating the compute resource:
[root@satellite ~]# hammer compute-resource create --caching-enabled 1 --datacenter "dc-example" --name "cr-vcenter" --password "password" --provider "Vmware" --server "vcenter.example.com" --user 'vsphere.local\administrator' --locations "loc-example" --organizations "org-example"
[root@satellite ~]# hammer compute-resource create --caching-enabled 1 --datacenter "dc-example" --name "cr-vcenter" --password "password" --provider "Vmware" --server "vcenter.example.com" --user 'administrator@vsphere.local' --locations "loc-example" --organizations "org-example"
Create a Compute profile
Note: This can’t currently be done via hammer; see the following articles for more information:
- Satellite 6: How to create compute-profile through hammer cli?
- Bug 1278917 - RFE Commands for read operations around compute profiles and attributes
- Commands for read operations around compute profiles and attributes
In the UI select Infrastructure > Compute profile > Create Compute Profile
Select the Compute Resource (cr-vcenter) and populate the indicated fields with the following values:
Field          | Value
Name           | vmware-small
Cluster        | esxi1.example.com
Resource pool  | auto-populated ‘Resources’
Folder         | vm
Guest OS       | Red Hat Enterprise Linux 7 (64-bit)
Image          | img-rhel7-std-server
NIC type       | VMXNET3
Network        | VM Network
Storage        | See below...
Data store     | datastore1
Thin provision | uncheck
Create a Hostgroup
Note: This can be accomplished in hammer, but for a one-off it is quicker to create in the UI than to enumerate each value on the command line.
A hostgroup was created in the UI with the following values:
[root@satellite ~]# hammer hostgroup info --id 1
Id:                     1
Name:                   hg-rhel7-std-server
Title:                  hg-rhel7-std-server
Operating System:       RedHat 7.5
Subnet:                 sn-default-servers
Domain:                 example.com
Architecture:           x86_64
Puppet CA Proxy Id:
Puppet Master Proxy Id:
ComputeProfile:         vmware-small
Puppetclasses:
Parameters:
    kt_activation_keys => ak-eng-rhel7-std-server
Locations:              loc-example
Organisations:          org-example
Parent Id:
OpenSCAP Proxy:
Content View:
    ID:   2
    Name: cv-rhel7-std-server
Lifecycle Environment:
    ID:   2
    Name: le-eng-rhel7-std-server
Content Source:
    ID:   1
    Name: satellite.example.com
Kickstart Repository:
    ID:
    Name:
virt-who Configuration
Given that the target compute resource in this walkthrough is VMware, it is likely you will need to configure virt-who for VDC subscriptions.
In this example, we are installing virt-who on the Satellite and this approach is generally fine for small to medium virtual estates. For very large estates, please refer to Red Hat documentation, support or consulting.
Install virt-who
Install the virt-who package and its dependencies:
[root@satellite ~]# yum install -y virt-who
Create virt-who configuration
Create a virt-who configuration for each vCenter to be targeted:
[root@satellite ~]# hammer virt-who-config create --name vcenter.example.com --organization org-example --interval 60 --filtering-mode none --hypervisor-id hostname --hypervisor-type esx --hypervisor-server vcenter.example.com --hypervisor-username 'administrator@vsphere.local' --hypervisor-password 'password' --satellite-url satellite.example.com
Deploy the virt-who configuration
Deploy the virt-who configuration:
[root@satellite ~]# hammer virt-who-config deploy --id 1
Debugging virt-who
In the event of problems registering the vCenter, virt-who can be debugged by stopping the service and running in debug mode in the foreground.
[root@satellite ~]# systemctl stop virt-who.service
[root@satellite ~]# virt-who -d -o
Foreman Userdata Plugin Configuration
Next we install the upstream Foreman Userdata plugin appropriate for the target release.
Note: this plugin is not currently part of core Foreman, nor is it shipped by Red Hat. As a result this is NOT currently a supported solution. The plugin is on the roadmap for future inclusion.
Check Foreman Version
Check the version of Foreman on which the installed Satellite release is based:
[root@satellite ~]# rpm -q foreman
foreman-1.15.6.48-1.el7sat.noarch
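If you prefer to derive that value programmatically, something like this should work (a sketch):
[root@satellite ~]# rpm -q foreman --qf '%{VERSION}\n' | cut -d. -f1,2
1.15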
Note: the 1.15 in foreman-1.15.6.48-1 is the important value for the subsequent step.
Enable Foreman Plugin Repository
Create a Yum repository configuration stub. For Satellite 6.3:
[root@satellite ~]# cat > /etc/yum.repos.d/foreman-plugins.repo <<EOF
[foreman-plugins]
name=Foreman plugins
baseurl=http://yum.theforeman.org/plugins/1.15/el7/x86_64/
enabled=1
gpgcheck=0
EOF
For Satellite 6.4:
[root@satellite ~]# cat > /etc/yum.repos.d/foreman-plugins.repo <<EOF
[foreman-plugins]
name=Foreman plugins
baseurl=http://yum.theforeman.org/plugins/1.18/el7/x86_64/
enabled=1
gpgcheck=0
EOF
For Satellite 6.5:
[root@satellite ~]# cat > /etc/yum.repos.d/foreman-plugins.repo <<EOF
[foreman-plugins]
name=Foreman plugins
baseurl=http://yum.theforeman.org/plugins/1.20/el7/x86_64/
enabled=1
gpgcheck=0
EOF
For Satellite 6.6:
[root@satellite ~]# cat > /etc/yum.repos.d/foreman-plugins.repo <<EOF
[foreman-plugins]
name=Foreman plugins
baseurl=http://yum.theforeman.org/plugins/1.22/el7/x86_64/
enabled=1
gpgcheck=0
EOF
The plugin code has been merged into core functionality in Foreman 1.23, so future Satellite releases based on that version will contain the feature out of the box and this step can be skipped completely.
Install the Foreman Userdata plugin
Now we can 'yum install' the foreman_userdata plugin from the upstream repository:
[root@satellite ~]# yum -y install tfm-rubygem-foreman_userdata
Restart Satellite
Restart the Satellite services to pick up the newly installed plugin.
[root@satellite ~]# katello-service restart
Configure Provisioning Templates
Now we need to create and load some additional provisioning templates to be used by the userdata injection and cloud-init client.
Create provisioning template files
vmware-cloud-init-template
Create a cloud-init template file:
[root@satellite ~]# cat > ~/vmware-cloud-init-template.erb <<EOF
#cloud-config
hostname: <%= @host.name %>
fqdn: <%= @host %>
manage_etc_hosts: true
users: {}
runcmd:
  - touch ~/cloud-init
phone_home:
  url: <%= foreman_url('built') %>
  post: []
  tries: 10
EOF
Note: the ‘#cloud-config’ line is required in the cloud-init template, similar to a magic number or shebang.
Note: the runcmd illustrated here is merely a simple test to ensure cloud-init is functioning. More complex logic can be used in the template for registration, subscription management or other bespoke callback or finish tasks.
vmware-userdata-template
Create a userdata template file:
[root@satellite ~]# cat > ~/vmware-userdata-template.erb <<EOF
# Template for VMWare customization via open-vm-tools
identity:
  LinuxPrep:
    domain: <%= @host.domain %>
    hostName: <%= @host.shortname %>
    hwClockUTC: true
    timeZone: <%= @host.params['time-zone'] || 'UTC' %>
globalIPSettings:
  dnsSuffixList: [<%= @host.domain %>]
  <%- @host.interfaces.each do |interface| -%>
  <%- next unless interface.subnet -%>
  dnsServerList: [<%= interface.subnet.dns_primary %>, <%= interface.subnet.dns_secondary %>]
  <%- end -%>
nicSettingMap:
  <%- @host.interfaces.each do |interface| -%>
  <%- next unless interface.subnet -%>
  - adapter:
      dnsDomain: <%= interface.domain %>
      dnsServerList: [<%= interface.subnet.dns_primary %>, <%= interface.subnet.dns_secondary %>]
      gateway: [<%= interface.subnet.gateway %>]
      ip: <%= interface.ip %>
      subnetMask: <%= interface.subnet.mask %>
  <%- end -%>
EOF
Install provisioning templates
Now install the newly created templates:
[root@satellite ~]# hammer template create --name vmware-cloud-init --file ~/vmware-cloud-init-template.erb --locations loc-example --organizations org-example --operatingsystem-ids 1 --type cloud-init
[root@satellite ~]# hammer template create --name vmware-userdata --file ~/vmware-userdata-template.erb --locations loc-example --organizations org-example --operatingsystem-ids 1 --type user_data
Configure Templates in the Operating System
Now that the templates are created and loaded into the Satellite, we can update the Operating System definition to use the desired templates during provisioning requests.
In the UI, switch the Red Hat 7.5 Operating System to use these templates:
Go to Hosts > Operating systems > Red Hat 7.5 > Templates
- User data template: vmware-userdata
- Cloud-init template: vmware-cloud-init
Template Preparation
Create Empty Virtual Machine
In vCenter, create an empty RHEL7 VM container with the following properties:
- 1 vCPU
- 2GB vMem
- 16GB VMDK
Upload a Red Hat Enterprise Linux ISO
Download rhel-server-7.5-x86_64-dvd.iso from the Red Hat Customer Portal (https://access.redhat.com/downloads), then upload it into vCenter to a datastore ‘isos’ folder.
Note: log in to the ESXi host directly in a browser to accept its self-signed certificate or ISO upload won’t work.
Install Red Hat Enterprise Linux
Install RHEL7 minimal from the ISO; consult the Red Hat Enterprise Linux 7 documentation for more details.
Virtual Machine Network Configuration
The virtual machine will require some temporary network identity settings in order to prepare the template. Tweak interface connection name and DNS as appropriate for the environment:
[root@localhost ~]# nmcli con modify 'System ens192' connection.id ens192
[root@localhost ~]# nmcli con mod ens192 ipv4.method auto
[root@localhost ~]# nmcli con edit ens192
nmcli> remove ipv4.dns
nmcli> set ipv4.ignore-auto-dns yes
nmcli> set ipv4.dns 192.168.0.2
nmcli> save
nmcli> quit
[root@localhost ~]# nmcli device reapply ens192
[root@localhost ~]# hostnamectl set-hostname "localhost.localdomain"
Subscribe to Satellite
Now we need to temporarily subscribe to the Satellite for some template prerequisite packages and a few other bits and pieces:
[root@localhost ~]# rpm -ivh http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm
[root@localhost ~]# subscription-manager register --org=org-example --activationkey=ak-eng-rhel7-std-server
Drop in SSH Keys
This is an opportune moment to drop in any SSH public keys if desired:
[root@localhost ~]# mkdir ~/.ssh
[root@localhost ~]# chmod 700 ~/.ssh
[root@localhost ~]# cat > ~/.ssh/authorized_keys <<EOF
ssh-rsa AAAABlongkeyfingerprint will@example.com
EOF
[root@localhost ~]# chmod 600 ~/.ssh/authorized_keys
Note: keys can also be injected into the template during provisioning time via cloud-init.
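For example, the empty users: {} stanza in the vmware-cloud-init-template.erb created earlier could hypothetically be replaced with a block like this (standard cloud-init syntax):
users:
  - name: root
    ssh_authorized_keys:
      - ssh-rsa AAAABlongkeyfingerprint will@example.com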
Install open-vm-tools
Install open-vm-tools to ensure the userdata customisation specification is properly handled:
[root@localhost ~]# yum -y install open-vm-tools.x86_64
Install Perl
Perl is also required for open-vm-tools to correctly parse and handle the customisation specification injection at provisioning time:
[root@localhost ~]# yum -y install perl
Yum Update Virtual Machine
While the VM template is temporarily registered to the Satellite, bringing it up to the latest patch level will reduce the size of future updates:
[root@localhost ~]# yum -y update
Note: At this stage, it can be wise to power off and create a copy of the template without cloud-init. cloud-init can be quite invasive; if the template preparation requires multiple iterations, this is a good place to restart from.
Install cloud-init
Power the virtual machine back on and install cloud-init:
[root@localhost ~]# yum -y install cloud-init
Configure cloud-init to Skip Networking
Drop in a configuration stub to instruct cloud-init to skip network configuration:
[root@localhost ~]# cat << EOF > /etc/cloud/cloud.cfg.d/01_network.cfg
network:
  config: disabled
EOF
Note: Satellite will do this via the VMware customisation specification.
Configure cloud-init to Callback to Satellite Post-build
Drop in a configuration stub to instruct cloud-init to call back to Satellite post-provision. This informs Satellite of the build's completion, and allows additional post-provisioning behaviour to be injected via cloud-init configuration (in the form of the cloud-init provisioning template).
[root@localhost ~]# cat << EOF > /etc/cloud/cloud.cfg.d/10_foreman.cfg
datasource_list: [NoCloud]
datasource:
  NoCloud:
    seedfrom: http://satellite.example.com/userdata/
EOF
Replace Default cloud-init Configuration
The default cloud-init configuration may be considered quite invasive by the standards of some environments (YMMV). Back up the default cloud-init configuration and replace it with a more minimal configuration that does just what we require:
[root@localhost ~]# cp /etc/cloud/cloud.cfg ~/cloud.cfg.`date -I`
[root@localhost ~]# cat << EOF > /etc/cloud/cloud.cfg
cloud_init_modules:
  - bootcmd
cloud_config_modules:
  - runcmd
cloud_final_modules:
  - scripts-per-once
  - scripts-per-boot
  - scripts-per-instance
  - scripts-user
  - phone-home
system_info:
  distro: rhel
  paths:
    cloud_dir: /var/lib/cloud
    templates_dir: /etc/cloud/templates
  ssh_svcname: sshd
# vim:syntax=yaml
EOF
Unregister Virtual Machine from Satellite
Now that the template is fully prepared, bar the clean/sysprep phase, we can unregister it from the Satellite:
[root@localhost ~]# subscription-manager unregister
Unregistering from: satellite.example.com:443/rhsm
System has been unregistered.
[root@localhost ~]# subscription-manager clean
All local data removed
Create Clean Script
Now we drop in a simple clean/sysprep script to remove the virtual machine identity prior to converting back to a template:
[root@localhost ~]# cat > ~/clean.sh <<EOF
#!/bin/bash

# stop logging services
/usr/bin/systemctl stop rsyslog
/usr/bin/systemctl stop auditd

# remove old kernels
# /bin/package-cleanup -oldkernels -count=1

# clean yum cache
/usr/bin/yum clean all

# force logrotate to shrink logspace and remove old logs as well as truncate logs
/usr/sbin/logrotate -f /etc/logrotate.conf
/bin/rm -f /var/log/*-???????? /var/log/*.gz
/bin/rm -f /var/log/dmesg.old
/bin/rm -rf /var/log/anaconda
/bin/cat /dev/null > /var/log/audit/audit.log
/bin/cat /dev/null > /var/log/wtmp
/bin/cat /dev/null > /var/log/lastlog
/bin/cat /dev/null > /var/log/grubby

# remove udev hardware rules
/bin/rm -f /etc/udev/rules.d/70*

# remove uuid from ifcfg scripts
/bin/cat > /etc/sysconfig/network-scripts/ifcfg-ens192 <<EOM
DEVICE=ens192
ONBOOT=yes
EOM

# remove SSH host keys
/bin/rm -f /etc/ssh/*key*

# remove root user's shell history
/bin/rm -f ~root/.bash_history
unset HISTFILE

# remove root user's SSH known hosts
/bin/rm -rf ~root/.ssh/known_hosts
EOF
Run Clean Script
Clean the system:
[root@localhost ~]# sh ~/clean.sh
Power Off Virtual Machine
Power off the VM:
[root@localhost ~]# systemctl poweroff
Create Virtual Machine Template
Now convert the VM to a template in vCenter, named ‘template-rhel7-cloudinit’.
Finalising Satellite Preparation
Now we have our Satellite and virtual machine template prepared, there's just one final step before moving to testing the provisioning flow.
List Operating System, Architecture and Compute Resource
List the Operating System, Architecture and Compute Resource configuration in Satellite to identify the corresponding object IDs:
[root@satellite ~]# hammer os list; hammer architecture list; hammer compute-resource list
---|------------|--------------|-------
ID | TITLE      | RELEASE NAME | FAMILY
---|------------|--------------|-------
1  | RedHat 7.5 |              | Redhat
---|------------|--------------|-------
---|-------
ID | NAME
---|-------
1  | x86_64
2  | i386
---|-------
---|------------|---------
ID | NAME       | PROVIDER
---|------------|---------
1  | cr-vcenter | VMware
---|------------|---------
Create Image in Satellite
Now we create an Image in Satellite, linking the template in vCenter with the required resources in Satellite:
[root@satellite ~]# hammer compute-resource image create --operatingsystem-id 1 --architecture-id 1 --compute-resource-id 1 --user-data true --uuid template-rhel7-cloudinit --username root --name img-rhel7-std-server
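You can verify the image is in place with (a sketch):
[root@satellite ~]# hammer compute-resource image list --compute-resource cr-vcenter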
Test Guest Provisioning
We should now be in a position to test provisioning of guest VMs via the Satellite UI or hammer.
For example:
[root@satellite ~]# hammer host create --name test-host --organization org-example --location loc-example --hostgroup hg-rhel7-std-server --compute-resource cr-vcenter --provision-method image --image img-rhel7-std-server --enabled true --managed true --compute-profile vmware-small --operatingsystem-id 1 --compute-attributes="start=1" --interface "managed=true,compute_type=VirtualVmxnet3,type=interface,domain_id=1,identifier=ens192,ip=192.168.0.50,subnet_id=1,primary=true,compute_network='VM Network'"
Next Steps
Once provisioning works, guests are being cloned, allotted the correct hostname, domain and IP, and the ~/cloud-init file is being created, the cloud-init provisioning template can be extended to perform any other post/finish tasks as required. For example:
- Installing the katello-ca-consumer RPM from the appropriate content host
- Registering with a specific activation key
- Installing the katello-agent
Once host provisioning works for a single host, it’s relatively trivial to scale this up to 10s or 100s of hosts.
Even in a small, nested environment, the end-to-end process takes just a couple of minutes per VM at most. However, care should be taken with concurrent host registration, so it may be wise to batch bulk provisioning into groups of 10 or 20 VMs and wait for the completion of each batch, as in the sketch below.
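A rough sketch of such a batch, reusing the hammer host create invocation from above with hypothetical names and sequential IPs:
#!/bin/bash
# Provision a batch of 20 hosts in parallel, then wait for all
# hammer calls to return before starting the next batch.
for i in $(seq 1 20); do
  hammer host create --name "test-host-${i}" \
    --organization org-example --location loc-example \
    --hostgroup hg-rhel7-std-server --compute-resource cr-vcenter \
    --provision-method image --image img-rhel7-std-server \
    --enabled true --managed true --compute-profile vmware-small \
    --operatingsystem-id 1 --compute-attributes="start=1" \
    --interface "managed=true,compute_type=VirtualVmxnet3,type=interface,domain_id=1,identifier=ens192,ip=192.168.0.$((50 + i)),subnet_id=1,primary=true,compute_network='VM Network'" &
done
wait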
References
Red Hat Documentation
- https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html-single/hammer_cli_guide/index#sect-CLI_Guide-Authentication
- https://access.redhat.com/documentation/en-us/red_hat_satellite/6.3/html/installation_guide/installing_satellite_server#performing_initial_configuration_sat_server_parent
- https://access.redhat.com/documentation/en-us/red_hat_satellite/6.3/
- https://access.redhat.com/documentation/en-us/red_hat_satellite/6.3/html/virtual_instances_guide/
Useful Links
- https://translate.google.co.uk/translate?hl=en&sl=fr&u=https://www.mchelgham.com/installer-vcsa-6-7-dans-vmware-workstation-fusion/&prev=search
- https://www.virten.net/2017/02/homelab-downsizing-vcenter-server-appliance-6-5/
- https://www.starwindsoftware.com/blog/vmware-vcenter-server-appliance-vcsa-and-after-install-tricks
- http://everything-virtual.com/2016/05/06/creating-a-centos-7-2-vmware-gold-template/
Posted: 2018-10-09T08:00:00+00:00
Red Hat Satellite 6.4 Beta is now available
We are pleased to announce that Red Hat Satellite 6.4 is now available in beta to current Satellite customers.
Red Hat Satellite is an infrastructure management platform, designed to manage system patching, provisioning, configurations and subscriptions across the entirety of a Red Hat environment. Satellite offers a lifecycle management solution to help keep your Red Hat infrastructure running efficiently and with greater security, which can reduce costs and overall environmental complexity.
The Satellite 6.4 beta is focused on embedding Ansible Automation for remote execution and desired state management, as well as continued enhancements to the usability of the Satellite user interface.
Major features that we are asking customers to review as they test the beta are:
- Ansible integration for remote execution
- Significant user interface changes
- New vertical navigation
- New repositories page
- Manifest editing from within the Satellite interface
- Notification drawer enhancements
- Enhanced auditing of user events
- Load balanced capsules
Note: Puppet 5 Required
- If upgrading, you must be at Satellite 6.3 and Puppet 4 before starting the upgrade process.
- Refer to the blog: What you need to know to be ready for Satellite 6.4 and Puppet 5 for more information.
Customers with active Red Hat Satellite subscriptions can test out the new features in Satellite 6.4 beta now by signing up for the beta.
You can also refer to the Red Hat Satellite 6.4 Beta FAQ as well as the Satellite 6.4 Navigation Guide for additional information.
If you would like to see Satellite 6.4 in action, check out the Find It. Fix It. Before It Breaks. video available on YouTube that shows Satellite 6.4 discovering potential risks with Red Hat Insights and fixing them with embedded Ansible Automation.
Posted: 2018-09-05T12:55:50+00:00
Satellite 6.3.3 is now available
Satellite 6.3.3 has just been released.
The main driver for the 6.3.3 release is ongoing performance and stability improvements.
There are 24 bugs squashed in this release - the complete list is below.
The most notable issue is a critical Pulp maintenance routine that never executed, which is now resolved with this update. 6.3.3 adds a weekly cron schedule to ensure execution of the maintenance job. For customers with large numbers of content hosts (10,000+), there may be a noticeable reduction in disk consumption in /var/lib/pulp after this maintenance task executes.
If you wish to find out how to reclaim this space, please open a support ticket for guidance.
There is one erratum for the server [1] and one for the hosts [2]. The install ISOs will be updated next week.
Customers who have already upgraded to Satellite 6.3 should follow the instructions in the errata.
Customers who are on older versions of Satellite should refer to the Upgrading and Updating Red Hat Satellite Guide. You may also want to consider using the Satellite Upgrade Helper if moving from Satellite 6.x to Satellite 6.3.
Customers who have received hotfixes should verify the list below to ensure their hotfix is contained in the release before upgrading. Please reach out to Red Hat Support in these cases.
This update fixes the following bugs:
- There was a critical Pulp maintenance routine that never executed. This update adds a weekly cron schedule to ensure execution. (BZ#1609928)
- The Pulp regenerate applicability of consumer tasks took several minutes to complete. This update improves execution times. (BZ#1573892)
- With a large installation of around 20,000 content hosts, the Applicable Package List froze and the action consumed more resources than was necessary. (BZ#1608597)
- Uploading a package to a repository produced the error: NoMethodError: undefined method `to_hash'. (BZ#1583545)
- Host registration failed with the error: "Validation failed: Host has already been taken". (BZ#1605188)
- During a Content View publish, the Puppet module completed publishing before yum, causing provisioning problems. (BZ#1579381)
- The foreman-rake foreman_tasks:cleanup scripts did not remove tasks from foreman_tasks_locks table. (BZ#1585890)
- Docker registry sync did not use an HTTP proxy configuration, which left no route to the host for new Docker registries within a network. (BZ#1333595)
- When a custom certificate had non-unicode characters, users received UnicodeEncodeError during the CapsuleGenerateAndSync task. (BZ#1449418)
- The katello-restore script was not restoring full incremental backups. (BZ#1497858)
- While running katello-backup with a cron job, sudo required a tty. (BZ#1540382)
- Copying the activation key resulted in only a partial copy, the missing configuration information then caused failure with "undefined method" and Internal Server Error. (BZ#1553338)
- Trying to update the host produced an error: ERF42-6324 [Foreman::Exception]: Could not find network dvportgroup-56 on VMWare compute resource. (BZ#1557613)
- Running bootstrap script to migrate hosts failed with error: Content view cannot be blank. (BZ#1559703)
- The volume of canceljob logging was reduced. (BZ#1563225)
- Users experienced Tomcat java.lang.OutOfMemoryError: GC overhead limit exceeded. (BZ#1572604)
- Cannot edit user photos when using LDAP datasource. (BZ#1573243)
- Virtual Datacenter subscription was unavailable to hosts registering again. (BZ#1575056)
- Failed to delete hosts when the host name was missing from the Candlepin database. (BZ#1575140)
- The action history of Content Views was missing after upgrading to 6.3. (BZ#1575999)
- Exclude Satellite FQDN and localhost from possible proxying when a user set the foreman http proxy. (BZ#1585069)
- Qpidd deadlocked because of linearstore inactivity timer. (BZ#1588015)
- The foreman-rake katello:regenerate_repo_metadata command failed with "NoMethodError: undefined method `in_default_view' for #". (BZ#1589646)
[1] https://access.redhat.com/errata/RHBA-2018:2550
[2] https://access.redhat.com/errata/RHBA-2018:2551
Satellite Migration from RHEL 6 to RHEL 7
As a reminder, Red Hat continues to strongly recommend your Satellite and Capsule Servers only be run on RHEL 7. There are several reasons why you should move your Satellite environment from RHEL 6 to RHEL 7 including enhanced performance and long-term supportability.
Future releases of Satellite (6.3 and above) will only support RHEL 7 and above. In preparation for newer versions of Satellite, you need to start thinking about how to move from older versions of RHEL to RHEL 7.
While RHEL 6 does support an in-place migration from RHEL 6 to RHEL 7, this migration mechanism is not supported when running Satellite on the RHEL host. Instead, you will need to clone your Satellite environment from a host running RHEL 6 to another host running RHEL 7.
Review the Satellite 6.2.13 release blog for more detailed information about moving your Satellite environment from RHEL 6 to RHEL 7. 6.2.13 includes some important features for capsule backup and recovery which help to ease the movement from RHEL 6 to RHEL 7.
Posted: 2018-08-23T00:06:45+00:00
What you need to know to be ready for Satellite 6.4 and Puppet 5
As we work towards a Satellite 6.4 release this fall, there are some very important Puppet changes coming that the Satellite team wants to prepare you for.
Note: This affects ALL Satellite 6.3 users, even if you are not using Puppet or if you are using Puppet Enterprise.
The last few releases of Satellite have supported Puppet 3.8, a version which has been end-of-life since December 31, 2016.
Satellite 6.3 introduced support for Puppet 4, but since there were some major changes on the Puppet side between Puppet 3.8 and Puppet 4, Satellite 6.3 supported both versions.
But Puppet 4 is also rapidly approaching its end-of-life date, expected around October 2018. The upcoming release of Satellite 6.4 will only support Puppet 5.
This is because the other versions that we mentioned, Puppet 3.8 and Puppet 4, will both be end-of-life (or near end-of-life) by the time of the Satellite 6.4 release date.
In order to upgrade to Satellite 6.4 from Satellite 6.3, you must first make sure that you have upgraded your Satellite 6.3 environment to Puppet 4. If you try to upgrade from Satellite 6.3 with Puppet 3.8, the installer is expected to fail in order to ensure that the Puppet upgrade is also successful. This is true even if you don't actively use Puppet.
For customers that have upgraded to Satellite 6.3, Puppet is not automatically upgraded to Puppet 4. This is because of the significant changes that Puppet has made between releases.
If you are a Puppet Enterprise user, or you do not leverage the Puppet capabilities included with Satellite, this still impacts you too.
However, it should just be a simple upgrade of the Puppet that is built into Satellite, and should not affect your Puppet Enterprise operations.
Once you have made sure any Puppet scripts will work as expected, the upgrade itself from Puppet 3.8 to Puppet 4 is quite easy. Enable the Puppet 4 repos, then run the "satellite-installer" command with the "--upgrade-puppet" option. Full instructions are below.
If you haven't already started moving to Satellite 6.3, the feedback has been fantastic. It is a great release, and if you are running an older version of Satellite 6 we would encourage you to upgrade to Satellite 6.3 soon. While you're at it, upgrade to Puppet 4.
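For reference, those two steps look roughly like this on a Satellite 6.3 host (a sketch; the repository ID shown is an assumption for RHEL 7 and may differ in your environment, so consult the guide linked below):
# subscription-manager repos --enable=rhel-7-server-satellite-6.3-puppet4-rpms
# satellite-installer --upgrade-puppet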
If you aren't leveraging Puppet in Satellite, then this shouldn't concern you much and it should be an easy upgrade (but you still need to do it).
However, if you are a heavy Puppet user, and are using the Puppet shipped as part of Satellite, it should be noted that Puppet changed significantly between versions 3.8 and 4. As a result, you should plan to test any of your Puppet based modules to be sure that they work properly after the upgrade.
Moving from Puppet 4 to 5 is much simpler and should require few changes to any of your modules.
Here are some resources that we thought would be particularly helpful when planning the upgrade from Puppet 3.8 to Puppet 4.
- Upgrading Puppet Section of the Upgrading and Updating Red Hat Satellite 6.3 Guide.
- Puppet 3.x to 4.x: Upgrade Puppet Server and PuppetDB
- Puppet 3.8.x to 5.x: Get upgrade-ready
Hopefully, these resources will help to get your Satellite 6.3 environment updated to Puppet 4.
Once you are at Satellite 6.3 and Puppet 4 you will be ready to upgrade to Satellite 6.4 and Puppet 5 when it is released this fall.
Posted: 2018-06-26T22:49:49+00:00
Satellite 6.3.2 is now available
Satellite 6.3.2 has just been released.
The main driver for the 6.3.2 release is allowing customers to disable weak ciphers, but there are several other new features and fixes.
There are two errata for the server [1][3] and one for the hosts [2]. The install ISOs will be updated later this week.
Customers who have already upgraded to Satellite 6.3 should follow the instructions in the errata.
Customers who are on older versions of Satellite should refer to the Upgrading and Updating Red Hat Satellite Guide. You may also want to consider using the Satellite Upgrade Helper if moving from Satellite 6.x to Satellite 6.3.
Customers who have received hotfixes should verify the list below to ensure their hotfix is contained in the release before upgrading. Please reach out to Red Hat Support in these cases.
This update fixes the following bugs:
- Users can now disable weak ciphers across Satellite 6's series of services and restrict to only TLS 1.2. For more information, see How to disable weak encryption (SSL 2.0 and SSL 3.0) on Red Hat Satellite: https://access.redhat.com/solutions/26833 (BZ#1553875, BZ#1318973)
  NOTE: If you are still using RHEL 5 servers, note that restricting communications to only TLS 1.2 will prevent RHEL 5 from communicating with Satellite. Refer to Known Issues and Attacks Against SSL/TLS in OpenSSL/NSS/gnutls on Red Hat Enterprise Linux 5 for full details about supported TLS versions with RHEL 5.
- The foreman-debug command now collects additional information to help with debugging, including passenger statistics. (BZ#1540493, BZ#1558545)
- Satellite 6.3 could not sync containers from the Google or Quay registry because of differences in version 2 of the API. Satellite now supports these differences. (BZ#1555165)
- Performance improvements, which arose from experiences with customer upgrades, are now available for the UI and CLI. (BZ#1511503, BZ#1567978, BZ#1553263, BZ#1560740, BZ#1563002, BZ#1575113)
- Certain future-dated subscriptions could not be enabled based on the products they enable. This is now fixed. (BZ#1553264)
- Authentication for OpenStack v3 was failing because Satellite did not use the domain field. This is now fixed. (BZ#1513932)
- Problems that relate to migrating and upgrading, for example the content_source_id not migrating and template history disappearing, are now fixed. (BZ#1584874, BZ#1559108)
- Intermittent segmentation faults in the Pulp stack are now fixed. (BZ#1516481)
- Provisioning templates in RHEV 3.6 ignored the disk settings specified in the template. This is now fixed. (BZ#1399102)
- The boot disk provisioning option was omitted from the hammer host create command. This is now fixed. (BZ#1544498)
- Remote execution failed on host collections. This is now fixed. (BZ#1553017)
- Intermittent segmentation faults in the qpid stack are now fixed. (BZ#1561819)
- The initial remote execution command after a restart failed with the error message: "Could not use any Capsule". This is now fixed. (BZ#1558069)
- Pulp workers became deadlocked when the PULP_MAX_TASKS_PER_CHILD setting was enabled. These workers now reconnect correctly. (BZ#1590906)
[1] https://access.redhat.com/errata/RHBA-2018:1950
[2] https://access.redhat.com/errata/RHBA-2018:1951
[3] https://access.redhat.com/errata/RHBA-2018:1956
Satellite Migration from RHEL 6 to RHEL 7
As a reminder, Red Hat continues to strongly recommend your Satellite and Capsule Servers only be run on RHEL 7. There are several reasons why you should move your Satellite environment from RHEL 6 to RHEL 7 including enhanced performance and long-term supportability.
Future releases of Satellite (6.3 and above) will only support RHEL 7 and above. In preparation for newer versions of Satellite, you need to start thinking about how to move from older versions of RHEL to RHEL 7.
While RHEL 6 does support an in-place migration from RHEL 6 to RHEL 7, this migration mechanism is not supported when running Satellite on the RHEL host. Instead, you will need to clone your Satellite environment from a host running RHEL 6 to another host running RHEL 7.
Review the Satellite 6.2.13 release blog for more detailed information about moving your Satellite environment from RHEL 6 to RHEL 7. 6.2.13 includes some important features for capsule backup and recovery which help to ease the movement from RHEL 6 to RHEL 7.
Posted: 2018-06-20T07:04:16+00:00
Satellite 6.2.15 is now available
Red Hat Satellite 6.2.15 includes bug fixes for improving the performance of Satellite 6.2.x.
There is one erratum for the server [1] and one for the hosts [2].
ISOs should be published next week.
Customers who have already upgraded to 6.2 should follow the instructions in the errata. Customers who are on 6.1.x should follow the upgrade instructions in the Satellite 6.2 Installation Guide. Customers who have received hotfixes should verify the list below to ensure their hotfix is contained in the release before upgrading. Please reach out to Red Hat Support in these cases.
Fixes included in 6.2.15
This update fixes the following bugs:
- TLS 1.2 is now enabled by default on Satellite 6.2. (BZ#1331041, BZ#1548093)
- Performance problems with registration, such as concurrent registrations and task blocking, have been improved based on work with larger customers. (BZ#1490019, BZ#1514508, BZ#1553845)
- A new dispatch router has been released to fix a segmentation fault caused by long running use. (BZ#1535891)
- Performance problems with the katello-errata query have been improved and backported to Satellite 6.2. (BZ#1518804)
- Performance problems with memory consumption when searching for an unqualified hostgroup have been improved and backported to Satellite 6.2. (BZ#1547986)
- Performance problems with the subscription page when the user has a large volume of hosts have been improved and backported to Satellite 6.2. (BZ#1551674)
- Restarting a pulp worker with running tasks caused the tasks to be in a broken state. This problem has been fixed. In such cases, tasks are stopped with warning "Task cancelled". This fix has been backported to Satellite 6.2. (BZ#1526437)
- Synchronization tasks that were interrupted by Satellite restarting were not completing. This is now fixed and has been backported to Satellite 6.2. (BZ#1548167)
- The foreman-debug tool has been updated to collect the foreman-maintain logs. This improves the ability to analyze problems during upgrades. (BZ#1542407)
- The katello-backup tool can now use relative paths. (BZ#1544396)
- Users received errors when they created hosts using hammer with two network interfaces if eth0 was not the primary interface. This is now fixed. (BZ#1417053)
- Users were not able to manage their own taxonomies. This functionality has been enabled. (BZ#1476843)
- Custom products created for RHEL5 were not using the correct checksum type. SHA1 is now used for these repositories. (BZ#1480694)
Users of Red Hat Satellite are advised to upgrade to these updated packages, which fix these bugs.
[1] https://access.redhat.com/errata/RHBA-2018:1672
[2] https://access.redhat.com/errata/RHBA-2018:1673
Satellite Migration from RHEL 6 to RHEL 7
As a reminder, Red Hat continues to strongly recommend your Satellite and Capsule Servers only be run on RHEL 7. There are several reasons why you should move your Satellite environment from RHEL 6 to RHEL 7 including enhanced performance and long-term supportability.
Future releases of Satellite (6.3 and above) will only support RHEL 7 and above.
In preparation for newer versions of Satellite, you need to start thinking about how to move from older versions of RHEL to RHEL 7.
While RHEL 6 does support an in-place migration from RHEL 6 to RHEL 7, this migration mechanism is not supported when running Satellite on the RHEL host. Instead, you will need to clone your Satellite environment from a host running RHEL 6 to another host running RHEL 7.
Review the Satellite 6.2.13 release blog for more detailed information about moving your Satellite environment from RHEL 6 to RHEL 7. 6.2.13 includes some important features for capsule backup and recovery which help to ease the movement from RHEL 6 to RHEL 7.
Posted: 2018-05-23T11:23:32+00:00
Go West, [not so] young Spinks: One Satellite member’s guide to Red Hat Summit 2018
Greetings!
I’m John Spinks, Technical Marketing Manager for Satellite.
While I’m relatively new to Red Hat, I get to work with Red Hat Satellite engineers and customers every day. Next week is my first Red Hat Summit, so I’m excited to get to see so many of both in one place.
Not only is this my first Summit as an attendee, I’m honored to say that this will also be my first time at Summit as a speaker. Brent Midwood and I will be presenting the session: Live Demonstration: Find it. Fix it. Before it breaks. That has of course captured a lot of my interest of late, but I’m also looking forward to attending some other breakout sessions about how Satellite enables customers and businesses to meet their goals.
Breakout Sessions
Summit has a packed and exciting agenda, so I’ve highlighted a few of the sessions, with their descriptions from the Summit agenda, that I am most interested in attending and that I think will be of great interest to users of Red Hat Satellite:
Live demonstration: Find it. Fix it. Before it breaks.
A common saying is, "If it ain’t broke, don't fix it." But what if it is broken and you just don't know it? Waiting for something to break means late nights, lunches at your desk, or waiting for a maintenance window to fix issues. However, there are ways for you to automate how you predict, resolve, and manage issues before they cause you headaches. Read more.
How Walmart uses systems management tools to manage its massive IT operation at scale
To power the largest retailer in the world, Walmart’s IT operation must manage systems at unprecedented scale. Walmart’s IT leaders will share how they use various automation tools, including Red Hat Satellite, to give them the capacity they need. Darin Lively, Staff Systems Engineer, and Brian Ameling, Staff Systems Engineer, will offer a look into Walmart’s complex architecture and… Read more.
A problem's not a problem, until it's a problem (Red Hat Insights)
Find out how, using predictive analytics from Red Hat Insights, you can assess the impact of a potential system issue and the likelihood of it becoming a problem. Read more.
Red Hat Management roadmap and strategy
Attend this session to learn the latest on the strategy and roadmap for Red Hat Management, including details on Red Hat Ansible Automation, Red Hat CloudForms, Red Hat Insights, and Red Hat Satellite. Read more.
If you’re going to Red Hat Summit next week, I encourage you to add these sessions to your agenda now, before they fill up!
Automation and Management Booth
As much as possible I’ll be spending time in the Automation & Management pod in the Red Hat booth during the Ecosystem Expo hours. Please stop by to say “Hello” to me or to any of our Satellite experts and see some demos of Red Hat Satellite 6.3 and some sneak peeks into what’s coming next.
For those of you fortunate enough to go - safe travels and hope to see you there!
If you can’t make it, I hear that content (slides and recordings) will be posted after Summit at http://redhat.com/summit
John Spinks
@jbspinks
Posted: 2018-05-02T13:10:32+00:00
Satellite 5 and RHN End of Life - Making sure that you are only connected to RHSM.
Have you completed your migration from Satellite 5 to Satellite 5.8, but you keep getting messages from us about upgrading before January 31, 2019?
It could be that your systems are still registered with Red Hat Network (RHN), even if you have moved to a newer version.
Let's walk through a couple steps to show you how you can check and see if you are registered with RHN or Red Hat Subscription Manager (RHSM).
I moved to Satellite 6 - Does this affect me?
If you have moved off of Satellite 5 to Satellite 6, and you have shut down all Satellite 5 systems, then this article does not impact you. Satellite 6 cannot be registered to RHN. However, if you still have Satellite 5 systems in your environment please verify that they have been upgraded to Satellite 5.8 and have moved to RHSM.
Why are these steps important?
As of January 31, 2019, the RHN connector that Satellite 5.7 and older use to get content will be completely shut down. After this date NO CONTENT will be available through RHN. Both system-level updates and channel syncing will be stopped as a result.
Satellite 5.8 and Satellite 6 use RHSM instead of RHN, so this shutdown does not impact those versions.
What happens to my Satellite 5.7 or older server after January 31, 2019?
Red Hat STRONGLY recommends an upgrade to Satellite 5.8 prior to 31-Jan-2019. However, if the Satellite is not upgraded,
You WILL
- Be able to use the Satellite 5.7 (or lower) with previously downloaded content.
- Be able to receive updates for the underlying OS (if registered via RHSM)
- Receive support from CEE to upgrade to 5.8.
You WILL NOT
- Receive updates for (and be able to synchronize) any channels previously synced via Satellite 5.
- Receive updates for the underlying OS (if registered via RHN)
- Receive help for anything else Satellite related other than upgrading to Satellite 5.8.
I have not moved to Satellite 5.8 OR I have installed Satellite 5.8 but have not migrated to RHSM. What do I do?
If you are looking to start the process of migrating from RHN to RHSM, follow the instructions in the article: "Preparing Satellite 5 systems for Red Hat Network's End of Life."
How could I be registered to both RHN and RHSM?
If you had a Satellite system that was registered with RHN, and you upgraded your environment to Satellite 5.8 and manually registered with RHSM, then you might be double registered.
However, if you used the rhn-migrate-classic-to-rhsm command, then you should have been properly registered with RHSM and removed from RHN.
How can I check if I am double registered to both RHN and RHSM?
Verify that you are connected to RHSM
The first thing you should do is verify that your subscription-manager status is current:
# subscription-manager status
+-------------------------------------------+
   System Status Details
+-------------------------------------------+
Overall Status: Current
Next, look at the list of consumed subscriptions to verify that you have a Red Hat Satellite subscription attached to the system:
# subscription-manager list --consumed
+-------------------------------------------+
   Consumed Subscriptions
+-------------------------------------------+
Subscription Name: Red Hat Satellite
Provides:          Red Hat Satellite
                   Red Hat Satellite Capsule
Verify that you are NOT connected to RHN
To confirm that the system is not also registered to RHN as well as RHSM, use the yum repolist command to verify that the rhnplugin is NOT present:
# yum repolist
Loaded plugins: product-id, search-disabled-repos, security, subscription-manager
repo id                            repo name                                          status
rhel-6-server-rpms                 Red Hat Enterprise Linux 6 Server (RPMs)           20,029
rhel-6-server-satellite-5.7-rpms   Red Hat Satellite 5.7 (for RHEL 6 Server) (RPMs)   736
- Notice the line starting with Loaded plugins: in the yum output above; rhnplugin should not be listed there.
- RHSM repositories end with rpms, so all of the repo IDs listed in the yum output above should end with rpms.
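A quick scripted version of the same check (a sketch) that flags a lingering rhnplugin:
# yum repolist 2>&1 | grep -q rhnplugin && echo "WARNING: rhnplugin still loaded" || echo "OK: no rhnplugin"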
Many of these details are covered in the solution: How to verify whether Red Hat Satellite v 5.x server is successfully migrated from RHN Hosted to RHSM?
Posted: 2018-04-30T13:00:07+00:00