The Satellite Blog is where the Engineering team publishes tips and tricks for using our products, videos that showcase new features, and pointers to new content on the Customer Portal.

Latest Posts

  • Satellite 6.3.1 is now available

    Authored by: John Spinks

    Red Hat Satellite 6.3.1 includes packages that support Red Hat Enterprise Linux 7.5, as well as a variety of performance enhancements and general bug fixes.

    Especially notable are the improvements to content view performance. In our tests, publishing a single content view on RHEL 7 was reduced in time by 43%, and publishing composite views was reduced by 95%. To put numbers to this, 6.3.0 took 320 seconds to publish a composite view, while 6.3.1 took 14 seconds to publish the same CV.
    Promotion was also significantly improved. Promoting a group of 16 CVs went from 977 seconds in 6.3.0 to 5.78 seconds in 6.3.1 - a 99% reduction in time to promote!
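
    For reference, content view publish and promote operations can also be driven from the hammer CLI, which makes it easy to time them before and after the update; the organization, content view, version, and lifecycle environment names below are illustrative:

    hammer content-view publish --organization "Example Org" --name "RHEL7-Base"
    hammer content-view version promote --organization "Example Org" \
      --content-view "RHEL7-Base" --version "1.0" --to-lifecycle-environment "Test"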

    There is one erratum for the server [1] and one for the hosts [2]. The install ISOs will be updated next week.

    Customers who have already upgraded to Satellite 6.3 should follow the instructions in the errata.
    Customers who are on older versions of Satellite should refer to the Upgrading and Updating Red Hat Satellite Guide.

    You may also want to consider using the Satellite Upgrade Helper if moving from Satellite 6.x to Satellite 6.3.
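
    As a rough sketch, a z-stream update on the Satellite Server itself generally looks like the following; treat this only as an outline, and follow the erratum and the upgrade guide above for the authoritative steps:

    # stop Satellite services, apply the updated packages, then re-run the installer in upgrade mode
    katello-service stop
    yum -y update
    satellite-installer --scenario satellite --upgrade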

    Customers who have received hotfixes should verify the list below to ensure their hotfix is contained in the release before upgrading. Please reach out to Red Hat Support in these cases.

    Fixes included in 6.3.1

    • The performance of Content View publishing and promotion has been improved. (BZ#1522912)

    • The performance of Ansible Tower inventory collection has been improved. (BZ#1437197)

    • Various performance and scale improvements have been made. (BZ#1553881, BZ#1553879, BZ#1532348, BZ#1553871)

    • Several upgrade issues from 6.2.x to 6.3 have been resolved. (BZ#1549502, BZ#1547607, BZ#1553279)

    • The installer now allows users to configure the SSL protocols used by Tomcat. (BZ#1544995)

    • Disconnected installations no longer attempt to install the oauth gem from a remote server. (BZ#1541885)

    • Users can now configure custom products not to use an HTTP Proxy when syncing content, and to only use the proxy for Red Hat Content. (BZ#1132980)

    • Restarting a Pulp worker with running tasks caused the tasks to be left in a broken state. This problem has been fixed; in such cases, tasks are now stopped with the warning "Task cancelled". (BZ#1552118)

    • Puppet modules that were built on macOS machines can now be imported into Satellite. (BZ#1445625)

    • Previously, an API endpoint used by the SCAP Satellite plug-in failed to provide generated guides based on the selected profile. Consequently, Red Hat Satellite was not able to get information from the plug-in, and the Satellite UI was broken. With this update, handling of the API endpoint has been updated, and it is now possible to generate guides using this API endpoint again. (BZ#1480595)

    • Backups were failing with relative paths since 6.2.13. This regression has been fixed. (BZ#1544401)

    • When installing the katello-ca-consumer package, the script restarted Docker. Consequently, running containers were stopped. With this update, the Docker service is reloaded, not restarted. (BZ#1518289)

    Users of Red Hat Satellite are advised to upgrade to these updated packages, which fix these bugs.

    [1] https://access.redhat.com/errata/RHBA-2018:1126
    [2] https://access.redhat.com/errata/RHBA-2018:1127

    Satellite Migration from RHEL 6 to RHEL 7

    As a reminder, Red Hat continues to strongly recommend your Satellite and Capsule Servers only be run on RHEL 7. There are several reasons why you should move your Satellite environment from RHEL 6 to RHEL 7, including enhanced performance and long-term supportability.

    Future releases of Satellite (6.3 and above) will only support RHEL 7 and above. In preparation for newer versions of Satellite, you need to start thinking about how to move from older versions of RHEL to RHEL 7.
    While RHEL 6 does support an in-place migration from RHEL 6 to RHEL 7, this migration mechanism is not supported when Satellite is running on the RHEL host. Instead, you will need to clone your Satellite environment from a host running RHEL 6 to another host running RHEL 7.

    Review the Satellite 6.2.13 release blog for more detailed information about moving your Satellite environment from RHEL 6 to RHEL 7. 6.2.13 includes some important features for Capsule backup and recovery that help ease the move from RHEL 6 to RHEL 7.

    Posted: 2018-04-13T14:43:30+00:00
  • Preparing to Upgrade Satellite? Open a Proactive Support Case.

    Authored by: John Spinks

    Worried about your upcoming Satellite upgrade? Don’t be.

    In addition to our detailed upgrade documentation, our support team has been through hundreds of upgrades, and they’re happy to help if something deviates from your expectations. To optimize your upgrade experience if you choose to engage our support team, please submit what we call a “Proactive Support Case” ahead of your planned upgrade window.

    Why should you do this? It allows an experienced Satellite support professional to be on call and aware of your plans, so they can quickly assist with or triage any issues you encounter, and it minimizes logistics if a problem does occur, since a case is already open.

    On average, most planned upgrades within a major release, for example from Satellite 5.7 to Satellite 5.8 or from Satellite 6.2 to Satellite 6.3, can be completed within a 4-8 hour maintenance window, and many are completed well under that estimate.

    Here’s how you can submit a proactive support ticket:

    1. Open a case providing information about the upcoming upgrade (for example, from Satellite 5.x to Satellite 5.8, or from Satellite 6.2 to Satellite 6.3).

      • Access Red Hat Customer Portal
      • Access Support > Support Cases > Open a New Support Case > Open Case
      • Fill in the forms, describing the planned upgrade
      • Include notation “proactive case”
      • Include Environment information for all impacted components:

        • SOS Report
        • Time/Date of planned upgrade window
        • For Satellite 6 include:
          • How many Satellite and Capsule Servers are in use
          • Are any of the above using the disconnected model? If so, how many?
          • Output of foreman-debug
        • For Satellite 5 include:
          • How many Satellite and Proxy Servers are in use
          • Are any of the above using the disconnected model? If so, how many?
          • Output of spacewalk-debug
          • Output of rhn-satellite status
          • Output of rhn-schema-version
          • Output of database-schema-version
          • If the environment was previously upgraded include the schema-upgrade-log
          • Contents of: /etc/rhn/rhn.conf
      • Include any additional information that would assist troubleshooting (a sketch for collecting the diagnostic outputs above appears after step 4 below)

      • Submit the case
    2. Notify TAM/CSM team of the proactive case
      • Notify TAM/CSM of case #
      • Use support number, if needed, to raise awareness (1-888-467-3342)
    3. Case will be flagged within Red Hat to be monitored for potential actions
    4. Post upgrade: work with TAM/support team to ensure case is closed
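
    A minimal sketch of gathering the diagnostic data listed above on a Satellite 5 host before opening the case; the output directory is illustrative, and on Satellite 6 foreman-debug collects the equivalent data in a single step:

    OUT=/tmp/proactive-case; mkdir -p "$OUT"
    spacewalk-debug                                  # collects logs and configuration into a tarball under /tmp
    rhn-satellite status   > "$OUT/rhn-satellite-status.txt"
    rhn-schema-version     > "$OUT/rhn-schema-version.txt"
    cp /etc/rhn/rhn.conf     "$OUT/"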

    Note: Proactive support cases that need weekend coverage should be opened at least 4 days prior to the weekend of the upgrade. If you plan to upgrade this weekend, your support ticket will need to be in by close of business Tuesday in order to be proactively supported.

    Customers who submit a proactive support ticket are still asked to follow the upgrade documentation very carefully as that is still the single best source for guidance on your Satellite upgrades. However, our support professionals have seen their fair share of “edge cases” and exceptions and they can help close any gap quickly if they see any potential challenges.

    Staying on Satellite 5? Get to Satellite 5.8 ASAP!

    Are you a Satellite customer running Satellite 5.7 or an earlier version?
    Hopefully by now you've heard about RHN shutting down on January 31, 2019. This means that after January 31, 2019, your Satellite Server will no longer be able to download new content.
    To protect yourself and to be sure that you can continue to access content, you need to upgrade your Satellite environment to Satellite 5.8.

    On Satellite 5 and ready to move to Satellite 6 but want more help?

    Red Hat has recently released a consulting offering for Satellite 5 to Satellite 6 transition. Contact your account team for more details.

    Posted: 2018-04-11T16:33:49+00:00
  • Satellite 6.3 is now available

    Authored by: John Spinks

    Red Hat Satellite 6.3 is now available.

    Red Hat is pleased to announce the general availability of Red Hat Satellite 6.3. The latest release increases product stability and usability, and introduces new and enhanced features designed to meet user needs.

    Key features of Red Hat Satellite 6.3 are organized into the content areas below. Most of the new features include links to the feature overview available on the Customer Portal.

    Content Management:

    System Provisioning:

    Configuration Management:

    Supportability:

    Security & User Access:

    Usability:

    Other enhancements

    In addition to the major features above, Satellite 6.3 includes a number of smaller features that do not have detailed feature overviews associated. The list of these other features is available here.

    Additional Details

    For more details on this release, see the Release Notes.

    For instructions on performing a fresh install of Red Hat Satellite 6.3, see the Installation Guide.

    For instructions on upgrading from an earlier version of Red Hat Satellite 6.x, see the Upgrading and Updating Red Hat Satellite Guide.

    Posted: 2018-02-21T16:57:28+00:00
  • Satellite 6.2.14 is now available

    Authored by: John Spinks

    Red Hat Satellite 6.2.14 includes performance and stability fixes, as well as upgrade enhancements to make it easier to upgrade Satellite 6.2 to the upcoming Satellite 6.3 release.
    There is one erratum for the server [1] and one for the hosts [2]. The install ISOs will be updated later this week.

    Customers who have already upgraded to 6.2 should follow the instructions in the errata. Customers who are on 6.1.x should follow the upgrade instructions in the Satellite 6.2 Installation Guide. Customers who have received hotfixes should verify the list below to ensure their hotfix is contained in the release before upgrading. Please reach out to Red Hat Support in these cases.

    Fixes included in 6.2.14

    Security Fixes:

    • It was discovered that python-twisted-web used the value of the Proxy header from HTTP requests to initialize the HTTP_PROXY environment variable for CGI scripts, which in turn was incorrectly used by certain HTTP client implementations to configure the proxy for outgoing HTTP requests. A remote attacker could possibly use this flaw to redirect HTTP requests performed by a CGI script to an attacker-controlled proxy via a malicious HTTP request. (CVE-2016-1000111)

    Bugs:

    • Upgrades from Satellite 6.2 to Satellite 6.3 were failing due to the use of certificates with custom authorities. These upgrade paths now work. (BZ#1523880, BZ#1527963)
    • Additional tooling is provided to support data validation when upgrading from Satellite 6.2 to Satellite 6.3. (BZ#1519904)
    • Several memory usage bugs in goferd and qpid have been resolved. (BZ#1319165, BZ#1318015, BZ#1492355, BZ#1491160, BZ#1440235)
    • The performance of Puppet reporting and errata applicability has been improved. (BZ#1465146, BZ#1482204)
    • Upgrading from 6.2.10 to 6.2.11 without correctly stopping services could cause the upgrade to fail when removing qpid data. This case is now handled properly. (BZ#1482539)
    • The cipher suites for the Puppet server can now be configured by the installation process. (BZ#1491363)
    • The default cipher suite for the Apache server is now more secure by default. (BZ#1467434)
    • The Pulp server contained in Satellite has been enhanced to better handle concurrent processing of errata applicability for a single host and syncing Puppet repositories. (BZ#1515195, BZ#1421594)
    • VDC subscriptions create guest pools which are for a single host only. Administrators were attaching these pools to activation keys, which was incorrect. The ability to do this has been disabled. (BZ#1369189)
    • Satellite was not susceptible to RHSA-2016:1978 but security scanners would incorrectly flag this as an issue. The package from this errata is now delivered in the Satellite channel to avoid these false positives. (BZ#1497337)
    • OpenSCAP report parsing resulted in a memory leak. This leak has been fixed. (BZ#1454743)
    • The validation on the length of names for docker containers and repositories was too restrictive. Names can now be longer. (BZ#1424689)
    • goferd continued to leak memory when qdrouterd was not accessible, despite an earlier fix (BZ#1260963). This leak has now been resolved. (BZ#1318015)

    Users of Red Hat Satellite are advised to upgrade to these updated packages, which fix these bugs.

    [1] https://access.redhat.com/errata/RHSA-2018:0273

    [2] https://access.redhat.com/errata/RHBA-2018:0272

    Satellite Migration from RHEL 6 to RHEL 7

    As a reminder, Red Hat continues to strongly recommend your Satellite and Capsule Servers only be run on RHEL 7. There are several reasons why you should move your Satellite environment from RHEL 6 to RHEL 7, including enhanced performance and long-term supportability.

    Future releases of Satellite (6.3 and above) will only support RHEL 7 and above. In preparation for newer versions of Satellite, you need to start thinking about how to move from older versions of RHEL to RHEL 7.
    While RHEL 6 does support an in-place migration from RHEL 6 to RHEL 7, this migration mechanism is not supported when Satellite is running on the RHEL host. Instead, you will need to clone your Satellite environment from a host running RHEL 6 to another host running RHEL 7.
    Review the Satellite 6.2.13 release blog for more detailed information about moving your Satellite environment from RHEL 6 to RHEL 7. 6.2.13 includes some important features for Capsule backup and recovery that help ease the move from RHEL 6 to RHEL 7.

    Posted: 2018-02-05T15:43:11+00:00
  • Satellite 6.2.13 is now available

    Authored by: John Spinks

    Satellite 6.2.13 is now available.

    Red Hat Satellite 6.2.13 includes backup and restore capabilities for Capsule Servers, as well as other enhancements to make it easier to move the underlying Satellite operating system from a Red Hat® Enterprise Linux® 6 (RHEL 6) to a RHEL 7 environment. There are also enhancements to optimize package profile tasks, improvements to the Pulp workers service, and documentation improvements.

    One of the most critical improvements is Backup and Restore of Capsule Servers using the katello-backup and katello-restore scripts. These steps will be documented in the refreshed Satellite 6.2 Server Administration Guide. This will help ease the transition of your Satellite or Capsule Server from RHEL 6 to RHEL 7.

    Customers who have already upgraded to 6.2 should follow the instructions in the errata. Customers who are on 6.1.x should follow the upgrade instructions in the Satellite 6.2 Installation Guide. Customers who have received hotfixes should verify the list below to ensure their hotfix is contained in the release before upgrading. Please reach out to Red Hat Support in these cases.

    Satellite Migration from RHEL 6 to RHEL 7

    Red Hat continues to strongly recommend your Satellite and Capsule Servers only be run on RHEL 7. There are several reasons why you should move your Satellite environment from RHEL 6 to RHEL 7, including enhanced performance and long-term supportability.

    RHEL 7 has an improved kernel, newer versions of Ruby, and defaults to the XFS file system, which provides better overall support for Satellite and Capsule Server operations. The combination of these updates in RHEL 7 results in significant speed increases for content-focused operations.

    Future releases of Satellite will only support RHEL 7 and above. In preparation for newer versions of Satellite, you need to start thinking about how to move from older versions of RHEL to RHEL 7.
    While RHEL 6 does support an in-place migration from RHEL 6 to RHEL 7, this migration mechanism is not supported when Satellite is running on the RHEL host. Instead, you will need to clone your Satellite environment from a host running RHEL 6 to another host running RHEL 7.
    At a high level, the clone process requires backing up the Satellite Server or Capsule using the katello-backup script, copying the backup files to the new host running RHEL 7, and then restoring with the katello-restore script.
    For long-term supportability, it is important to note that RHEL 6 is in production phase 3 of its life cycle, meaning the operating system improvements that help make Satellite run so well on RHEL 7 are not coming to RHEL 6.
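
    A minimal sketch of that clone flow; the backup directory and hostname below are illustrative, and the refreshed Server Administration Guide remains the authoritative procedure:

    # on the existing RHEL 6 Satellite or Capsule Server
    katello-backup /var/backup

    # copy the backup to the new RHEL 7 host
    rsync -av /var/backup/ root@new-satellite.example.com:/var/backup/

    # on the new RHEL 7 host, after installing the same Satellite version
    katello-restore /var/backup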

    To help make migration from RHEL 6 to RHEL 7 an easier process, the Satellite 6.2.13 release adds the capability to use the katello-backup and katello-restore scripts on your Capsule Servers. This enables you to restore your Satellite Server and Capsules from RHEL 6 hosts onto RHEL 7 hosts. Detailed steps on how to upgrade your Satellite or Capsule Server to RHEL 7 are available in the refreshed Satellite 6.2 Administration Guide.

    Posted: 2017-12-19T19:31:20+00:00
  • Red Hat Satellite 6.3 Beta now available

    Authored by: John Spinks

    Red Hat Satellite 6.3 Beta now available

    December 7, 2017

    We are pleased to announce that Red Hat Satellite 6.3 is now available in beta to current Satellite customers.

    Red Hat Satellite is an infrastructure management platform, designed to manage system patching, provisioning, configurations and Red Hat subscriptions across the entirety of a Red Hat environment. Satellite offers a lifecycle management solution to help keep customers’ Red Hat infrastructure running efficiently and with greater security, which can reduce costs and overall complexity.

    The Satellite 6.3 beta reflects a continued focus on increasing product stability and usability, along with new and enhanced features designed to meet user needs in the following areas:

    Enhanced system provisioning, configuration and content management:

    • Red Hat Ansible Tower integration best practices
    • Improved ability to manage provisioning templates and pull from GIT
    • New custom file-type repositories
    • Improved content download policies and synchronization

    Improved product security and usability:

    • Newly defined and formalized ‘Org Admin’ role
    • New OpenSCAP tailoring files
    • Notification Drawer feature for Satellite event alerts and messaging
    • Future-dated subscription policies
    • Cloning tool for cloning an existing Satellite server to a new host
    • Renaming tool for changing the Satellite hostname while changing configurations

    Updated support for a variety of user needs:

    • Increased support for both Puppet 3.8 and 4
    • Support for Satellite and Capsule servers running on Amazon Web Services (AWS) EC2
    • Support for UEFI provisioning

    How to access the Red Hat Satellite 6.3 Beta

    Customers with active Red Hat Satellite subscriptions can test out the new features in Satellite 6.3 beta now at https://access.redhat.com/products/red-hat-satellite/beta.

    Posted: 2017-12-07T14:20:58+00:00
  • Satellite 6.3 Beta Repositories

    Authored by: John Spinks

    In preparation for an upcoming public beta release of Red Hat Satellite 6.3, current Satellite customers may notice Satellite 6.3 beta ISOs and packages available in their repositories.

    Documentation, a Beta Navigation Guide, and customer support will be made available for the 6.3 beta at the time of public beta launch. The supported public launch of the Satellite 6.3 beta is currently scheduled for early December 2017. The announcement of the supported public beta will be made in the Red Hat Customer Portal and an email notification will be sent to all Red Hat Satellite customers at the time of the supported public beta launch.

    Support will not be provided for the use of the Red Hat Satellite 6.3 beta software until the supported public beta is announced and open to all current Red Hat Satellite customers.

    This information is also posted on the Red Hat Knowledge Base: https://access.redhat.com/articles/3248171

    Posted: 2017-11-21T15:09:40+00:00
  • Satellite 6 and iPXE

    Authored by: Lukas Zapletal

    TFTP is a slow and unreliable protocol on high-latency networks, but if your hardware is supported by iPXE (http://ipxe.org/appnote/hardware_drivers), or if the UNDI driver of the NIC is compatible with iPXE, it is possible to configure PXELinux to chainboot iPXE and continue booting over HTTP, which is fast and reliable.

    There are three scenarios described in this article. In the first two, PXELinux is loaded via TFTP and chainloads iPXE, either directly or via UNDI, which then transfers the kernel and init RAM disk (the largest artifacts) over the more robust HTTP protocol:

    • hardware is turned on
    • PXE driver gets network credentials from DHCP
    • PXE driver gets PXELinux firmware from TFTP (pxelinux.0)
    • PXELinux searches for configuration file on TFTP
    • PXELinux chainloads iPXE (undionly-ipxe.0 or ipxe.lkrn)
    • iPXE gets network credentials from DHCP again
    • iPXE gets HTTP address from DHCP
    • iPXE chainloads the iPXE template from Satellite
    • iPXE loads kernel and init RAM disk of the installer

    Requirements:

    • a host entry is created in Satellite
    • MAC address of the provisioning interface matches
    • provisioning interface of the host has a valid DHCP reservation
    • the host has the special PXELinux template (below) associated
    • the host has iPXE template associated
    • hardware is capable of PXE booting
    • hardware NIC is compatible with iPXE

    The iPXE project offers two options: using the PXE interface (UNDI) or using a built-in Linux network card driver. Both options have pros and cons, and each gives different results with different hardware cards. Some NIC adapters can be slow with UNDI, some are actually faster. Not all network cards will work with one or both approaches.

    Both scenarios are meant for bare-metal hosts. There are known issues with chainloading iPXE from libvirt, because it already uses iPXE as its PXE ROM by default. To avoid PXE in virtual environments, use scenario B below.

    The third approach (B) avoids PXE (TFTP) completely by using iPXE as the primary firmware for virtual machines. It is only possible in virtual environments.

    A1. Chainbooting iPXE directly

    In this setup, iPXE uses its built-in driver for network communication. Therefore, this will only work on supported cards (see above).

    Capsule setup

    The following steps need to be done on all TFTP Capsules. Make sure the ipxe-bootimgs RPM package is installed. Copy the iPXE firmware to the TFTP root directory:

    cp /usr/share/ipxe/ipxe.lkrn /var/lib/tftpboot/
    

    Do not use symbolic links as TFTP runs in chroot. When using SELinux, remember to correct file contexts:

    restorecon -RvF /var/lib/tftpboot/
    
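    If the ipxe-bootimgs package is not yet present on the Capsule, it can be installed first, assuming the Capsule has access to a repository that provides it:

    yum -y install ipxe-bootimgs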

    Satellite setup

    In your Satellite instance, go to "Provisioning templates" and create a new template of the PXELinux kind with the following contents:

    DEFAULT linux
    LABEL linux
    KERNEL ipxe.lkrn
    APPEND dhcp && chain <%= foreman_url('iPXE') %>
    IPAPPEND 2
    

    Associate the iPXE template that ships with Satellite, named 'Kickstart default iPXE', which contains something like the following (no changes required):

    #!ipxe
    kernel <%= "#{@host.url_for_boot(:kernel)}" %> ks=<%= foreman_url("provision")%>
    initrd <%= "#{@host.url_for_boot(:initrd)}" %>
    boot
    

    If there was a host already associated with PXELinux templates, you may need to cancel and re-enter Build state via the Cancel and Build buttons for the TFTP configuration to be redeployed. Satellite 6.2+ does this automatically on template save.

    A2. Chainbooting iPXE via UNDI

    In this setup, iPXE uses UNDI for network communication. The hardware must support that.

    Capsule setup

    The following steps need to be done on all TFTP Capsules. Make sure the ipxe-bootimgs RPM package is installed. Copy the iPXE firmware to the TFTP root directory and rename it:

    cp /usr/share/ipxe/undionly.kpxe /var/lib/tftpboot/undionly-ipxe.0
    

    Do not use symbolic links as TFTP runs in chroot. When using SELinux, remember to correct file contexts:

    restorecon -RvF /var/lib/tftpboot/
    

    Capsule setup (gPXELinux alternative)

    This is an alternative approach if none of the above configurations work (gPXE is an older generation of the iPXE project). Make sure the syslinux RPM package is installed and copy the gPXE firmware to the TFTP root directory:

    cp /usr/share/syslinux/gpxelinuxk.0 /var/lib/tftpboot/
    

    Do not use symbolic links as TFTP runs in chroot. When using SELinux, remember to correct file contexts:

    restorecon -RvF /var/lib/tftpboot/
    

    Satellite setup

    In your Satellite instance, go to "Provisioning templates" and create a new template of the PXELinux kind with the following contents:

    DEFAULT undionly-ipxe
    LABEL undionly-ipxe
    MENU LABEL iPXE UNDI
    KERNEL undionly-ipxe.0
    IPAPPEND 2
    

    When using the gPXELinux alternative, use the following template contents:

    DEFAULT gpxelinux
    LABEL gpxelinux
    MENU LABEL gPXELinux
    KERNEL gpxelinuxk.0
    IPAPPEND 2
    

    Associate the iPXE template that ships with Satellite, named 'Kickstart default iPXE', which contains something like the following (no changes required):

    #!ipxe
    kernel <%= "#{@host.url_for_boot(:kernel)}" %> ks=<%= foreman_url("provision")%>
    initrd <%= "#{@host.url_for_boot(:initrd)}" %>
    boot
    

    When using gPXELinux, replace the ipxe "shebang" with gpxe.

    If there was a host already associated with PXELinux templates, you may need to exit and re-enter Build state for the TFTP configuration to be redeployed. Satellite 6.2+ does this automatically on template save.

    DHCP setup

    The above configuration will lead to an endless loop of chainbooting the iPXE/gPXE firmware. To break this loop, configure the DHCP server to hand over the correct URL to continue booting. In the /etc/dhcp/dhcpd.conf file, change the "filename" global or subnet configuration as follows:

    if exists user-class and option user-class = "iPXE" {
      filename "https://satellite:443/unattended/iPXE";
    } else {
      filename "pxelinux.0";
    }
    

    This file is under foreman-installer (Puppet) control, so make sure to re-apply the change every time the installer is executed (e.g. during an upgrade).

    On isolated networks, use the Capsule URL (port 9090) instead of the Satellite URL when the templates feature is enabled. When using gPXE, use the "gPXE" user-class instead, or provide an additional "elsif" block to cover both cases.
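
    A sketch of such a combined block covering both user-classes; the Satellite URL is the same as above, and both branches chain to the same Satellite-rendered template:

    if exists user-class and option user-class = "iPXE" {
      filename "https://satellite:443/unattended/iPXE";
    } elsif exists user-class and option user-class = "gPXE" {
      filename "https://satellite:443/unattended/iPXE";
    } else {
      filename "pxelinux.0";
    }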

    B. Chainbooting virtual machines

    Since most virtualization hypervisors use iPXE as the primary firmware for PXE booting, the above configuration will work directly, without TFTP and PXELinux involved. This is known to work with libvirt, oVirt, and RHEV. If the hypervisor is capable of replacing the PXE firmware, it will work too (e.g. VMware is documented at http://ipxe.org/howto/vmware). The workflow is simplified in this case:

    • VM is turned on
    • iPXE gets network credentials from DHCP
    • iPXE gets HTTP address from DHCP
    • iPXE chainloads the iPXE template from Satellite
    • iPXE loads kernel and init RAM disk of the installer

    To configure this, make sure your hypervisor is using iPXE, then configure the iPXE template for your host(s) and the DHCP server to return a valid URL:

    Associate the iPXE template that ships with Satellite, named 'Kickstart default iPXE'. The contents are the same as in the workflows above. If there was a host already associated with PXELinux templates, you may need to exit and re-enter Build state for the TFTP configuration to be redeployed (Satellite 6.2+ does this automatically).

    Similarly to the above configuration, this will lead to an endless loop of chainbooting the iPXE firmware. To break this loop, configure the DHCP server to hand over the correct URL for iPXE to continue booting. In the /etc/dhcp/dhcpd.conf file, change the "filename" global or subnet configuration as follows:

    if exists user-class and option user-class = "iPXE" {
      filename "https://satellite:443/unattended/iPXE";
    } else {
      filename "pxelinux.0";
    }
    

    On isolated networks, use the Capsule URL instead of the Satellite URL when the templates feature is enabled. When using gPXE, use the "gPXE" user-class instead.

    Posted: 2017-10-06T07:00:00+00:00
  • Satellite 6.2.12 is released

    Authored by: Rich Jerrido

    Satellite 6.2.12 has been released today. 6.2.12 introduces a new tool for renaming the Satellite Server, and several other new features and fixes. There is one erratum for the server [1] and one for the hosts [2]. The install ISOs will be updated later this week.

    Customers who have already upgraded to 6.2 should follow the instructions in the errata. Customers who are on 6.1.x should follow the upgrade instructions at [3].

    PLEASE NOTE: Customers who have received hotfixes should verify the list below to ensure their hotfix is contained in the release before upgrading. Their hotfixes may be overwritten if they upgrade. Please reach out to the Satellite team if you are unsure.

    The new features included in 6.2.12 are:

    • A new tool, satellite-host-rename, is now provided, which allows customers to rename a Satellite Server. Full documentation can be found in the Server Administration Guide under the "Renaming a Server" section. (BZ#924383)

    • The hosts API now accepts a parameter, "thin", to return only host names. This provides a more performant call for iterating over a list of hosts; see the example request after this list. (BZ#1461194)

    • Temporary subscriptions for guests are now valid for 7 days instead of 24 hours. This change should provide more time to address issues with virt-who without impacting the availability of Red Hat content. (BZ#1418482)
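
    A sketch of such a request against the hosts API; the hostname and credentials are illustrative, and -k (which skips certificate verification) should be dropped once the Satellite CA certificate is trusted:

    curl -k -u admin:changeme \
      "https://satellite.example.com/api/v2/hosts?thin=true&per_page=1000"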

    The bugs which have been fixed in 6.2.12 include:

    • 6.2.11 introduced a new dependency on katello-agent which was not represented in the RPM package. The dependencies now resolve correctly. (BZ#1482189)
    • Several improvements have been made to synchronization of content to Capsules. (BZ#1370302, BZ#1432649, BZ#1448777)
    • Specific customers encountered an error where synchronization plans stopped updating their status in the web UI. This communication path has been hardened. (BZ#1466919)
    • Remote execution will no longer execute the same job multiple times if the host has many NICs or if the host belongs to many host groups. (BZ#1480235, BZ#1465628)
    • Scheduling a recurring job would create duplicate subtasks. This behavior has been fixed, and there are no duplicates. (BZ#1478886)
    • Large SCAP reports would not upload because of a low HTTP timeout. The timeout has been increased to allow the reports to upload. (BZ#1360452)
    • The performance of the package page has been improved for customers who have large package downloads. (BZ#1419139)
    • Satellite 6 was not able to connect to Red Hat Virtualization 4.1 instances. This platform can now be used as a Compute Resource. (BZ#1463264, BZ#1479954)
    • Package filters were not respecting the package release version correctly. This field is now handled correctly. (BZ#1395642)
    • Satellite 6 now supports uploading RPMs whose headers are not UTF-8 compliant. (BZ#1333110)
    • Regenerating certificates now regenerates Ueber Certificates as well. (BZ#1434948)
    • The handling of Lifecycle permissions has been improved. (BZ#1317837)
    • Manual registration was not recording who performed the registration if no Activation Key was used. (BZ#1380117)

    [1] https://access.redhat.com/errata/RHBA-2017:2803
    [2] https://access.redhat.com/errata/RHBA-2017:2804
    [3] https://access.redhat.com/documentation/en/red-hat-satellite/6.2/paged/installation-guide/chapter-6-upgrading-satellite-server-and-capsule-server

    Posted: 2017-09-26T09:06:52+00:00
  • Red Hat Satellite and Red Hat Virtualization Cloud-init integration

    Authored by: Ivan Necas

    As part of the upcoming 6.2.12 release, we are adding additional support for cloud-init provisioning using the Red Hat Enterprise Virtualization (RHEV/RHV) provider.

    The cloud-init tool allows configuring the provisioned virtual machine via a configuration that is passed to the VM through the virtualization platform (RHV in this case).

    The advantage of this approach is that it does not require any special configuration on the network (such as managed DHCP and TFTP) to finish the installation of the virtual machine, nor does it require Satellite to actively connect to the provisioned machine via SSH to run the finish script.

    It is also faster than network-based provisioning, as we are using an image with a pre-installed operating system.

    VM Template Preparation

    There are two ways to get a cloud-init-ready image into our RHV instance.

    The first is to take the KVM base image from our portal and import it into RHV. The image should already have cloud-init installed.

    The second option is building the template from scratch. For that, we will use a standard server installation of RHEL 7 as a base.

    We install cloud-init from the rhel-7-server-rpms repository:

    yum install -y cloud-init
    

    Next, we will do some basic cloud-init configuration. By default, cloud-init tries to load its configuration from external sources. While this is used with some cloud providers, in the case of RHV the data is passed to the VM via a mounted drive, and the additional data sources only introduce unnecessary delays. Therefore, we will set the following in /etc/cloud/cloud.cfg:

    datasource_list: ["NoCloud", "ConfigDrive"]
    

    This should be just about all the configuration we need. As the final step, I recommend cleaning up the VM before using it as a template or clone, to make sure the newly created VMs start from scratch.
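
    A minimal sketch of such a cleanup, run inside the VM just before shutting it down and creating the template; the exact steps depend on your image, and these are common examples rather than an authoritative list:

    # remove host-specific data so clones regenerate it on first boot
    subscription-manager unregister || true   # only if the build VM was registered
    rm -f /etc/ssh/ssh_host_*                 # SSH host keys
    truncate -s 0 /etc/machine-id
    rm -rf /var/lib/cloud/*                   # let cloud-init run again on clones
    poweroff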

    Preparing Satellite for the cloud-init provisioning

    We assume that the Satellite was already configured to provision hosts via RHV, either using network based provisioning or finish scripts via SSH.

    First of all, we need to add the newly created image to Satellite.

    In Satellite, go to Infrastructure -> Compute Resources -> Your RHV resource, and in the Images tab click the New Image button. Fill in the necessary details. Make sure to check the "User Data" checkbox: this way Satellite will know to use the cloud-init template when provisioning the VM.

    Next, we will create a provisioning template that will generate the cloud-init configuration. Currently, the list of configuration options for cloud-init is limited. However, even with this subset of commands, it is possible to do just enough to finish the configuration. We will go to Hosts -> Provisioning Templates and create a new template:

    <%#
    kind: user_data
    name: My Satellite RHV Cloud-init
    -%>
    #cloud-config
    hostname: <%= @host.shortname %>
    
    <%# Allow user to specify additional SSH key as host parameter -%>
    <% if @host.params['sshkey'].present? || @host.params['remote_execution_ssh_keys'].present? -%>
    ssh_authorized_keys:
    <% if @host.params['sshkey'].present? -%>
      - <%= @host.params['sshkey'] %>
    <% end -%>
    <% if @host.params['remote_execution_ssh_keys'].present? -%>
    <% @host.params['remote_execution_ssh_keys'].each do |key| -%>
      - <%= key %>
    <% end -%>
    <% end -%>
    <% end -%>
    runcmd:
      - |
        #!/bin/bash
    <%= indent 4 do
        snippet 'subscription_manager_registration'
    end %>
    <% if @host.info['parameters']['realm'] && @host.realm && @host.realm.realm_type == 'Red Hat Identity Management' -%>
      <%= indent 4 do
        snippet 'idm_register'
      end %>
    <% end -%>
    <% unless @host.operatingsystem.atomic? -%>
        # update all the base packages from the updates repository
        yum -t -y -e 0 update
    <% end -%>
    <%
        # safemode renderer does not support unary negation
        non_atomic = @host.operatingsystem.atomic? ? false : true
        pm_set = @host.puppetmaster.empty? ? false : true
        puppet_enabled = non_atomic && (pm_set || @host.params['force-puppet'])
    %>
    <% if puppet_enabled %>
        yum install -y puppet
        cat > /etc/puppet/puppet.conf << EOF
      <%= indent 4 do
        snippet 'puppet.conf'
      end %>
        EOF
        # Setup puppet to run on system reboot
        /sbin/chkconfig --level 345 puppet on
    
        /usr/bin/puppet agent --config /etc/puppet/puppet.conf --onetime --tags no_such_tag <%= @host.puppetmaster.blank? ? '' : "--server #{@host.puppetmaster}" %> --no-daemonize
        /sbin/service puppet start
    <% end -%>
    phone_home:
     url: <%= foreman_url('built') %>
     post: []
     tries: 10
    

    This is an equivalent of the "Satellite Kickstart Default" template, written using cloud-init modules. In the Type tab, we will select User data template, and in the Association tab, we will add this template for the operating systems we want to assign it to. Don't forget to also go to the details of the operating system (Hosts -> Operating Systems -> Your Selected OS) and select the newly created template as the User data template.

    This should be enough configuration to start provisioning hosts using cloud-init.
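
    For illustration, creating such an image-based host from the CLI might look roughly like the following; the host group, compute resource, image, organization, and location names are assumptions, so check hammer host create --help for the options available in your version:

    hammer host create --name cloudinit-demo01 \
      --hostgroup "RHEL7-Base" \
      --compute-resource "RHV" \
      --image "rhel7-cloud-init" \
      --provision-method image \
      --organization "Example Org" --location "Default Location"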

    Next steps

    As mentioned, there is still some work to be done to make a wider range of cloud-init modules available, and we are working in the upstream projects to make that happen. However, I hope that even this small addition will make your life and provisioning easier.

    Posted: 2017-08-25T16:00:30+00:00