The Satellite Blog is a place for the Engineering team to publish tips and tricks on how to use our products, share videos that showcase new features, and point users at new content on the Customer Portal.

Latest Posts

  • Satellite 6 and iPXE

    Authored by: Lukas Zapletal

    TFTP is a slow and unreliable protocol on high-latency networks, but if your hardware is supported by iPXE (http://ipxe.org/appnote/hardware_drivers), or if the UNDI driver of the NIC is compatible with iPXE, it is possible to configure PXELinux to chainboot iPXE and continue booting over the HTTP protocol, which is fast and reliable.

    There are three scenarios described in this article. In the first two, PXELinux is loaded via TFTP and chainloads iPXE, either directly or via UNDI, which then carries over the kernel and init RAM disk (the largest artifacts to transfer) via the more robust HTTP protocol:

    • hardware is turned on
    • PXE driver gets network credentials from DHCP
    • PXE driver gets PXELinux firmware from TFTP (pxelinux.0)
    • PXELinux searches for configuration file on TFTP
    • PXELinux chainloads iPXE (undionly-ipxe.0 or ipxe.lkrn)
    • iPXE gets network credentials from DHCP again
    • iPXE gets HTTP address from DHCP
    • iPXE chainloads the iPXE template from Satellite
    • iPXE loads kernel and init RAM disk of the installer

    Requirements:

    • a host entry is created in Satellite
    • MAC address of the provisioning interface matches
    • provisioning interface of the host has a valid DHCP reservation
    • the host has special PXELinux template (below) associated
    • the host has iPXE template associated
    • hardware is capable of PXE booting
    • hardware NIC is compatible with iPXE

    The iPXE project offers two options: using the PXE interface (UNDI) or using a built-in Linux network card driver. Both options have pros and cons, and each gives different results with different hardware cards. Some NIC adapters can be slow with UNDI, some are actually faster, and not all network cards will work with both (or either) approach.

    Both scenarios are meant for bare-metal hosts. There are known issues with chainloading iPXE from libvirt, because it already uses iPXE as its PXE ROM by default. To avoid PXE in virtual environments, use scenario B below.

    The third approach (B) avoids PXE (TFTP) completely by using iPXE as the primary firmware for virtual machines. It is only possible in virtual environments.

    A1. Chainbooting iPXE directly

    In this setup, iPXE uses its built-in driver for network communication. Therefore this will only work on supported cards (see above).

    Capsule setup

    The following steps need to be done on all TFTP Capsules. Make sure the ipxe-bootimgs RPM package is installed, then copy the iPXE firmware to the TFTP root directory:

    cp /usr/share/ipxe/ipxe.lkrn /var/lib/tftpboot/
    

    Do not use symbolic links, as TFTP runs in a chroot. When using SELinux, remember to correct the file contexts:

    restorecon -RvF /var/lib/tftpboot/
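
    To verify the firmware is actually being served, it can be fetched over TFTP from another machine on the provisioning network. A minimal check, assuming the tftp client (tftp package) is installed and capsule.example.com is a placeholder for your Capsule:

    # download the iPXE firmware from the Capsule to confirm TFTP serves it
    tftp capsule.example.com -c get ipxe.lkrn
    ls -l ipxe.lkrn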
    

    Satellite setup

    In your Satellite instance, go to "Provisioning templates" and create a new template of the PXELinux kind with the following contents:

    DEFAULT linux
    LABEL linux
    KERNEL ipxe.lkrn
    APPEND dhcp && chain <%= foreman_url('iPXE') %>
    IPAPPEND 2
    

    Associate the iPXE template that ships with Satellite, named 'Kickstart default iPXE', which contains something like the following (no changes required):

    #!ipxe
    kernel <%= "#{@host.url_for_boot(:kernel)}" %> ks=<%= foreman_url("provision")%>
    initrd <%= "#{@host.url_for_boot(:initrd)}" %>
    boot
    

    If there was a host already associated with the PXELinux template, you may need to cancel and re-enter the Build state (via the Cancel Build and Build buttons) for the TFTP configuration to be redeployed. Satellite 6.2+ does this automatically when the template is saved.

    A2. Chainbooting iPXE via UNDI

    In this setup, iPXE uses UNDI for network communication. The hardware must support that.

    Capsule setup

    The following steps need to be done on all TFTP Capsules. Make sure the ipxe-bootimgs RPM package is installed, then copy the iPXE firmware to the TFTP root directory and rename it:

    cp /usr/share/ipxe/undionly.kpxe /var/lib/tftpboot/undionly-ipxe.0
    

    Do not use symbolic links, as TFTP runs in a chroot. When using SELinux, remember to correct the file contexts:

    restorecon -RvF /var/lib/tftpboot/
    

    Capsule setup (gPXELinux alternative)

    This is an alternative approach if none of the above configurations work (gPXE is an older generation of the iPXE project). Make sure the syslinux RPM package is installed and copy the gPXE firmware to the TFTP root directory:

    cp /usr/share/syslinux/gpxelinuxk.0 /var/lib/tftpboot/
    

    Do not use symbolic links, as TFTP runs in a chroot. When using SELinux, remember to correct the file contexts:

    restorecon -RvF /var/lib/tftpboot/
    

    Satellite setup

    In your Satellite instance, go to "Provisioning templates" and create a new template of the PXELinux kind with the following contents:

    DEFAULT undionly-ipxe
    LABEL undionly-ipxe
    MENU LABEL iPXE UNDI
    KERNEL undionly-ipxe.0
    IPAPPEND 2
    

    When using the gPXELinux alternative, use the following template contents instead:

    DEFAULT gpxelinux
    LABEL gpxelinux
    MENU LABEL gPXELinux
    KERNEL gpxelinuxk.0
    IPAPPEND 2
    

    Associate the iPXE template that ships with Satellite, named 'Kickstart default iPXE', which contains something like the following (no changes required):

    #!ipxe
    kernel <%= "#{@host.url_for_boot(:kernel)}" %> ks=<%= foreman_url("provision")%>
    initrd <%= "#{@host.url_for_boot(:initrd)}" %>
    boot
    

    When using gPXELinux, replace ipxe "shebang" with gpxe.

    If there was a host already associated with the PXELinux template, you may need to exit and re-enter the Build state for the TFTP configuration to be redeployed. Satellite 6.2+ does this automatically when the template is saved.

    DHCP setup

    The above configuration will lead to an endless loop of chainbooting the iPXE/gPXE firmware. To break this loop, configure the DHCP server to hand over the correct URL to continue booting from. In the /etc/dhcp/dhcpd.conf file, change the "filename" global or subnet configuration as follows:

    if exists user-class and option user-class = "iPXE" {
      filename "https://satellite:443/unattended/iPXE";
    } else {
      filename "pxelinux.0";
    }
    

    This file is under foreman-installer (Puppet) control, so make sure to reapply the change every time the installer is executed (e.g. during an upgrade).

    On isolated networks, use the Capsule URL (port 9090) instead of the Satellite URL when the templates feature is enabled. When using gPXE, use the "gPXE" user-class instead, or provide an additional "elsif" block to cover both cases.
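
    For example, a combined snippet covering both user-classes could look like the following (a sketch only; the hostname is a placeholder and the endpoint should match the template kind associated with your hosts):

    if exists user-class and option user-class = "iPXE" {
      filename "https://satellite:443/unattended/iPXE";
    } elsif exists user-class and option user-class = "gPXE" {
      filename "https://satellite:443/unattended/iPXE";
    } else {
      filename "pxelinux.0";
    }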

    B. Chainbooting virtual machines

    Since most virtualization hypervisors use iPXE as the primary firmware for PXE booting, the above configuration will work directly, without TFTP and PXELinux involved. This is known to work with libvirt, oVirt and RHEV. If the hypervisor is capable of replacing its PXE firmware, it will work too (e.g. VMware is documented at http://ipxe.org/howto/vmware). The workflow is simplified in this case:

    • VM is turned on
    • iPXE gets network credentials from DHCP
    • iPXE gets HTTP address from DHCP
    • iPXE chainloads the iPXE template from Satellite
    • iPXE loads kernel and init RAM disk of the installer

    To configure this, make sure your hypervisor is using iPXE, configure the iPXE template for your host(s), and configure the DHCP server to return a valid URL:

    Associate the iPXE template that ships with Satellite, named 'Kickstart default iPXE'. The contents are the same as in the workflows above. If there was a host already associated with the PXELinux template, you may need to exit and re-enter the Build state for the TFTP configuration to be redeployed (Satellite 6.2+ does this automatically).

    Similarly to the configuration above, this will lead to an endless loop of chainbooting the iPXE firmware. To break this loop, configure the DHCP server to hand over the correct URL for iPXE to continue booting. In the /etc/dhcp/dhcpd.conf file, change the "filename" global or subnet configuration as follows:

    if exists user-class and option user-class = "iPXE" {
      filename "https://satellite:443/unattended/iPXE";
    } else {
      filename "pxelinux.0";
    }
    

    On isolated networks, use the Capsule URL instead of the Satellite URL when the templates feature is enabled. When using gPXE, use the "gPXE" user-class instead.

    Posted: 2017-10-06T07:00:00+00:00
  • Satellite 6.2.12 is released

    Authored by: Rich Jerrido

    Satellite 6.2.12 has been released today. 6.2.12 introduces a new tool for renaming the satellite, and several other new features and fixes. There is one erratum for the server [1] and one for the hosts [2]. The install ISOs will be updated later this week.

    Customers who have already upgraded to 6.2 should follow the instructions in the errata. Customers who are on 6.1.x should follow the upgrade instructions at [3].

    PLEASE NOTE: Customers who have received hotfixes should verify the list below to ensure their hotfix is contained in the release before upgrading. Their hotfixes may be overwritten if they upgrade. Please reach out to the Satellite team if you are unsure.

    The new features included in 6.2.12 are:

    • A new tool, satellite-host-rename, is now provided, which allows customers to rename a Satellite Server. Full documentation can be found in the Server Administration Guide under the "Renaming a Server" section. (BZ#924383)

    • The hosts API now accepts a parameter, "thin", to return only host names. This provides a more performant call for iterating over a list of hosts (see the example after this list). (BZ#1461194)

    • Temporary subscriptions for guests are now valid for 7 days instead of 24 hours. This change should provide more time to address issues with virt-who without impacting the availability of Red Hat content. (BZ#1418482)
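
    A quick way to try the new "thin" parameter is a plain API call (a sketch; the hostname and credentials are placeholders):

    # return only host names instead of the full host representation
    curl -s -k -u admin:changeme "https://satellite.example.com/api/v2/hosts?thin=true&per_page=100"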

    The bugs fixed in 6.2.12 include:

    • 6.2.11 introduced a new dependency on katello-agent which was not represented in the RPM package. The dependencies now resolve correctly. (BZ#1482189)
    • Several improvements have been made to synchronization of content to Capsules. (BZ#1370302, BZ#1432649, BZ#1448777)
    • Specific customers encountered an error where synchronization plans stopped updating their status in the web UI. This communication path has been hardened. (BZ#1466919)
    • Remote execution will no longer execute the same job multiple times if the host has many NICs or if the host belongs to many host groups. (BZ#1480235, BZ#1465628)
    • Scheduling a recurring job would create duplicate subtasks. This behavior has been fixed, and there are no duplicates. (BZ#1478886)
    • Large SCAP reports would not upload because of a low HTTP timeout. The timeout has been increased to allow the reports to upload. (BZ#1360452)
    • The performance of the package page has been improved for customers who have large package downloads. (BZ#1419139)
    • Satellite 6 was not able to connect to Red Hat Virtualization 4.1 instances. This platform can now be used as a Compute Resource. (BZ#1463264, BZ#1479954)
    • Package filters were not respecting the package release version correctly. This field is now handled correctly. (BZ#1395642)
    • Satellite 6 now supports uploading RPMs whose headers are not UTF-8 compliant. (BZ#1333110)
    • Regenerating certificates now regenerates Ueber Certificates as well. (BZ#1434948)
    • The handling of Lifecycle permissions has been improved. (BZ#1317837)
    • Manual registration was not recording who performed the registration if no Activation Key was used. (BZ#1380117)

    [1] https://access.redhat.com/errata/RHBA-2017:2803
    [2] https://access.redhat.com/errata/RHBA-2017:2804
    [3] https://access.redhat.com/documentation/en/red-hat-satellite/6.2/paged/installation-guide/chapter-6-upgrading-satellite-server-and-capsule-server

    Posted: 2017-09-26T09:06:52+00:00
  • Red Hat Satellite and Red Hat Virtualization Cloud-init integration

    Authored by: Ivan Necas

    As part of the upcoming 6.2.12 release, we are adding additional support for cloud-init provisioning using the Red Hat Enterprise Virtualization (RHEV/RHV) provider.

    The cloud-init tool allows configuring the provisioned virtual machine via a configuration that is passed to the VM through the virtualization platform (RHV in this case).

    The advantage of this approach is that it does not require any special configuration on the network (such as managed DHCP and TFTP) in order to finish the installation of the virtual machine, nor does it require Satellite to actively connect to the provisioned machine via SSH to run the finish script.

    It is also faster than network-based provisioning, as we are using an image with a pre-installed operating system.

    VM Template Preparation

    There are two ways to get a cloud-init-ready image into our RHV instance.

    The first one is to take the KVM base image from our portal and import it into RHV. The image should already have cloud-init installed.

    The second option is building the template from scratch. For that, we will use a standard server installation of RHEL 7 as a base.

    We install cloud-init from the rhel-7-server-rpms repository:

    yum install -y cloud-init
    

    Next, we will do some basic cloud-init configuration. By default, cloud-init tries to load its configuration from external sources. While this is used with some cloud providers, in the case of RHEV the data is passed to the VM via a mounted drive, and the additional data sources only cause unnecessary delays. Therefore, we will set the following in /etc/cloud/cloud.cfg:

    datasource_list: ["NoCloud", "ConfigDrive"]
    

    This should be all the configuration we need. As the final step, I recommend following the usual steps for making a clean VM to use as a template or clone, so that the newly created VMs start from scratch.
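
    A minimal cleanup along these lines, run inside the VM right before shutting it down and creating the template, might look like this (a sketch only; adjust to your environment):

    # remove caches, SSH host keys, cloud-init state and the machine ID so clones start fresh
    yum clean all
    rm -f /etc/ssh/ssh_host_*
    rm -rf /var/lib/cloud/instances/*
    truncate -s 0 /etc/machine-id
    rm -f /etc/udev/rules.d/70-persistent-net.rules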

    Preparing Satellite for the cloud-init provisioning

    We assume that the Satellite was already configured to provision hosts via RHV, either using network based provisioning or finish scripts via SSH.

    First of all, we need to add the newly created image to the Satellite.

    In Satellite, go to Infrastructure -> Compute Resources -> Your RHV resource, and in the Images tab click the New Image button. Fill in the necessary details. Make sure to check the "User Data" checkbox: this way Satellite will know to use the cloud-init template when provisioning the VM.

    Next, we will create a provisioning template that will generate the cloud-init configuration. Currently, the list of configuration options for cloud-init is limited. However, even with this subset of commands, it is possible to do just enough to finish the configuration. Go to Hosts -> Provisioning Templates and create a new template:

    <%#
    kind: user_data
    name: My Satellite RHV Cloud-init
    -%>
    #cloud-config
    hostname: <%= @host.shortname %>
    
    <%# Allow user to specify additional SSH key as host parameter -%>
    <% if @host.params['sshkey'].present? || @host.params['remote_execution_ssh_keys'].present? -%>
    ssh_authorized_keys:
    <% if @host.params['sshkey'].present? -%>
      - <%= @host.params['sshkey'] %>
    <% end -%>
    <% if @host.params['remote_execution_ssh_keys'].present? -%>
    <% @host.params['remote_execution_ssh_keys'].each do |key| -%>
      - <%= key %>
    <% end -%>
    <% end -%>
    <% end -%>
    runcmd:
      - |
        #!/bin/bash
    <%= indent 4 do
        snippet 'subscription_manager_registration'
    end %>
    <% if @host.info['parameters']['realm'] && @host.realm && @host.realm.realm_type == 'Red Hat Identity Management' -%>
      <%= indent 4 do
        snippet 'idm_register'
      end %>
    <% end -%>
    <% unless @host.operatingsystem.atomic? -%>
        # update all the base packages from the updates repository
        yum -t -y -e 0 update
    <% end -%>
    <%
        # safemode renderer does not support unary negation
        non_atomic = @host.operatingsystem.atomic? ? false : true
        pm_set = @host.puppetmaster.empty? ? false : true
        puppet_enabled = non_atomic && (pm_set || @host.params['force-puppet'])
    %>
    <% if puppet_enabled %>
        yum install -y puppet
        cat > /etc/puppet/puppet.conf << EOF
      <%= indent 4 do
        snippet 'puppet.conf'
      end %>
        EOF
        # Setup puppet to run on system reboot
        /sbin/chkconfig --level 345 puppet on
    
        /usr/bin/puppet agent --config /etc/puppet/puppet.conf --onetime --tags no_such_tag <%= @host.puppetmaster.blank? ? '' : "--server #{@host.puppetmaster}" %> --no-daemonize
        /sbin/service puppet start
    <% end -%>
    phone_home:
     url: <%= foreman_url('built') %>
     post: []
     tries: 10
    

    This is an equivalent of the "Satellite Kickstart Default" template written using cloud-init modules. In the Type tab, select User data template, and in the Association tab, add this template to the operating systems you want to assign it to. Don't forget to also go to the details of the operating system (Hosts -> Operating Systems -> Your Selected OS) and select the newly created template as the User data template.

    This should be enough configuration to start provisioning hosts using cloud-init.

    Next steps

    As mentioned, there is still some work to be done to make a wider range of cloud-init modules available, and we are working in the upstream projects to make that happen. However, I hope that even this small addition will make your life and provisioning easier.

    Posted: 2017-08-25T16:00:30+00:00
  • Performing DHCP kexec on discovered hosts

    Authored by: Lukas Zapletal

    Satellite 6.2 introduced PXE-less discovery, which is targeted at networks without PXE or DHCP services available. In this workflow, the kernel on discovered nodes is replaced (via the kexec technology) instead of rebooting. This turns out to be a useful feature on PXE/DHCP networks as well.

    To configure kexec on a PXE/DHCP-enabled network, follow these simple steps.

    Step 1: Verify foreman discovery image version

    A newer version of foreman-discovery-image must be used in order to send the required "discovery_kexec" fact. We are planning a rebase of the foreman-discovery-image package in an upcoming Satellite 6.2 erratum. In the meantime, it is possible to download the upstream version, which is based on CentOS 7 rather than Red Hat Enterprise Linux. The 3.4 series is known to work with Satellite 6.2.

    tftp_capsule# cd /var/lib/tftpboot/boot/
    tftp_capsule# wget http://downloads.theforeman.org/discovery/releases/3.4/fdi-image-3.4.1.tar
    tftp_capsule# tar -xvf fdi-image-3.4.1.tar
    

    Step 2: Configure foreman discovery image kernel options

    Satellite will either reboot or kexec a discovered node based on the presence of a special fact called "discovery_kexec". When the fact is present and a KExec template is associated with the given Hostgroup and Operating System, kexec is performed. Reboot is the fallback mechanism if the two requirements are not met.

    In Hosts -> Provisioning templates, search for the string "global" and edit the "PXELinux global default" template. In Satellite 6.2 it is possible to edit this template directly; in the upcoming version, this template must be cloned and the Global setting changed to use the cloned template instead of the default one.

    ONTIMEOUT discovery_upstream
    ...
    LABEL discovery_upstream
      MENU LABEL Foreman Discovery Image 3.4.1
      KERNEL boot/fdi-image/vmlinuz0
      APPEND initrd=boot/fdi-image/initrd0.img rootflags=loop root=live:/fdi.iso rootfstype=auto ro rd.live.image acpi=force rd.luks=0 rd.md=0 rd.dm=0 rd.lvm=0 rd.bootif=0 rd.neednet=0 nomodeset proxy.url=https://CAPSULE_URL:9090 proxy.type=proxy fdi.pxfactname1=discovery_kexec fdi.pxfactvalue1=1
      IPAPPEND 2
    

    Change proxy.url and proxy.type accordingly to point to either the Satellite Server or a Capsule. Note the options fdi.pxfactname1=discovery_kexec and fdi.pxfactvalue1=1, which are required in order to have the "discovery_kexec" fact sent along.

    Now click Build PXE Default to rebuild the pxelinux.cfg/default template. Hosts can now be discovered.

    Step 3: Modify Red Hat KExec template

    Find the Red Hat Kexec template, clone it, associate it with the required Operating System, and edit its contents:

    <%#
    kind: kexec
    name: Red Hat kexec
    oses:
    - CentOS 4
    - CentOS 5
    - CentOS 6
    - CentOS 7
    - Fedora 19
    - Fedora 20
    - Fedora 21
    - Fedora 22
    - RedHat 4
    - RedHat 5
    - RedHat 6
    - RedHat 7
    -%>
    <%
      mac = @host.facts['discovery_bootif']
      bootif = '00-' + mac.gsub(':', '-') if mac
      append = @host.facts['append']
    -%>
    {
      "kernel": "<%= @kernel %>",
      "initram": "<%= @initrd %>",
    <% if (@host.operatingsystem.name == 'Fedora' and @host.operatingsystem.major.to_i > 16) or
        (@host.operatingsystem.name != 'Fedora' and @host.operatingsystem.major.to_i >= 7) -%>
      "append": "ks=<%= foreman_url('provision') %> inst.ks.sendmac <%= "ksdevice=bootif BOOTIF=#{bootif} #{append}" %>"
    <% else -%>
      "append": "ks=<%= foreman_url('provision') %> kssendmac <%= "BOOTIF=#{bootif} #{append}" %>"
    <% end -%>
    }
    

    The template was designed for DHCP-less environments and assumes a static IP configuration. Since the "discovery_*" facts are not present in this workflow, the default template generates an "ip=::::::none" Anaconda configuration, which leads to System halted. The modified version does not provide a static configuration, so Anaconda performs a DHCP request.

    Make sure to set this template as the default one for the required Operating Systems.

    Step 4: Verify requirements and provision

    Before provisioning the very first discovered node, open its detail page, expand all facts, and search for the "discovery_kexec" fact; it should be set to "1". Also make sure the Red Hat Kexec template is associated with the Operating System.

    Discovered hosts can now be provisioned or auto-provisioned, and kexec will be performed instead of a reboot.

    Troubleshooting

    On some hardware or VMs, kexec will freeze when the discovered node is on the first virtual console (tty1) with the text user interface.

    To work around this issue, switch the virtual terminal to tty2 (Alt+F2) before kexecing.

    Posted: 2017-08-11T08:00:00+00:00
  • Satellite 6.2.11 is released

    Authored by: Rich Jerrido

    Satellite 6.2.11 has been released today. 6.2.11 introduces many fixes in the messaging infrastructure of Satellite 6. There is one erratum for the server [1] and one for the hosts [2]. The install ISOs will be updated next week at the earliest.

    Customers who have already upgraded to 6.2 should follow the instructions in the errata. Customers who are on 6.1.x should follow the upgrade instructions at [3].

    PLEASE NOTE: Customers who have received hotfixes should verify the list below to ensure their hotfix is contained in the release before upgrading. Their hotfixes may be overwritten if they upgrade. Please reach out to the Satellite team if you are unsure.

    The bugs resolved in 6.2.11 include:

    • Updated QPID packages have been included to address performance and memory issues. (BZ#1367735, BZ#1443470, BZ#1431783, BZ#1449743, BZ#1468450, BZ#1368718)

    • Users can now manage a system without the use of gofer. (BZ#1445403)

    • Several search bugs have been resolved. (BZ#1392422, BZ#1462350)

    • The RBAC system has been updated to unlock behaviors for non-admin users and to suppress database errors. (BZ#1246865, BZ#1427282)

    • Manifest refreshes were not respecting proxy settings on the host. These settings are now respected. (BZ#1464881)

    • A batch job has been introduced to clean up repository metadata on Capsules. (BZ#1425691)

    • An error message from foreman-rake was causing confusion. The message has now been cleaned up. (BZ#1429418)

    • Deleting an interface was deleting the entire host record. The code will now correctly only delete the interface. (BZ#1285669)

    • Bulk Capsule syncs were causing a race condition which resulted in failed syncs. This case is now handled correctly. (BZ#1391298, BZ#1443213)

    • This release contains various memory and performance improvements. (BZ#1434040, BZ#1458857, BZ#1437150)

    • Users can now browse to /pub on Capsule Servers over HTTPS. (BZ#1432580)

    • Certain trends would show a database error. The query behind this trend has been resolved. (BZ#1375000)

    Please reach out with any questions or concerns.

    [1] https://access.redhat.com/errata/RHBA-2017:2466
    [2] https://access.redhat.com/errata/RHBA-2017:2467
    [3] https://access.redhat.com/documentation/en/red-hat-satellite/6.2/paged/installation-guide/chapter-6-upgrading-satellite-server-and-capsule-server

    Posted: 2017-08-10T20:57:13+00:00
  • Dealing with many network interfaces during host check-ins

    Authored by: Lukas Zapletal

    Satellite 6 comes with powerful host importing capabilities as part of its inventory feature. When a host checks in via Puppet or subscription-manager, all incoming data, which we call "facts", is parsed. This mechanism is called "fact import".

    By default, Satellite 6 extracts networking information such as NICs and MAC and IP addresses, making the necessary changes to reflect the new state in the inventory database. When the IP address of a registered host changes, for example, the same change is applied to the Satellite 6 database during fact import.

    This can be a problem for hosts with frequently changing interfaces, typically virtualization hypervisors or container hosts. The default behavior in Satellite 6 is safe: new interfaces are added, but missing interfaces are never removed. This stems from Puppet behavior, where disabled interfaces are not reported via facter, which could lead to mis-deletions in the Satellite 6 inventory.

    With these workloads, Satellite 6 will keep adding new network interfaces to hosts indefinitely, leading to slow host check-ins for both Puppet and subscription-manager. We have seen hosts with thousands of invalid network interface records in their inventory data. There are two configuration options to solve this situation.

    First, fact import for NICs can be completely disabled via the Ignore Puppet facts for provisioning global setting (see the sketch below). When this option is turned on, the IP or MAC address of an existing host is never updated automatically. Although the name of this setting implies it only affects Puppet, it affects the subscription-manager import code as well. We will rename this option in the future to match its real meaning.
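
    The setting can be changed in the web UI under Administer -> Settings, or from the command line (a sketch; the internal setting name ignore_puppet_facts_for_provisioning is an assumption, verify it on your version before relying on it):

    # disable automatic NIC/IP/MAC updates from incoming facts
    hammer settings set --name ignore_puppet_facts_for_provisioning --value true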

    This essentially turns off Puppet fact parsing completely, which cannot be used when hosts are registered via Puppet and their network interfaces are needed, for example for remote execution of scripts. For this case, there is an alternative method: filter out some interfaces from being added or updated in the Satellite 6 inventory via the Ignore interfaces with matching identifier global option. By default it is set to:

    'lo', 'usb*', 'vnet*', 'macvtap*', '_vdsmdummy_'
    

    For example, to filter out docker network interfaces, 'veth*' would be added to the list (see the sketch below). Interface naming conventions differ between virtualization and container technologies such as libvirt, vdsm, xen or lxc, but there is usually a common prefix or suffix that can be easily matched using wildcard syntax. Note that the syntax is not a regular expression, but a simple wildcard.
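
    For a docker host, the extended value of the setting could then look like this (a sketch of the setting value only):

    'lo', 'usb*', 'vnet*', 'macvtap*', '_vdsmdummy_', 'veth*'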

    Satellite 6.2.9 introduced two new settings, Ignore facts for operating system and Ignore facts for subnet, which work in a similar way but are not related to network interfaces.

    Posted: 2017-08-09T08:00:00+00:00
  • How to share custom repositories across organizations

    Authored by: Lukas Zapletal

    Satellite 6 is strictly a multi-tenant application, meaning that every organization gets its own subscription manifest and must select the appropriate repositories and sync them. Although the design of Satellite 6 ensures that every single RPM package is downloaded only once across all organizations, syncing metadata and publishing and promoting content within many organizations can be time consuming for some specific use cases. The following will work with Satellite 6.2 or newer.

    One use case is a custom Yum repository with EPEL. This type of content can be easily shared across organizations when there is a need to use the latest and greatest EPEL content synced on a daily basis, for example. In this case, the recommended way is to create a special organization, which we will refer to as "SharedContent". It is recommended to give it a unique and short label, as it will be present in all repository URLs; a good candidate would be "shared".

    Under the SharedContent organization, a new Custom Product and Repository can be created, say an EPEL product with an EPEL7 repository. During creation, the Publish via HTTP option must be checked, or it can be enabled later on the Repository detail page. For a repository as large as EPEL, it is recommended to use the on-demand download policy and also to customize the Satellite 6 squid daemon configuration and cache volume size to handle a large cache.
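
    Cache tuning is done in the standard squid configuration; a sketch, assuming roughly 10 GB of cache is acceptable (the cache directory and size are placeholders to adjust to your available disk space):

    # /etc/squid/squid.conf
    cache_dir aufs /var/spool/squid 10000 16 256
    maximum_object_size 1 GB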

    After the initial synchronization, which usually takes longer, the key thing is to find out the URL where the content is being published. This can be found on the Product -> Repository detail page in the Published At field. It will be something like:

    http://sat6.host.lan/pulp/repos/shared/Library/custom/EPEL/EPEL7/
    

    The same content is also available over HTTPS, but to access content via SSL a client certificate must be provided. One is available on the Organization detail page and is called the Debug Certificate. It can be used to consume the shared content if plain HTTP is not preferred.
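
    With the certificate and key extracted from the Debug Certificate, access over SSL can be verified, for example (a sketch; file names and the hostname are placeholders, and -k or --cacert may be needed if the Satellite CA is not trusted by the client):

    # check that the shared repository metadata is reachable over HTTPS
    curl --cert ./shared-cert.pem --key ./shared-key.pem \
      https://sat6.host.lan/pulp/repos/shared/Library/custom/EPEL/EPEL7/repodata/repomd.xml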

    To consume the EPEL content, provisioning templates or Puppet manifests need to be updated to use the custom repository. As an example, here is a snippet for an Anaconda RHEL kickstart. It is recommended to clone the original template, create a custom one, associate it with all Operating Systems and set it as the default:

    ...
    repo --name=epel --baseurl=http://sat6.host.lan/pulp/repos/shared/Library/custom/EPEL/EPEL7/
    ...
    

    For the HTTPS URL, make sure to use the --noverifyssl option, or use a %post scriptlet to deploy the server certificate and yum configuration after installation when SSL server certificate verification is a must.
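
    On hosts that are already provisioned, the same repository can be consumed with a plain yum configuration, for example (a sketch; the baseurl mirrors the Published At value shown above, and gpgcheck is disabled only for brevity):

    # /etc/yum.repos.d/epel-shared.repo
    [epel-shared]
    name=EPEL 7 shared via Satellite
    baseurl=http://sat6.host.lan/pulp/repos/shared/Library/custom/EPEL/EPEL7/
    enabled=1
    gpgcheck=0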

    Once everything is tested, a Sync Plan can be created. It is also possible to create Lifecycle Environments or Content Views and leverage the content promotion capabilities in the shared organization. That's how any custom repository can be easily shared.

    Posted: 2017-08-07T06:00:00+00:00
  • Satellite 6.2.10 is released

    Authored by: Rich Jerrido

    Satellite 6.2.10 has been released today. 6.2.10 introduces many fixes based on customer cases and feedback. There is one erratum for the server [1] and one for the hosts [2]. The install ISOs will be updated next week at the earliest.

    Customers who have already upgraded to 6.2 should follow the instructions in the errata. Customers who are on 6.1.x should follow the upgrade instructions at [3].

    PLEASE NOTE: Customers who have received hotfixes should verify the list below to ensure their hotfix is contained in the release before upgrading. Their hotfixes may be overwritten if they upgrade. Please reach out to the Satellite team if you are unsure.

    The bugs fixed in 6.2.10 include:

    • The manifest refresh process has been improved with several bug fixes and enhancements, such as subscription refreshes and a confirmation dialog. (BZ#1455154, BZ#1455199, BZ#1393580, BZ#1436242, BZ#1266827)
    • The upgrade process will now execute foreman-rake katello:correct_repositories for the user. (BZ#1446713, BZ#1365223, BZ#1375075, BZ#1425437)
    • The subscription status is now consistent across the web UI and CLI. (BZ#1371605)
    • Memory leaks in OpenSCAP and the tasks plug-in have been resolved. (BZ#1432263, BZ#1412307)
    • Updating a content set associated to a pool with very large numbers of entitlements caused Candlepin to run out of memory. Consequently, updating a custom product with a large number of hosts took a long time. This bug has been fixed and Candlepin performance is now improved in the scenario described. (BZ#1435809)
    • Previously, when applying two errata to one thousand hosts the Gofer Agent daemon (goferd) sometimes terminated unexpectedly with a segmentation fault. The code has been improved and goferd no longer fails in the scenario described. (BZ#1418557)
    • Several fixes have been made to remote execution. (BZ#1387658, BZ#1417978, BZ#1422690, BZ#1449830, BZ#1413966, BZ#1390931, BZ#1438601)
    • Capsule synchronization was failing after upgrade for certain library configurations. These cases have been fixed. (BZ#1456446, BZ#1456559)

    [1] https://access.redhat.com/errata/RHBA-2017:1553
    [2] https://access.redhat.com/errata/RHBA-2017:1554
    [3] https://access.redhat.com/documentation/en/red-hat-satellite/6.2/paged/installation-guide/chapter-6-upgrading-satellite-server-and-capsule-server

    Posted: 2017-06-20T19:13:25+00:00
  • Satellite 5.8 cdn-sync Performance

    Authored by: Grant Gainey

    Introduction

    Red Hat Satellite 5.8.0 comes with a new way to synchronize channel content from Red Hat: cdn-sync. Unlike satellite-sync, which synchronizes content (i.e. channels, packages, errata, and kickstart trees) from RHN Classic servers, cdn-sync retrieves content from Red Hat's CDN servers (the same source used by systems registered via subscription-manager). The cdn-sync command attempts to keep option parity with satellite-sync where it makes sense to do so.

    Although Satellite 5.8.0 itself (when running in connected mode) now registers to Red Hat using subscription-manager, clients registered to Satellite continue to use the RHN Classic tooling (e.g. rhn-register).

    While the content-sources differ between satellite-sync and cdn-sync, the goal is to end up with the same results in the Satellite doing the synchronizing. Red Hat is aware of some discrepancies between the two content-sources, and is working on making them consistent. Also, as content is structured differently on CDN, the cdn-sync-mapping package was added, which maps content locations (i.e. repositories with packages or kickstart trees) on CDN to matching channels on Satellites.

    Performance Comparison

    Tests were run concurrently on the same hardware (4 x Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz, 8 GB RAM) located in the same lab. The operating system was a fully updated Red Hat Enterprise Linux 6 Server.

    The table below shows the results from a comparison of sync times for the "Red Hat Enterprise Linux Server (v. 7 for 64-bit x86_64)" repository. The comparison is between the following commands:

    • 5.8.0 Beta (build 20170330) and the 5.8.0 Release Candidate (build 20170515): cdn-sync -c rhel-x86_64-server-7
    • 5.6.0 and 5.7.0: satellite-sync -c rhel-x86_64-server-7

    Satellite version              Time of cdn-sync/satellite-sync -c rhel-x86_64-server-7
    Red Hat Satellite 5.6.0        13 hours, 8 minutes, 9 seconds
    Red Hat Satellite 5.7.0        13 hours, 8 minutes, 10 seconds
    Red Hat Satellite 5.8.0 Beta   3 hours, 20 minutes, 28 seconds
    Red Hat Satellite 5.8.0 RC     1 hour, 51 minutes, 7 seconds

    (On the 5.8.0 Beta, the sync was run without the 7.3 kickstart tree, due to a beta bug.)

    Conclusion

    The initial sync of a channel takes far less time on Red Hat Satellite 5.8.0 than on 5.7.0 or 5.6.0 (around two hours instead of thirteen, assuming a fast connection to cdn.redhat.com). This is primarily the result of the switch to the CDN-based syncing tool cdn-sync, which retrieves content from the content delivery network, a source that should be closer to your Satellite Server.

    Posted: 2017-06-19T19:47:38+00:00
  • Satellite 5.8 is released

    Authored by: Rich Jerrido

    Red Hat Satellite 5.8 now generally available

    Today, Red Hat is pleased to announce the general availability of Red Hat Satellite 5.8, the last minor release of the Satellite 5 product line. Red Hat Satellite 5.8 builds upon 10 years of enterprise-proven successes, offering a complete lifecycle management solution to help keep Red Hat infrastructure running efficiently and with greater security, helping to reduce costs and overall complexity. Red Hat Satellite 5.8 is now available to all customers with an active Satellite subscription.

    Red Hat Satellite 5.8 introduces several new features, enhancements and programs, including:

    • Increased speed with channel install and content syncing. For the first time in Satellite 5, customers can now register, activate and update the Satellite server from the Customer Portal, as well as synchronize content via the Red Hat Content Delivery Network. Read more about the improvements in content synchronization in this blog post.
    • Improved diagnostics of background tasks and jobs. Red Hat Satellite 5.8 introduces the Taskotop utility, which monitors Taskomatic activity and provides insight into the status of jobs; background tasks can now be run individually or in bulk.
    • Updated support of Oracle DB and PostgreSQL. Red Hat Satellite 5.8 offers expanded support for two additional databases -- External Oracle Database 12c and Embedded/Managed PostgreSQL 9.5 DB.
    • Extended lifecycle support beginning in 2019. Satellite 5.8 is the only minor release of the Satellite 5 product line to offer an Extended Lifecycle Support option beginning in early 2019.

    Content Delivery Network Sync process:

    The CDN sync process used by Satellite 5.8 is configured to work with products that are available in Satellite 5 today and are not at their end-of-life (EOL). All currently mirrored content will be available after upgrading to Satellite 5.8, but after the upgrade you will not be able to re-sync content that is at or beyond its EOL. If you are planning a new install of 5.8 rather than an upgrade, and need to be able to sync this content, contact your account team to determine the best practices for your situation.

    For more information on Satellite 5.8:

    Posted: 2017-06-19T19:35:28+00:00