5.4 Release Notes

Red Hat Enterprise Linux 5

Release Notes for all architectures.

Red Hat Inc.

Abstract

This document details the Release Notes for Red Hat Enterprise Linux 5.4.
This document contains the Release Notes for the Red Hat Enterprise Linux 5.4 family of products including:
  • Red Hat Enterprise Linux 5 Advanced Platform for x86, AMD64/Intel® 64, Itanium Processor Family, System p and System z
  • Red Hat Enterprise Linux 5 Server for x86, AMD64/Intel® 64, Itanium Processor Family, System p and System z
  • Red Hat Enterprise Linux 5 Desktop for x86 and AMD64/Intel® 64
The Release Notes provide high-level coverage of the improvements and additions that have been implemented in Red Hat Enterprise Linux 5.4. For detailed documentation on all changes to Red Hat Enterprise Linux for the 5.4 update, refer to the Technical Notes.

1. Virtualization Updates

Red Hat Enterprise Linux 5.4 now includes full support for the Kernel-based Virtual Machine (KVM) hypervisor on the x86_64 architecture. KVM is integrated into the Linux kernel, providing a virtualization platform that takes advantage of the stability, features, and hardware support inherent in Red Hat Enterprise Linux. Virtualization using the KVM hypervisor supports a wide variety of guest operating systems, including:
  • Red Hat Enterprise Linux 3
  • Red Hat Enterprise Linux 4
  • Red Hat Enterprise Linux 5
  • Windows XP
  • Windows Server 2003
  • Windows Server 2008

Important

Xen-based virtualization remains fully supported. However, Xen-based virtualization requires a different version of the kernel to function; the KVM hypervisor can only be used with the regular (non-Xen) kernel.

Warning

While Xen and KVM may be installed on the same system, their default networking configurations differ. Users are strongly recommended to install only one hypervisor on a system.

Note

Xen is the default hypervisor shipped with Red Hat Enterprise Linux. As such, all configuration defaults are tailored for use with the Xen hypervisor. For details on configuring a system for KVM, refer to the Virtualization Guide.
Virtualization using KVM allows both 32-bit and 64-bit guest operating systems to be run without modification. Paravirtualized disk and network drivers have also been included in Red Hat Enterprise Linux 5.4 for enhanced I/O performance. All the libvirt-based tools (virsh, virt-install, and virt-manager) have also been updated with added support for KVM.
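As a sketch of the updated tooling, a KVM guest might be provisioned and managed as follows. The guest name, memory size, and image paths are placeholders, and the exact virt-install options available can vary between versions; consult the Virtualization Guide for the supported invocation.

```shell
# Provision a new KVM guest (all names, sizes, and paths are examples):
virt-install \
    --name rhel5-guest \
    --ram 1024 \
    --file /var/lib/libvirt/images/rhel5-guest.img \
    --file-size 8 \
    --accelerate \
    --cdrom /var/lib/libvirt/images/rhel5-server.iso

# Basic lifecycle management with virsh:
virsh list --all            # show all defined guests
virsh start rhel5-guest     # boot the guest
virsh shutdown rhel5-guest  # request a clean shutdown
```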
USB passthrough with the KVM hypervisor is considered to be a Technology Preview for the 5.4 release.
With the resolution of various issues such as save/restore, live migration, and core dumps, Xen-based 32-bit paravirtualized guests on x86_64 hosts are no longer classed as a Technology Preview, and are fully supported on Red Hat Enterprise Linux 5.4.
The etherboot package has been added in this update, providing the capability to boot guest virtual machines using the Preboot eXecution Environment (PXE). PXE booting occurs before the operating system is loaded, and the operating system may have no knowledge that it was booted through PXE. Support for etherboot is limited to use in the KVM context.
The qspice packages have been added to Red Hat Enterprise Linux 5.4 to support the spice protocol in qemu-kvm based virtual machines. qspice contains client, server, and web browser plugin components. Only the qspice server, supplied by the qspice-libs package and used in conjunction with qemu-kvm, is fully supported; the qspice client (supplied by the qspice package) and the qspice Mozilla plugin (supplied by the qspice-mozilla package) are included as Technology Previews. Note that in Red Hat Enterprise Linux 5.4 there is no libvirt support for the spice protocol; the only supported use of spice is through the Red Hat Enterprise Virtualization product.

Important

The virtio-win component is only available via the Red Hat Network, and is not included on the physical Supplementary CD for Red Hat Enterprise Linux 5.4. For more information, see the Red Hat Knowledgebase.

2. Clustering Updates

Clusters are multiple computers (nodes) working in concert to increase the reliability, scalability, and availability of critical production services.
All updates to clustering in Red Hat Enterprise Linux 5.4 are detailed in the Technical Notes. Further information on clustering in Red Hat Enterprise Linux is available in the Cluster Suite Overview and the Cluster Administration documents.
Cluster Suite tools have been upgraded to support automatic hypervisor detection. However, running Cluster Suite in conjunction with the KVM hypervisor is considered a Technology Preview.
OpenAIS now provides broadcast network communication in addition to multicast. This functionality is considered a Technology Preview, both for standalone usage of OpenAIS and for usage with Cluster Suite. Note, however, that broadcast support is not integrated into the cluster management tools and must be configured manually.
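Such a manual configuration might resemble the following fragment of the totem section of the OpenAIS configuration file; the bindnetaddr value is a placeholder, and the directive names follow upstream OpenAIS conventions rather than anything documented in these notes.

```
totem {
    version: 2
    interface {
        ringnumber: 0
        # Network to bind to (placeholder value):
        bindnetaddr: 192.168.1.0
        # Use broadcast instead of multicast (assumed directive):
        broadcast: yes
    }
}
```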

Note

SELinux in Enforcing mode is not supported with Cluster Suite; Permissive or Disabled modes must be used. Using Cluster Suite on bare-metal PPC systems is not supported. Running Cluster Suite in guests on VMware ESX hosts using fence_vmware is considered a Technology Preview. Running Cluster Suite in guests on VMware ESX hosts that are managed by Virtual Center is not supported.
Mixed-architecture clusters using Cluster Suite are not supported: all nodes in the cluster must be of the same architecture. For the purposes of Cluster Suite, x86_64, x86, and ia64 are considered to be the same architecture, so running clusters with combinations of these architectures is supported.

2.1. Fencing Improvements

Fencing is the disconnection of a node from the cluster's shared storage. Fencing cuts off I/O from shared storage, thus ensuring data integrity.
In Red Hat Enterprise Linux 5.4, fencing support on Power Systems has been added, as a Technology Preview, for IBM Logical Partition (LPAR) instances that are managed using the Hardware Management Console (HMC) (BZ#485700). Fencing support has also been added, as a Technology Preview, for Cisco MDS 9124 and Cisco MDS 9134 Multilayer Fabric Switches (BZ#480836).
The fence_virsh fence agent is provided in this release of Red Hat Enterprise Linux as a Technology Preview. fence_virsh provides the ability for one guest (running as a domU) to fence another using the libvirt protocol. However, as fence_virsh is not integrated with Cluster Suite, it is not supported as a fence agent in that environment.
The fence_scsi man page has been updated, detailing the following limitations:
The fence_scsi fencing agent requires a minimum of three nodes in the cluster to operate. For FC-connected SAN devices, these must be physical nodes; SAN devices connected via iSCSI may use virtual or physical nodes. In addition, fence_scsi cannot be used in conjunction with qdisk.
Additionally, new articles on fencing have been published on the Red Hat Knowledgebase.

3. Networking Updates

With this update, Generic Receive Offload (GRO) support has been implemented in both the kernel and the userspace application ethtool (BZ#499347). The GRO system increases the performance of inbound network connections by reducing the amount of processing done by the Central Processing Unit (CPU). GRO implements the same technique as the Large Receive Offload (LRO) system, but can be applied to a wider range of transport layer protocols. GRO support has also been added to several network device drivers, including the igb driver for Intel® Gigabit Ethernet Adapters and the ixgbe driver for Intel® 10 Gigabit PCI Express network devices.
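On drivers that support it, the GRO setting can be inspected and toggled through ethtool's offload options; eth0 below is a placeholder interface name, and changing the setting requires root privileges.

```shell
# Show the current offload settings, including generic-receive-offload:
ethtool -k eth0

# Enable or disable GRO on the interface:
ethtool -K eth0 gro on
ethtool -K eth0 gro off
```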
The Netfilter framework (the portion of the kernel responsible for network packet filtering) has been updated with added support for Differentiated Services Code Point (DSCP) values.
The bind (Berkeley Internet Name Domain) package provides an implementation of the DNS (Domain Name System) protocols. Previously, bind did not offer a mechanism to easily distinguish between requests that will receive authoritative and non-authoritative replies. Consequently, an incorrectly configured server may have replied to requests that should have been denied. With this update, bind provides the new option allow-query-cache, which controls access to non-authoritative data on a server (for example, cached recursive results and root hints). (BZ#483708)
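A minimal named.conf sketch using the new option might look as follows; the address match lists are illustrative only.

```
options {
    // Any client may query authoritative zone data ...
    allow-query       { any; };
    // ... but only local clients may query cached, non-authoritative data:
    allow-query-cache { localhost; localnets; };
};
```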

4. Filesystems and Storage updates

In the 5.4 update, several significant additions have been made to file system support. Base Red Hat Enterprise Linux now includes the Filesystem in Userspace (FUSE) kernel modules and userspace utilities, allowing users to install and run their own FUSE file systems on an unmodified Red Hat Enterprise Linux kernel (BZ#457975). Support for the XFS file system has also been added to the kernel for future product enablement (BZ#470845). The FIEMAP input/output control (ioctl) interface has been implemented, allowing the physical layout of files to be mapped efficiently. The FIEMAP ioctl can be used by applications to check for fragmentation of a specific file or to create an optimized copy of a sparsely allocated file (BZ#296951).
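One existing consumer of the FIEMAP ioctl is the filefrag utility from e2fsprogs, which can report the physical extents of a file; this sketch assumes a filefrag recent enough to use FIEMAP and a file system that supports the ioctl.

```shell
# Create a small test file, then map its physical extents:
dd if=/dev/zero of=/tmp/fiemap-demo bs=1M count=8
filefrag -v /tmp/fiemap-demo   # -v prints the per-extent mapping
rm -f /tmp/fiemap-demo
```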
Additionally, the Common Internet File System (CIFS) has been updated in the kernel (BZ#465143). The ext4 file system (included in Red Hat Enterprise Linux as a Technology Preview) has also been updated (BZ#485315).
In Red Hat Enterprise Linux 5.4, the use of Global File System 2 (GFS2) as a single-server file system (that is, not in a clustered environment) is deprecated. Users of GFS2 that do not need high-availability clustering are encouraged to consider migrating to other file systems, such as the ext3 or xfs offerings; the xfs file system is specifically targeted at very large file systems (16 TB and above). Existing users will continue to be supported.
The required semantics for NFS indicate that a process which completes a stat, write, stat sequence should see a different mtime (time of last modification) on the file in the results of the second stat call compared to the mtime in the results of the first stat call. File times in NFS are maintained strictly by the server, so the file mtime is not updated until the data has been transmitted to the server via the WRITE NFS protocol operation; simply copying data into the pagecache is not sufficient to cause the mtime to be updated. This is one place where NFS differs from local file systems. Therefore, an NFS file system that is under a heavy write workload may result in stat calls having a high latency. (BZ#469848)
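The stat, write, stat sequence can be illustrated on a local file system, where the mtime advances as soon as the write reaches the pagecache; over NFS, the second stat may not show a newer mtime until the data has reached the server. This minimal sketch uses GNU coreutils:

```shell
f=$(mktemp)              # scratch file on a local file system
t1=$(stat -c %Y "$f")    # first stat: mtime in seconds since the epoch
sleep 1                  # stat -c %Y has one-second granularity
echo "data" >> "$f"      # the write lands in the pagecache
t2=$(stat -c %Y "$f")    # second stat: already reports a newer mtime
test "$t2" -gt "$t1" && echo "mtime advanced"
rm -f "$f"
```

Run against an NFS mount under heavy write load, the same sequence is where the high-latency stat calls described above would be observed.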
The ext4 filesystem Technology Preview has been refreshed with updated userspace tools. Ext4 is an incremental improvement on the ext3 file system developed by Red Hat and the Linux community.

Note

In previous versions of Red Hat Enterprise Linux utilizing the ext4 Technology Preview, ext4 filesystems were labeled as ext4dev. With this update, ext4 filesystems are now tagged as ext4.
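In practical terms, file systems previously created and mounted with the ext4dev type now use the ext4 type; the device and mount point below are placeholders, and the mkfs command name assumes the updated e4fsprogs userspace tools.

```shell
# Previously (earlier 5.x Technology Preview):
#   mount -t ext4dev /dev/sdb1 /mnt/test
# With this update:
mkfs.ext4 /dev/sdb1
mount -t ext4 /dev/sdb1 /mnt/test
```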
With this update, the dmraid logwatch-based email reporting feature has been moved from the dmraid-events package into the new dmraid-events-logwatch package. Consequently, systems that use this dmraid feature need to complete the following manual procedure:
  1. Ensure the new dmraid-events-logwatch package is installed.
  2. Uncomment the functional portion of the /etc/cron.d/dmeventd-logwatch crontab file.
(BZ#512833)
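The two steps might look like the following; the exact contents of the crontab file are not documented here, so review the file before editing rather than assuming its layout.

```shell
# 1. Ensure the new package is installed:
yum install dmraid-events-logwatch

# 2. Uncomment the functional portion of the crontab file
#    (inspect it first; its exact contents may vary):
vi /etc/cron.d/dmeventd-logwatch
```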
samba3x and ctdb are provided as a Technology Preview on the x86_64 platform. The samba3x package provides Samba 3.3, and the ctdb package provides a clustered TDB back end. Running samba3x and ctdb on a set of cluster nodes with the GFS file system allows the export of a clustered CIFS file system.

Important

The samba3x packages conflict with the samba-3.0 packages shipped with Red Hat Enterprise Linux 5. To use the Technology Preview, it is recommended to perform a fresh installation that does not include the samba-3.0 packages, and then to install the samba3x packages from the Supplementary media.

5. Desktop Updates

5.1. Advanced Linux Sound Architecture

In Red Hat Enterprise Linux 5.4, the Advanced Linux Sound Architecture (ALSA) has been updated, providing enhanced support for High Definition Audio (HDA).

5.2. Graphics Drivers

The ati driver for ATI video devices has been updated.
The i810 and intel drivers for Intel integrated display devices have been updated.
The mga driver for Matrox video devices has been updated.
The nv driver for nVidia video devices has been updated.

5.3. Laptop Support

Previously, when undocking and docking some laptops with docking stations containing integrated CD/DVD drives, the drive would no longer be recognized. The system would need to be rebooted for the drive to be accessible. With this update, the ACPI docking drivers have been updated in the kernel, resolving this issue. (BZ#485181).

6. Tools Updates

Important

All the IBM Java components are available online only, due to the late detection of a missing COPYRIGHT notice. This applies to the Supplementary CD contents for Red Hat Enterprise Linux 5 on all architectures and releases. For more information, see the Red Hat Knowledgebase.
  • SystemTap is now fully supported, and has been re-based to the latest upstream version. This update features improved user-space probing through shared libraries, experimental DWARF unwinding, and a new <sys/sdt.h> header file which provides dtrace-compatible markers.
    This re-base also enhances support for debuginfo-less operations. Typecasting (through the @cast operator) is now supported, along with kernel tracepoint probing. Several 'kprobe.*' probe bugs that hampered debuginfo-less operations are also now resolved.
    SystemTap also features several documentation improvements. Man pages (in the new 3stap section) are now available for most SystemTap probes and functions. The systemtap-testsuite package also features a larger library of sample scripts.
    For more information about the SystemTap re-base, refer to the SystemTap section of the Package Updates chapter of the Technical Notes.
  • SystemTap tracepoints are placed in important sections of the kernel, allowing system administrators to analyze performance and debug portions of code. In Red Hat Enterprise Linux 5.4, tracepoints have been added to several sections of the kernel as a Technology Preview.
  • The GNU Compiler Collection version 4.4 (GCC 4.4) is now included in this release as a Technology Preview. This collection of compilers includes C, C++, and Fortran compilers along with support libraries.
  • New glibc malloc behaviour: upstream glibc has recently been changed to enable higher scalability across many sockets and cores. This is done by assigning threads their own memory pools and by avoiding locking in some situations. The amount of additional memory used for the memory pools (if any) can be controlled using the environment variables MALLOC_ARENA_TEST and MALLOC_ARENA_MAX.
    MALLOC_ARENA_TEST specifies that a test for the number of cores is performed once the number of memory pools reaches this value. MALLOC_ARENA_MAX sets the maximum number of memory pools used, regardless of the number of cores.
    The glibc in the Red Hat Enterprise Linux 5.4 release has this functionality integrated as a Technology Preview of the upstream malloc. To enable the per-thread memory pools, set the MALLOC_PER_THREAD environment variable. This environment variable will become obsolete when the new malloc behaviour becomes the default in future releases. Users experiencing contention for malloc resources could try enabling this option.
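As a sketch, the Technology Preview behaviour can be enabled per process from the shell; ./your-app is a placeholder binary and the arena value is illustrative only.

```shell
# Opt in to the per-thread memory pools (Technology Preview):
export MALLOC_PER_THREAD=1
# Optionally cap the number of memory pools regardless of core count:
export MALLOC_ARENA_MAX=8
# Run the threaded application under test with these settings:
./your-app
```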

7. Architecture Specific Support

7.1. i386

  • In a virtual environment, timekeeping for Red Hat Enterprise Linux 64-bit kernels can be problematic, since time is kept by counting timer interrupts. De- and re-scheduling the virtual machine can cause a delay in these interrupts, resulting in a timekeeping discrepancy. This kernel release reconfigures the timekeeping algorithm to keep time based on a time-elapsed counter. (Bugzilla #463573)
  • It was found that 64-bit threaded applications slowed down drastically in pthread_create() if their stacks exceeded a combined size of ~4GB. This is because glibc used MAP_32BIT to allocate those stacks. As the use of MAP_32BIT is a legacy implementation, this update adds a new mmap flag (MAP_STACK) to the kernel to avoid constraining 64-bit applications. (Bugzilla #459321)
  • The update includes a feature bit that allows the TSC to keep running in deep C-states. This bit, NONSTOP_TSC, acts in conjunction with CONSTANT_TSC: CONSTANT_TSC indicates that the TSC runs at a constant frequency irrespective of P/T-states, and NONSTOP_TSC indicates that the TSC does not stop in deep C-states. (Bugzilla #474091)
  • This update includes a patch to include asm-x86_64 headers in kernel-devel packages built on or for i386, i486, i586 and i686 architectures. (Bugzilla #491775)
  • This update includes a fix to ensure that specifying memmap=X$Y as a boot parameter on i386 architectures yields a new BIOS map. (Bugzilla #464500)
  • This update adds a patch to correct a problem with the Non-Maskable Interrupt (NMI) that appeared in previous kernel releases. The problem appeared to affect various Intel processors and caused the system to report the NMI watchdog was 'stuck'. New parameters in the NMI code correct this issue. (Bugzilla #500892)
  • This release re-introduces PCI Domain support for HP xw9400 and xw9300 systems. (Bugzilla #474891)
  • Functionality has been corrected to export the powernow-k8 module parameters to /sys/modules. This information was previously not exported. (Bugzilla #492010)

7.2. x86_64

  • An optimization error was found in linux-2.6-misc-utrace-update.patch: when running 32-bit processes on a 64-bit machine, systems did not return ENOSYS on missing (out of table range) system calls. This kernel release includes a patch to correct this. (Bugzilla #481682)
  • Some cluster systems were found to boot with an unstable time source. It was determined that this was a result of kernel code not checking for a free performance counter (PERFCTR) when calibrating the TSC (Time Stamp Counter) during the boot process. This resulted, in a small percentage of cases, in the system defaulting to a busy PERFCTR and getting unreliable calibrations.
    A fix was implemented to correct this by ensuring the system checked for a free PERFCTR before defaulting (Bugzilla #467782). This fix, however, cannot satisfy all possible contingencies as it is possible that all PERFCTRs will be busy when required for TSC calibration. Another patch has been included to initiate a kernel panic in the unlikely event (fewer than 1% of cases) that this scenario occurs. (Bugzilla #472523).

7.3. PPC

  • This kernel release includes various patches to update the spufs (Synergistic Processing Units file system) for Cell processors. (Bugzilla #475620)
  • An issue was identified wherein /proc/cpuinfo would list the logical PVR of the Power7 processor architecture as "unknown" when show_cpuinfo() was run. This update adds a patch to have show_cpuinfo() identify Power7 architectures as Power6. (Bugzilla #486649)
  • This update includes several patches that are required to add/improve MSI-X (Message Signaled Interrupts) support on machines using System P processors. (Bugzilla #492580)
  • A patch has been added to this release to enable the functionality of the previously problematic power button on Cell Blades machines. (Bugzilla #475658)

7.4. s390x

Red Hat Enterprise Linux 5.4 introduces a wide range of new features for IBM System z machines, most notably:
  • Utilizing Named Saved Segments (NSS), the z/VM hypervisor makes operating system code in shared real memory pages available to z/VM guest virtual machines. With this update, multiple Red Hat Enterprise Linux guest operating systems on the z/VM can boot from the NSS and be run from a single copy of the Linux kernel in memory. (BZ#474646)
  • Device driver support has been added in this update for the new IBM System z PCI cryptography accelerators, utilizing the same interfaces as prior versions. (BZ#488496)
  • Red Hat Enterprise Linux 5.4 adds support for processor degradation, which allows processor speed to be reduced in some circumstances (for example, system overheating). (BZ#474664) This new feature allows automation software to observe the machine state and act based on defined policies.

    Note

    Processor degradation is supported on z990, z890 and later systems and is observed through SCLP system service event type 4 event qualifier 3. STSI will report the new capacity of the processor in the file: /sys/devices/system/cpu/cpuN/capability.
  • Control Program Identification (CPI) descriptive data is used to identify individual systems on the Hardware Management Console (HMC). With this update, CPI data can now be associated with a Red Hat Enterprise Linux instance. (BZ#475820)
    For more information on CPI, refer to the Device Drivers, Features, and Commands document.
  • Fibre Channel Protocol (FCP) performance data can now be measured on Red Hat Enterprise Linux instances on the IBM System z platform. (BZ#475334) Metrics that are collected and reported on include:
    • Performance relevant data on stack components such as Linux devices, Small Computer System Interface (SCSI) Logical Unit Numbers (LUNs) and Host Bus Adapter (HBA) storage controller information.
    • Per stack component: current values of relevant measurements, such as throughput, utilization, and other applicable measurements.
    • Statistical aggregations (minimum, maximum, averages and histogram) of data associated with I/O requests including size, latency per component and totals.
  • Support has been added to the kernel to issue EMC Symmetrix Control I/O. This update provides the ability to manage EMC Symmetrix storage arrays with Red Hat Enterprise Linux on the IBM System z platform. (BZ#461288)
  • A new feature has been implemented in the kernel to perform an Initial Program Load (IPL) on a Red Hat Enterprise Linux virtual machine immediately following a kernel panic and dump. (BZ#474688)
  • Hardware that supports the configuration topology facility passes the system CPU topology information to the scheduler, allowing it to make load balancing decisions. On machines where I/O interrupts are unevenly distributed, CPUs that are grouped together and get more I/O interrupts than others will tend to have a higher average load, creating performance issues in some cases.
    Previously, CPU topology support was enabled by default. With this update, CPU topology support is disabled by default, and the kernel parameter "topology=on" has been added to allow this feature to be enabled. (BZ#475797)
  • New kernel options can now be added using the IPL command without modifying the content of the CMS parmfile, allowing for temporary overwriting of kernel options that are already provided by the parmfile. The entire boot command line can be replaced with the VM parameter string, bypassing any kernel options from the parmfile. Furthermore, customers can create new Linux Named Saved Systems (NSS) on the CP/CMS command line. (BZ#475530)
  • The qeth driver has been updated with HiperSockets Layer3 support for IPv6. (BZ#475572) For further details on this feature, refer to the "qeth device driver for OSA-Express (QDIO) and HiperSockets" chapter in IBM's "Device Drivers, Features, and Commands" book located at: http://www.ibm.com/developerworks/linux/linux390/october2005_documentation.html
  • Starting with z9, HiperSockets firmware returns the version string in a different format. This change resulted in missing mcl_level information in the qeth status message issued when setting the device online. The updated qeth driver now correctly reads the new HiperSockets version string format, allowing for a standardized output format. (BZ#479881)
  • In Red Hat Enterprise Linux 5.4, the s390utils package has been rebased to version 1.8.1. For a full list of features that this rebase provides, please refer to the Package Updates section of the Technical Notes. (BZ#477189)
  • In the kernel, a sysfs interface has been implemented to associate actions to shutdown triggers. For more details on this feature, refer to the "Shutdown actions" chapter in IBM's "Device Drivers, Features, and Commands" book located at: http://www.ibm.com/developerworks/linux/linux390/development_documentation.html
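As an illustrative sketch of the shutdown-actions interface described above (an IBM System z instance is required; the trigger and action names shown are examples, so confirm them against the Device Drivers, Features, and Commands book):

```shell
# List the available shutdown triggers:
ls /sys/firmware/shutdown_actions/

# Show the action currently associated with a kernel panic:
cat /sys/firmware/shutdown_actions/on_panic

# Example: have a panic trigger a dump followed by a re-IPL:
echo dump_reipl > /sys/firmware/shutdown_actions/on_panic
```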

9. Technology Previews

Technology Preview features are currently not supported under Red Hat Enterprise Linux subscription services, may not be functionally complete, and are generally not suitable for production use. However, these features are included as a customer convenience and to provide the feature with wider exposure.
The following Technology Previews are new or enhanced in Red Hat Enterprise Linux 5.4. For detailed information on the Technology Previews in Red Hat Enterprise Linux 5.4, refer to the Technology Previews section of the 5.4 Technical Notes located at http://www.redhat.com/docs/manuals/enterprise/

A. Revision History

Revision History
Revision 1.0-402    Fri Oct 25 2013    Rüdiger Landmann
Rebuild with Publican 4.0.0
Revision 1.0-57    2012-07-18    Anthony Towns
Rebuild for Publican 3.0
Revision 1.0-0    Wed Sep 02 2009    Ryan Lerch
Initial version of the online version of the Red Hat Enterprise Linux 5.4 Release Notes

Legal Notice

Copyright © 2009 Red Hat, Inc.
This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported License. If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be removed.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.