Red Hat Cluster Suite 4 Update 5 Release Notes

The following topics are covered in this document:

  • New Features

  • Known Issues

  • Resolved Issues

Some updates on Red Hat Cluster Suite 4 Update 5 may not appear in this version of the Release Notes. Please consult the following URL for an updated version of the Release Notes:

http://www.redhat.com/docs/manuals/enterprise/

New Features

Cluster Mirrors

Synchronous mirroring has been added to the Cluster Logical Volume Manager (CLVM). This allows two-leg mirrors to use a single transaction log, which tracks blocks as they are duplicated. A write does not complete until both legs have been updated.

If one leg of the mirror is lost, operation continues on the remaining leg. Note that even a temporary loss of one leg necessitates a full recovery; this recovery occurs in the background.

More than two legs can be configured. However, the loss of any leg will require dropping down to one leg and recreating all others.
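
As a sketch, a two-leg cluster mirror might be created as follows. All device and volume names (/dev/sdb1, mirrorvg, mirrorlv) are hypothetical, and the commands assume CLVM is already running on the cluster:

```shell
# Sketch only: creating a two-leg mirrored logical volume.
# A two-leg mirror uses three devices: two data legs plus one
# disk for the mirror log. All names below are hypothetical.
create_cluster_mirror() {
    pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1
    vgcreate mirrorvg /dev/sdb1 /dev/sdc1 /dev/sdd1
    # -m 1 requests one mirror copy, i.e. two legs sharing one log
    lvcreate -m 1 -L 10G -n mirrorlv mirrorvg
}
```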

Conga

Red Hat Cluster Suite 4 Update 5 includes Conga, a new web-based management interface for installing, configuring and managing one or more clusters. It provides:

  • a secure way to remotely install RPMs and configure a cluster from a single web interface

  • cluster fencing setup

  • failover configuration for applications across the cluster

  • storage provisioning, including physical and logical volumes as well as local and cluster file systems

SNMP/CIM Reporting

CIM and SNMP MIB reporting is now supported. Information reported includes:

  • cluster name and general status

  • cluster nodes and their status

  • quorum status

  • a list of managed services, including status and location
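
With SNMP reporting enabled, this information could be queried from another host. The sketch below is hypothetical: it assumes the cluster's SNMP agent is installed and running, that a `public` read community is configured, and it walks the Red Hat enterprise subtree (.1.3.6.1.4.1.2312):

```shell
# Sketch only: walk the Red Hat enterprise OID subtree on a cluster node.
# Host name and community string are hypothetical placeholders.
query_cluster_snmp() {
    host="${1:-clusternode1}"
    snmpwalk -v 2c -c public "$host" .1.3.6.1.4.1.2312
}
```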

LVM Failover

Logical Volume Manager Failover — also known as Highly Available LVM (HA LVM) — is now possible with rgmanager. This provides a way to use LVM in an active/passive environment.

The most compelling application of HA LVM is mirroring two distinct SAN-connected sites. Using HA LVM across SAN-connected sites allows one site to take over serving content when the other fails.

Note that HA LVM is currently unable to handle a complete loss of SAN connectivity on the active server, even if the standby machine is still SAN-connected. To mitigate such an event, use multipathing.

To set up LVM Failover, perform the following procedure:

  1. Create the logical volume and file system. HA LVM allows only one logical volume per volume group. For example:

    pvcreate /dev/sd[cde]1
    vgcreate <volume group name> /dev/sd[cde]1
    lvcreate -L 10G -n <logical volume name> <volume group name>
    mkfs.ext3 /dev/<volume group name>/<logical volume name>
    
  2. Edit /etc/cluster/cluster.conf to include the newly created logical volume as a resource in one of your services. Alternatively, you can use system-config-cluster or the Conga GUI.

    Below is a sample resource manager section from /etc/cluster/cluster.conf:

    <rm>
      <failoverdomains>
        <failoverdomain name="FD" ordered="1" restricted="0">
          <failoverdomainnode name="neo-01" priority="1"/>
          <failoverdomainnode name="neo-02" priority="2"/>
        </failoverdomain>
      </failoverdomains>
    
      <resources>
        <lvm name="lvm" vg_name="<volume group name>" lv_name="<logical volume name>"/>
        <fs name="FS" device="/dev/<volume group name>/<logical volume name>"
            force_fsck="0" force_unmount="1" fsid="64050" fstype="ext3"
            mountpoint="/mnt" options="" self_fence="0"/>
      </resources>
    
      <service autostart="1" domain="FD" name="serv" recovery="relocate">
        <lvm ref="lvm"/>
        <fs ref="FS"/>
      </service>
    </rm>
    
  3. Edit the volume_list field in /etc/lvm/lvm.conf. Include the name of your root volume group and your machine name as listed in /etc/cluster/cluster.conf, preceded by @. Below is a sample entry from /etc/lvm/lvm.conf:

    volume_list = [ "VolGroup00", "@neo-01" ]
    
  4. Update the initrd on all your cluster machines. To do this, use the following command:

    new-kernel-pkg --mkinitrd --initrdfile=/boot/initrd-halvm-`uname -r`.img --install `uname -r` --make-default

  5. Reboot all of your machines to ensure the correct initrd is in use.
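
After the reboot, the service can be sanity-checked with clustat and relocated manually with clusvcadm. The sketch below assumes the sample names used above (service serv, standby node neo-02):

```shell
# Sketch only: verify the HA LVM service and exercise a manual failover.
verify_ha_lvm() {
    clustat                        # "serv" should be listed as started
    clusvcadm -r serv -m neo-02    # relocate the service to the standby node
    clustat                        # "serv" should now be running on neo-02
}
```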

Fast statfs

GFS for Red Hat Enterprise Linux 4 Update 5 now includes a fast statfs option, which significantly improves the execution time of the statfs call (used by the df command). This option uses cached information to calculate used space, which is sufficiently accurate for the majority of users.

To use the Fast statfs option, start by issuing the following command on each node:

gfs_tool settune <mount point> statfs_fast 1

This command must be run after each mount. Note that the Fast statfs setting does not persist across an unmount, so it is recommended that you use a wrapper script that performs the mount and then sets the flag. Setting the flag can also be integrated into the gfs init script in /etc/init.d/gfs.

Fast statfs can also be turned off for mounted volumes. To do this, issue the following command on all nodes:

gfs_tool settune <mount point> statfs_fast 0
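
The wrapper-script approach mentioned above might look like the following sketch; the device and mount-point arguments are hypothetical:

```shell
# Sketch only: mount a GFS volume, then enable fast statfs on it.
# The flag is not persistent, so it must be set after every mount.
mount_gfs_fast() {
    dev="$1"    # e.g. /dev/myvg/gfslv (hypothetical)
    mnt="$2"    # e.g. /mnt/gfs (hypothetical)
    mount -t gfs "$dev" "$mnt" || return 1
    gfs_tool settune "$mnt" statfs_fast 1
}
```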

Additional Support

Red Hat Cluster Suite 4 Update 5 now features the following support enhancements:

  • The fence_drac agent now supports DRAC 4/I, and its telnet timeout has been increased. In addition, DRAC 4/P is now supported and has been tested with DRAC 4/P firmware version 1.40.

  • The Baytech RPC-5NC Remote Power Control Switch is now supported.

  • SCSI reservations are now supported. Note that the SCSI reservation feature does not work with dm-multipath.

  • DRAC5 is now supported.

  • PowerPC is now supported.

Known Issues

  • Only one HA LVM service is allowed per LVM volume group. With multiple services, multiple machines could attempt to update the volume group metadata at the same time, which could lead to corruption.

    The latest version of lvm.sh checks for this condition and fails if it is not met. Do not create another logical volume on a volume group with an HA LVM service on it; doing so may cause volume group metadata corruption.

  • The lvm.conf file must set locking_type to 1. To do this, ensure the file contains the following line:

    locking_type = 1

    In addition, the lvm.conf file must include a volume_list entry containing the machine name tag for HA LVM service activation:

    volume_list = [ "VolGroup00", "@<machinename>" ]
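
Both requirements can be verified with a small script. The function below is a hypothetical sketch that greps an lvm.conf for the two settings; it takes an alternate file path for testing:

```shell
# Sketch only: confirm lvm.conf sets locking_type = 1 and defines
# a volume_list entry. Returns nonzero if either setting is missing.
check_lvm_conf() {
    conf="${1:-/etc/lvm/lvm.conf}"
    grep -q '^[[:space:]]*locking_type[[:space:]]*=[[:space:]]*1' "$conf" &&
        grep -q '^[[:space:]]*volume_list[[:space:]]*=' "$conf"
}
```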

Resolved Issues

The following errata contain complete listings of issues resolved in Red Hat Cluster Suite 4 Update 5: