- Policies for Guest Clusters
- Policies for VMs as Highly Available Resources
Red Hat Enterprise Linux (RHEL) High Availability and Resilient Storage clusters may be deployed within virtual machines running on various virtualization platforms, and may also manage and maintain high availability of certain types of virtual machines running as resources. This document describes the virtualization platforms that are supported by Red Hat for use in these contexts.
- Red Hat Enterprise Linux (RHEL) 5, 6, or 7 with the High Availability Add-On
- A cluster deployment in which the targeted use case is to either:
- Manage one or more virtual machines as a highly available resource, or
- Operate one or more virtual machines as a node in the cluster.
High Availability and Virtualization Use Cases
There are two use cases for deployment of virtualization in conjunction with the RHEL High Availability or Resilient Storage Add-On: VMs as highly available resources and guest clusters.
For detailed information, including general recommendations and best practices, see the official product documentation regarding Virtualization and High Availability.
Policies for Guest Clusters
RHEL Xen
The following Xen virtual machine configurations are supported by Red Hat for use as High Availability cluster nodes:
|Hypervisor OS|VM OS|Supported Fencing Mechanisms|
|RHEL 5 Update 3 or later|RHEL 5 Update 3 or later|fence_xvm|
- Physical Host Mixing is not supported in clusters with member nodes that run on Xen virtualization platforms.
RHEL KVM with libvirt
The following KVM/libvirt virtual machine configurations are supported by Red Hat for use as High Availability cluster nodes:
|Hypervisor OS|VM OS|Supported Fencing Mechanisms|
|RHEL 5, 6, or 7|RHEL 5, 6, or 7|fence_xvm|
- Physical Host Mixing is supported in clusters with member nodes that run on the supported KVM/libvirt virtualization platforms listed above.
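As a minimal sketch of how such a guest node is typically fenced, assuming a RHEL 7 guest cluster managed with pcs and a hypervisor already running a configured fence_virtd daemon (the VM and node names below are hypothetical):

```
# On each hypervisor, fence_virtd must be installed, configured
# (for example with the libvirt backend and a multicast listener),
# and running, with the shared key distributed to the guests.
# On the guest cluster, create a fence device for the VM node:
pcs stonith create fence-rhel7-node1 fence_xvm \
    port="rhel7-node1" \
    pcmk_host_list="rhel7-node1.example.com"

# From a guest, confirm the host's fence_virtd answers:
fence_xvm -o list
```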
Red Hat Enterprise Virtualization (RHEV)
The following RHEV virtual machine configurations are supported by Red Hat for use as High Availability cluster nodes:
|RHEV-M Version|VM OS|Supported Fencing Mechanisms|
|3.1 - 3.6|RHEL 6 or 7|fence_rhevm|
- Physical Host Mixing is not supported in clusters with member nodes that run on RHEV virtualization platforms.
OpenStack
Red Hat does not support the RHEL High Availability or Resilient Storage Add-Ons running within OpenStack VMs. This applies both to Red Hat OpenStack Platform (RH-OSP) and to other OpenStack deployments.
VMware vSphere / ESXi
The following VMware virtual machine configurations are supported by Red Hat for use as High Availability cluster nodes:
|VMware Product|VM OS|Shared VM Cluster Storage|Supported Fencing Mechanisms|
|vSphere / ESXi|RHEL 5, 6, or 7| |fence_vmware_soap|
- Physical Host Mixing is supported in clusters with member nodes that run on the supported VMware platforms listed above.
- Performing an upgrade of the VMware software (vSphere/vCenter/ESXi) while a managed cluster is running within guests is not supported by Red Hat. Problems arising in such scenarios must be reproduced outside of the upgrade context in order to receive support from Red Hat.
IBM z Systems
Please contact Red Hat Support or your account representatives for information on running RHEL High Availability on IBM z Systems.
Microsoft Hyper-V
Red Hat does not support the RHEL High Availability or Resilient Storage Add-Ons on the Hyper-V platform.
Amazon EC2 or Other Hosted Cloud Providers
Red Hat does not support the RHEL High Availability or Resilient Storage Add-Ons on these hosted cloud platforms.
Policies for VMs as Highly Available Resources
The following VM configurations are supported as Highly Available resources managed by a RHEL High Availability cluster. Other virtual machine types or configurations may function, but Red Hat cannot guarantee they will operate without issues and may be unable to assist if issues arise.
|Hypervisor Type|Guest OS Type|
|RHEL 5 Update 3 or later Xen hosts|Any guest OS fully supported by Xen|
|RHEL 5 Update 5 or later, RHEL 6, or RHEL 7 KVM/libvirt hosts|Any guest OS fully supported by libvirt/KVM|
- In these configurations, the VM resources would be stored on shared block devices such as clustered LVM volumes, or as disk images on shared file systems such as NFS, GFS, or GFS2.
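As a minimal sketch only, assuming a RHEL 7 pacemaker cluster managed with pcs: a KVM guest can be managed as a highly available resource through the VirtualDomain resource agent. The resource name, domain XML path, and options below are hypothetical.

```
# The domain XML must reside on storage visible to every cluster
# node (for example a GFS2 mount or NFS export), per the note above.
pcs resource create my-ha-vm VirtualDomain \
    hypervisor="qemu:///system" \
    config="/shared/qemu/my-ha-vm.xml" \
    migration_transport=ssh \
    meta allow-migrate=true
```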
Physical Host Mixing
Physical Host Mixing is a High Availability cluster configuration in which some of the nodes in the cluster membership run in virtual machines while others run on bare-metal hardware. In general, it is advisable to keep the cluster configuration as homogeneous as possible to prevent issues when migrating services or dealing with failure conditions. However, there are limited cases for which physical host mixing is acceptable. The following is an example of such a use case:
- A bare-metal cluster with an even number of nodes (2, 4, 6) plus a single cluster node running as a virtual machine. This node is part of the membership and provides a tie-breaker function: if half of the physical nodes fail, its extra vote ensures that the surviving half retains quorum. This configuration generally implies that the 'tie-breaker node' runs no services, since its function is limited to quorum support (one way to arrange this is sketched below). The benefit of running the tie-breaker in a virtual machine is that several such nodes can coexist on the same physical host. For example, four clusters of 4 physical nodes each (16 physical hosts total) could each add a tie-breaker virtual machine running on a 17th physical host. This approach can also replace the need for a quorum disk in 2- or 4-node clusters.
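One way to keep such a tie-breaker from hosting services, assuming a pacemaker-based cluster managed with pcs (the node name is hypothetical), is to leave it in standby mode; a standby node still votes toward quorum but is not eligible to run resources:

```
# The tie-breaker VM joins the membership and votes toward quorum,
# but standby mode prevents any resources from being placed on it.
pcs cluster standby tiebreaker-vm.example.com

# The node should now report as Standby in the cluster status
# while still counting as a quorum vote.
pcs status
```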
Note: Physical host mixing with a single virtualization environment is allowed for certain hypervisor technologies, as noted in the sections above. However, mixing different hypervisor types is not allowed (see below).
Shared VM Cluster Storage
This category of storage, referenced in the sections above, refers to the types of block devices that can be exposed to the members of the VM cluster and used for shared highly available applications. These shared block devices can be used for clustered LVM with clvmd, HA-LVM, and/or GFS/GFS2, where supported by the guest operating system.
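For example, assuming clvmd is running on every guest node and a shared block device is visible to all of them (device, volume, and cluster names below are hypothetical), the device could be initialized for clustered LVM and GFS2 as follows:

```
# Run once from any node; /dev/sdb is the shared block device
# presented to every VM in the cluster, and clvmd (LVM locking
# type 3) must already be active cluster-wide.
pvcreate /dev/sdb
vgcreate --clustered y cluster_vg /dev/sdb
lvcreate -L 50G -n gfs2_lv cluster_vg

# Format for GFS2 with one journal per node (a 2-node cluster here);
# "mycluster" must match the cluster name in your cluster configuration.
mkfs.gfs2 -p lock_dlm -t mycluster:gfs2_fs -j 2 /dev/cluster_vg/gfs2_lv
```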
fence_scsi and iSCSI
fence_scsi with iSCSI storage is limited to iSCSI servers that support SCSI-3 Persistent Reservations with the preempt-and-abort command. Not all iSCSI servers support this functionality; check with your storage vendor to ensure that your server is compliant with SCSI-3 Persistent Reservation support. Note that the scsi-target-utils (tgtd) iSCSI server shipped with RHEL 6 does not support SCSI-3 Persistent Reservations, so it is not suitable for use with fence_scsi. The targetd iSCSI server shipped in RHEL 7 does support SCSI-3 Persistent Reservations and can be used with fence_scsi.
fence_scsi is not compatible with virtual SCSI devices presented to virtual machines on any of the platforms listed above. Xen and KVM virtualization platforms emulate hardware SCSI controllers in a way that does not support SCSI SPC-3 Persistent Reservations.
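One quick way to probe a target's reservation support, assuming the sg3_utils package is installed (the device path below is hypothetical), is sg_persist, the same tool fence_scsi relies on:

```
# Report the device's persistent reservation capabilities; a target
# suitable for fence_scsi reports its supported reservation types
# rather than failing the command.
sg_persist --in --report-capabilities /dev/sdb

# Read any current registrations as a further sanity check:
sg_persist --in --read-keys /dev/sdb
```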
vSphere 5.0 API Issues
Due to an incomplete WSDL schema provided in the initial release of vSphere 5.0, the fence_vmware_soap utility does not work on a default install. To use SOAP fencing, apply the VMware workaround described here. VMware intends to correct this in an update to 5.0; when that update is available in a service pack, this knowledge base will be updated to reflect it.
fence_vmware_soap vs fence_vmware
In RHEL 5 Update 7 and later, RHEL 6 Update 2 and later, and RHEL 7, the proper fence agent to use for managing VM power states via the VMware vCenter/ESXi API is fence_vmware_soap. Prior to these releases, an older agent called fence_vmware was available, but upon the release of fence_vmware_soap, support for fence_vmware was discontinued. If one of these older RHEL releases must be used where fence_vmware_soap is not available, Red Hat will provide limited support for configuring and managing fence_vmware. However, because support has transitioned to fence_vmware_soap in later releases, no bug fixes or enhancements will be made for fence_vmware. It is strongly recommended that a release with fence_vmware_soap be used when possible.
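As a hedged sketch (the vCenter address, credentials, and VM names below are hypothetical), fence_vmware_soap can be exercised from the command line before being registered as a stonith device:

```
# Verify connectivity and credentials by listing the VMs visible
# through the vCenter/ESXi SOAP API:
fence_vmware_soap --ip vcenter.example.com --username fenceuser \
    --password secret --ssl --action list

# Then register the agent as a stonith device (RHEL 7 pcs syntax),
# mapping cluster node names to VM names on the vCenter side:
pcs stonith create fence-vmware fence_vmware_soap \
    ipaddr=vcenter.example.com login=fenceuser passwd=secret ssl=1 \
    pcmk_host_map="node1.example.com:rhel7-node1;node2.example.com:rhel7-node2"
```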
Mixing Virtualization Technologies
Mixing different hypervisor technologies among cluster nodes is not supported. All nodes in a given cluster must reside on the same type of hypervisor. For example, a single cluster with nodes on both VMware vSphere and RHEL KVM with libvirt would not be supported.
fence_xvm with Multiple Virtualization Hosts
Tracking of virtual machines (using the libvirt-qmf plugin or the checkpoint plugin) is currently not tested or supported; hence, virtual machines acting as cluster members must be statically assigned to given hosts in order to be supported. These plugins may work, but Red Hat cannot guarantee they will operate without issues, and Red Hat may require that any problems arising while they are in use be reproduced without these components before providing support.
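With static assignment, a common pattern, sketched here with hypothetical host and guest names and multicast addresses, is to define one fence_xvm device per hypervisor, each restricted to the guests pinned to that host:

```
# Each hypervisor's fence_virtd listens on its own multicast address;
# each device can then only fence the guests statically pinned to
# that host, matching the static-assignment requirement above.
pcs stonith create fence-host1-vms fence_xvm \
    multicast_address=225.0.0.12 \
    pcmk_host_map="node1.example.com:vm-node1"

pcs stonith create fence-host2-vms fence_xvm \
    multicast_address=225.0.1.12 \
    pcmk_host_map="node2.example.com:vm-node2"
```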
Redundancy in Virtualization Hosts
Placing all virtual machines of a single cluster on a single host is useful for deploying proof-of-concept clusters, but it is not supported for production use, as a failure of that host would take down the entire cluster.
Cluster Node Live Migration
Live migration of VMs that are members of a RHEL cluster is not supported.