How to configure Red Hat Cluster with fencing of two KVM guests running on two different KVM hosts?

Environment

  • Red Hat Enterprise Linux (RHEL) 6 with High Availability or Resilient Storage Add-Ons
  • KVM guests configured as cluster nodes.

Issue

  • There are two physical servers, each running a KVM guest. Both the host and guest operating systems are RHEL 6. A cluster should be configured that contains one virtual machine from each physical host, and fencing should be set up so that the guests can fence each other.
  • The following error occurs while testing fence_xvm in a Red Hat cluster for two KVM guests running on two different KVM hosts:
Timed out waiting for response
Operation failed

Resolution

Limitations

KVM guest migration is not supported for clustered guests. Tracking of virtual machines is currently not tested or supported; therefore, virtual machines acting as cluster members must be statically assigned to given hosts.

Cluster Setup

Cluster setup is described only briefly here, as the main purpose of this document is to describe the configuration of cross-KVM-host fencing. Please refer to the following guide for configuring the cluster and for a detailed explanation of the steps below: Configuring and Managing the High Availability Add-On

The steps to configure the cluster on KVM guests are as follows:

1. Install RHEL 6 on 2 physical systems and create one KVM VM on each system.
2. Configure a KVM bridge on both systems so that the nodes can communicate on the public-facing network.
3. Make sure that the network between the hosts allows them to receive multicast packets from each other.
4. Add an entry for each node in each system's "/etc/hosts" so the nodes can resolve each other on the network (see the example after this list).
5. Install the luci package on the system from which you will manage the cluster through its GUI.
6. Install the cluster packages on cluster nodes (on both KVM VMs).
7. Create a cluster.
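
For example, the "/etc/hosts" entries for step 4 might look like the following on both guests. The IP addresses and fully qualified names here are placeholders; substitute the ones used in your environment:

# example /etc/hosts entries on guest1 and guest2
192.168.122.11   guest1.example.com   guest1
192.168.122.12   guest2.example.com   guest2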

Fencing Setup

Once the cluster is formed, it is important to set up fencing. [NOTE] A High Availability cluster without fencing is not a configuration that Red Hat supports. We assume here that the physical hosts are named "host1" and "host2" and the virtual guests are named "guest1" and "guest2".

1. Install the fencing packages on both "host1" and "host2" using the yum command:

# yum install fence-virt fence-virtd fence-virtd-libvirt fence-virtd-multicast fence-virtd-serial

2. Once these are installed, check that the directory "/etc/cluster" exists on all hosts and guests. If it does not, create it:

# mkdir -p /etc/cluster

3. Create a "fence_xvm.key" file on the hosts and the guests:

3.1. On "host1" create a "fence_xvm.key" file to use with "guest2" and copy this file to "/etc/cluster" on "guest2":

# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4k count=1
# scp /etc/cluster/fence_xvm.key root@guest2:/etc/cluster/fence_xvm.key

3.2. On "host2" create a "fence_xvm.key" file to use with "guest1" and copy this file to "/etc/cluster" on "guest1":

# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4k count=1
# scp /etc/cluster/fence_xvm.key root@guest1:/etc/cluster/fence_xvm.key

3.3. It is important to have a different key per physical host. To distinguish the keys, make a copy of the file on each guest with a unique name identifying the host it belongs to:

[on guest1]# cp /etc/cluster/fence_xvm.key /etc/cluster/fence_xvm_host2.key
[on guest2]# cp /etc/cluster/fence_xvm.key /etc/cluster/fence_xvm_host1.key
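
To confirm that each guest holds the same key as the host that will fence it, one quick check is to compare checksums; the host and guest names below follow the example setup above, and each pair of sums should match:

[on host2]# md5sum /etc/cluster/fence_xvm.key
[on guest1]# md5sum /etc/cluster/fence_xvm_host2.key

[on host1]# md5sum /etc/cluster/fence_xvm.key
[on guest2]# md5sum /etc/cluster/fence_xvm_host1.key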

4. Next you will need to configure the "fence_virtd" daemon. To do that run "fence_virtd -c" on "host1":

# fence_virtd -c

At the prompts use the following values:

  • accept default search path
  • accept multicast as default
  • accept default multicast address
  • accept default multicast port
  • set interface to br0 (replace the bridge name with the one configured on your hosts)
  • accept default fence_xvm.key path
  • set backend module to libvirt
  • accept default URI
  • enter "y" to write config

This creates the "/etc/fence_virt.conf" file with the following content:

fence_virtd {
    module_path = "/usr/lib64/fence-virt";
    listener = "multicast";
    backend = "libvirt";
}

listeners {
    multicast {
        key_file = "/etc/cluster/fence_xvm.key";
        interface = "br0";
        port = "1229";
        address = "225.0.0.12";
        family = "ipv4";
    }
}

backends {
    libvirt {
        uri = "qemu:///system";
    }
}

5. Copy this file to "host2", then change the "address" value on both hosts so that each host uses a distinct multicast address:

    (on host1)
    address = "225.0.1.12";

    (on host2)
    address = "225.0.2.12";

Start the "fence_virtd" daemon on both hosts (i.e. "host1" and "host2"):

# service fence_virtd start

6. On both hosts, configure "fence_virtd" to start on boot:

# chkconfig fence_virtd on
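
To verify that the daemon is actually running and listening for multicast fencing requests, one simple check (UDP port 1229 matches the listener configuration shown above) is:

# service fence_virtd status
# netstat -ulnp | grep 1229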

7. On your virtual machine cluster nodes, install the "fence-virt" package:

# yum install fence-virt

Now try to get the status of each node from each of the nodes. Any cluster node should be able to get the status of any other cluster node. Use the following commands to check:

[on guest1]# fence_xvm -a 225.0.2.12 -k /etc/cluster/fence_xvm_host2.key -H guest2 -o status
[on guest2]# fence_xvm -a 225.0.1.12 -k /etc/cluster/fence_xvm_host1.key -H guest1 -o status
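
If the status operation does not print anything obvious, check the exit code with "echo $?"; 0 indicates the request succeeded. As an additional check, the "list" operation shows the domains visible through each host's listener, assuming the same multicast addresses and key paths as above:

[on guest1]# fence_xvm -a 225.0.2.12 -k /etc/cluster/fence_xvm_host2.key -o list
[on guest2]# fence_xvm -a 225.0.1.12 -k /etc/cluster/fence_xvm_host1.key -o list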

To test fencing "guest1" from "guest2", run the following command on "guest2":

[on guest2]# fence_xvm -o reboot -a 225.0.1.12 -k /etc/cluster/fence_xvm_host1.key -H guest1 

Note that the fence_xvm command is used on the virtual guests. There are some important things to consider before going forward. The "domain" name is actually the name of the virtual machine in KVM, not the hostname or DNS domain name of the cluster node. While they might happen to be the same, the distinction is important: if your machine is named differently in KVM than its hostname and you attempt to use the hostname in fencing, fencing will not work.

The "virsh list" command can be used to list the virtual machine names:

[on host1]# virsh list 
 Id Name                 State 
---------------------------------- 
 1  guest1               running 

8. Assuming that the node reboots immediately, the next step is to configure the fencing in the "cluster.conf" file:

  • Within "luci", add a shared fence device (select a device of type "fence_virt multicast mode" in RHEL 6 or "Virtual Machine Fencing" in RHEL 5). It does not really matter what you call it, as its name is just a label.
  • Within each cluster node, add an instance of the previously created fence device. It will ask for a domain for the fence device; use the KVM machine name as noted earlier.
  • Save the changes to the node
  • Repeat on each node

To test, stop networking on one node, as shown below. It should be automatically fenced (rebooted) by the other node.
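
For example, assuming the guest names used throughout this document, one way to run this test is to drop the network on one guest from its console and watch the fencing activity in the system log of the other:

[on guest1]# service network stop
[on guest2]# tail -f /var/log/messages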

9. You can write "cluster.conf" manually or double-check what "luci" has configured for you. A simple configuration that is known to work looks as follows:

[root@node2 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="1" name="kvm_cluster">
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1">
      <fence>
        <method name="1">
          <device domain="node1" name="virtfence1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2.example.com" nodeid="2">
      <fence>
        <method name="1">
          <device domain="node2" name="virtfence2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
  <fencedevices>
    <fencedevice agent="fence_xvm" name="virtfence1" key_file="/etc/cluster/fence_xvm_host1.key" multicast_address="225.0.1.12"/>
    <fencedevice agent="fence_xvm" name="virtfence2" key_file="/etc/cluster/fence_xvm_host2.key" multicast_address="225.0.2.12"/>
  </fencedevices>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>
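
If you edit "cluster.conf" by hand, validate the file and then propagate it to the other node; remember to increment "config_version" whenever you change the file. On RHEL 6 this is typically done with:

# ccs_config_validate
# cman_tool version -r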

10. After configuring "cluster.conf", try to fence a node through the cluster:

[on guest2]# fence_node guest1
fence guest1 success

If the fencing is successful, the KVM-guests cluster with fencing is complete.
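
Once the fenced guest has booted and rejoined the cluster, you can confirm membership from either node and review the fencing events in the system log (clustat is provided by the rgmanager package):

[on guest2]# clustat
[on guest2]# grep -i fence /var/log/messages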
