Chapter 4. KVM Live Migration
- Load balancing - guest virtual machines can be moved to host physical machines with lower usage when their host physical machine becomes overloaded, or another host physical machine is under-utilized.
- Hardware independence - when we need to upgrade, add, or remove hardware devices on the host physical machine, we can safely relocate guest virtual machines to other host physical machines. This means that guest virtual machines do not experience any downtime for hardware improvements.
- Energy saving - guest virtual machines can be redistributed to other host physical machines and can thus be powered off to save energy and cut costs in low usage periods.
- Geographic migration - guest virtual machines can be moved to another location for lower latency or in serious circumstances.
4.1. Live Migration Requirements
- A guest virtual machine installed on shared storage using one of the following protocols:
- Fibre Channel-based LUNs
- SCSI RDMA protocol (SRP): the block export protocol used in InfiniBand and 10GbE iWARP adapters
- The migration platforms and versions should be checked against Table 4.1, “Live Migration Compatibility”. It should also be noted that Red Hat Enterprise Linux 6 supports live migration of guest virtual machines using raw and qcow2 images on shared storage.
- Both systems must have the appropriate TCP/IP ports open. In cases where a firewall is used, refer to the Red Hat Enterprise Linux Virtualization Security Guide which can be found at https://access.redhat.com/site/documentation/ for detailed port information.
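As a sketch of the firewall step above (the authoritative port list is in the Virtualization Security Guide): by default, libvirtd accepts remote TCP connections on port 16509, and plain KVM migration traffic uses the ephemeral range 49152-49215. The iptables rules below assume those stock defaults; verify them against the Security Guide before relying on this.

```shell
# Hedged example: open the default libvirt/KVM migration ports.
# Port numbers assume stock RHEL 6 defaults; confirm them in the
# Red Hat Enterprise Linux Virtualization Security Guide.

# libvirtd remote (non-TLS TCP) connections
iptables -I INPUT -p tcp --dport 16509 -j ACCEPT
# KVM live migration data channel (default ephemeral range)
iptables -I INPUT -p tcp --dport 49152:49215 -j ACCEPT
# Persist the rules across reboots
service iptables save
```

Run the same rules on both the source and destination host physical machines, since either may act as the migration target.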
- A separate system exporting the shared storage medium. Storage should not reside on either of the two host physical machines being used for migration.
- Shared storage must mount at the same location on the source and destination systems. The mounted directory names must be identical. Although it is possible to keep the images on different paths, it is not recommended. Note that if you intend to use virt-manager to perform the migration, the path names must be identical. If, however, you intend to use virsh, different network configurations and mount directories can be used with the help of the `--xml` option or pre-hooks when performing migrations. Even without shared storage, migration can still succeed with the `--copy-storage-all` option (deprecated). For more information on pre-hooks, refer to libvirt.org, and for more information on the XML option, refer to Chapter 20, Manipulating the Domain XML.
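A live migration with virsh as described above might look like the following sketch; the guest name `guest1` and host name `host2.example.com` are placeholders, not values from this guide:

```shell
# Hedged sketch: live-migrate the running guest "guest1" from the
# current host to host2.example.com over an SSH tunnel.
# "guest1" and "host2.example.com" are placeholder names.
virsh migrate --live guest1 qemu+ssh://host2.example.com/system

# If the destination uses different mount directories, supply an
# edited domain XML describing the disk paths as seen on the
# destination host:
virsh migrate --live --xml /tmp/guest1-dest.xml guest1 \
    qemu+ssh://host2.example.com/system
```

Both commands require a reachable libvirtd on the destination; the second form is the `--xml` option mentioned above.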
- When migration is attempted on an existing guest virtual machine in a public bridge+tap network, the source and destination host physical machines must be located in the same network. Otherwise, the guest virtual machine network will not operate after migration.
- In Red Hat Enterprise Linux 5 and 6, the default cache mode of KVM guest virtual machines is set to `none`, which prevents inconsistent disk states. Setting the cache option to `none` (using `virsh attach-disk --cache none`, for example) causes all of the guest virtual machine's files to be opened with the `O_DIRECT` flag (when calling the `open` syscall), thus bypassing the host physical machine's cache and providing caching only on the guest virtual machine. Setting the cache mode to `none` prevents any potential inconsistency problems and, when used, makes it possible to live-migrate virtual machines. For information on setting cache to `none`, refer to Section 13.3, “Adding Storage Devices to Guests”.
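The cache setting described above appears as the `cache` attribute of the `driver` element in the guest's domain XML. A minimal sketch is shown below; the image path and target device are illustrative, not taken from this guide:

```xml
<disk type='file' device='disk'>
  <!-- cache='none' opens the image with O_DIRECT, bypassing the
       host page cache, as described above -->
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/libvirt/images/guest1.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

Edit this with `virsh edit` on the guest, or pass the equivalent `--cache none` option when attaching the disk.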
- Make sure that the `libvirtd` service is enabled (`# chkconfig libvirtd on`) and running (`# service libvirtd start`). It is also important to note that the ability to migrate effectively is dependent on the parameter settings in the `/etc/libvirt/libvirtd.conf` configuration file.
Procedure 4.1. Configuring libvirtd.conf
- Opening `libvirtd.conf` requires running the following command as root:
# vim /etc/libvirt/libvirtd.conf
- Change the parameters as needed and save the file.
- Restart the `libvirtd` service:
# service libvirtd restart
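As a sketch of the procedure above, the excerpt below shows the kind of `libvirtd.conf` parameters that commonly affect migration capacity; the values shown are illustrative defaults, not recommendations from this guide:

```
# /etc/libvirt/libvirtd.conf (excerpt)

# Cap on concurrent client connections and worker threads; raise
# these if many simultaneous migrations are expected.
max_clients = 20
max_workers = 20

# Uncomment to accept remote non-TLS TCP connections, e.g. for
# migration URIs of the form qemu+tcp://host/system.
#listen_tcp = 1
#tcp_port = "16509"
```

After changing any of these parameters, restart `libvirtd` as shown in the procedure above for the new values to take effect.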