4.4. Live KVM Migration with virsh
A guest virtual machine can be migrated to another host physical machine with the virsh command. The migrate command accepts parameters in the following format:
# virsh migrate --live GuestName DestinationURL

The --live option may be eliminated when live migration is not desired. Additional options are listed in Section 4.4.2, “Additional Options for the virsh migrate Command”.
The GuestName parameter represents the name of the guest virtual machine which you want to migrate.
The DestinationURL parameter is the connection URL of the destination host physical machine. The destination system must run the same version of Red Hat Enterprise Linux, be using the same hypervisor, and have libvirt running.
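As a quick sanity check of these requirements, you can compare hypervisor and libvirt versions on both machines. This is a sketch only; it assumes root SSH access to the destination and uses the example host names from later in this section:

[root@host1 ~]# virsh version
[root@host1 ~]# ssh root@host2.example.com virsh version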
Note
The DestinationURL parameter has different semantics for normal migration and peer-to-peer migration:
- normal migration: the DestinationURL is the URL of the target host physical machine as seen from the source guest virtual machine.
- peer-to-peer migration: the DestinationURL is the URL of the target host physical machine as seen from the source host physical machine.
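As an illustration using the example host name from this section, the two modes might be invoked as follows; --p2p is the virsh migrate option that selects peer-to-peer mode:

# Normal migration:
virsh migrate --live GuestName qemu+ssh://host2.example.com/system

# Peer-to-peer migration:
virsh migrate --live --p2p GuestName qemu+ssh://host2.example.com/system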
Important
An entry for the destination host physical machine in the /etc/hosts file on the source server is required for migration to succeed. Enter the IP address and host name for the destination host physical machine in this file as shown in the following example, substituting your destination host physical machine's IP address and host name:
10.0.0.20 host2.example.com
This example migrates a virtual machine named guest1-rhel6-64 from host1.example.com to host2.example.com. Change the host physical machine names for your environment.
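To confirm that the new entry resolves on the source host before migrating, a quick check using this example's names is:

[root@host1 ~]# getent hosts host2.example.com
10.0.0.20       host2.example.com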
Verify the guest virtual machine is running
From the source system, host1.example.com, verify guest1-rhel6-64 is running:

[root@host1 ~]# virsh list
 Id Name                 State
----------------------------------
 10 guest1-rhel6-64      running
Migrate the guest virtual machine
Execute the following command to live migrate the guest virtual machine to the destination, host2.example.com. Append /system to the end of the destination URL to tell libvirt that you need full access.

# virsh migrate --live guest1-rhel6-64 qemu+ssh://host2.example.com/system

Once the command is entered you will be prompted for the root password of the destination system.

Wait
The migration may take some time depending on load and the size of the guest virtual machine. virsh only reports errors. The guest virtual machine continues to run on the source host physical machine until fully migrated.
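While waiting, one way to observe progress is to query the active migration job from a second shell on the source host. The virsh domjobinfo command reports elapsed time and how much guest memory remains to be transferred:

[root@host1 ~]# virsh domjobinfo guest1-rhel6-64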
Note

During the migration, the completion percentage indicator number is likely to decrease multiple times before the process finishes. This is caused by a recalculation of the overall progress, as source memory pages that are changed after the migration starts need to be copied again. Therefore, this behavior is expected and does not indicate any problems with the migration.

Verify the guest virtual machine has arrived at the destination host
From the destination system, host2.example.com, verify guest1-rhel6-64 is running:

[root@host2 ~]# virsh list
 Id Name                 State
----------------------------------
 10 guest1-rhel6-64      running
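Optionally, confirm that the guest virtual machine is no longer running on the source host; after a completed live migration it should no longer appear in the source's list of running domains:

[root@host1 ~]# virsh list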
Note
A non-running guest virtual machine cannot be migrated with the virsh migrate command. To migrate a non-running guest virtual machine, the following script should be used:
virsh dumpxml Guest1 > Guest1.xml
virsh -c qemu+ssh://<target-system-FQDN> define Guest1.xml
virsh undefine Guest1
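The same steps can be collected into a small script. This is a sketch only: the GUEST and TARGET variables are hypothetical placeholders, and /system is appended to the connection URL for full access, as described earlier in this section:

#!/bin/sh
# Offline migration sketch: copy the guest definition to the
# target host, then remove it from the source. GUEST and TARGET
# are placeholders; substitute your own values.
GUEST=Guest1
TARGET=host2.example.com
virsh dumpxml "$GUEST" > "$GUEST.xml"
virsh -c "qemu+ssh://$TARGET/system" define "$GUEST.xml"
virsh undefine "$GUEST"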
4.4.1. Additional Tips for Migration with virsh

Multiple concurrent live migrations, each running in a separate command shell, are possible, but each migration consumes client connections and worker threads from the libvirtd service on both hosts. Before attempting concurrent migrations, adjust the libvirtd processing controls:
- Open the libvirtd.conf file as described in Procedure 4.1, “Configuring libvirtd.conf”.
- Look for the Processing controls section.
#################################################################
#
# Processing controls
#

# The maximum number of concurrent client connections to allow
# over all sockets combined.
#max_clients = 20

# The minimum limit sets the number of workers to start up
# initially. If the number of active clients exceeds this,
# then more threads are spawned, upto max_workers limit.
# Typically you'd want max_workers to equal maximum number
# of clients allowed
#min_workers = 5
#max_workers = 20

# The number of priority workers. If all workers from above
# pool will stuck, some calls marked as high priority
# (notably domainDestroy) can be executed in this pool.
#prio_workers = 5

# Total global limit on concurrent RPC calls. Should be
# at least as large as max_workers. Beyond this, RPC requests
# will be read into memory and queued. This directly impact
# memory usage, currently each request requires 256 KB of
# memory. So by default upto 5 MB of memory is used
#
# XXX this isn't actually enforced yet, only the per-client
# limit is used so far
#max_requests = 20

# Limit on concurrent requests from a single client
# connection. To avoid one client monopolizing the server
# this should be a small fraction of the global max_requests
# and max_workers parameter
#max_client_requests = 5

#################################################################
- Change the max_clients and max_workers parameters settings. It is recommended that the number be the same in both parameters. Each migration uses 2 client connections (one per side) against max_clients, and against max_workers it uses 1 worker on the source and 0 workers on the destination during the perform phase, then 1 worker on the destination during the finish phase.

Important
The max_clients and max_workers parameters settings are affected by all guest virtual machine connections to the libvirtd service. This means that any user that is using the same guest virtual machine and is performing a migration at the same time is also subject to the limits set in the max_clients and max_workers parameters settings. This is why the maximum value needs to be considered carefully before performing a concurrent live migration, as illustrated in the sketch after this list.

- Save the file and restart the service.
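As an illustration only, with values that are assumptions rather than recommendations from this guide, allowing roughly 5 concurrent migrations (2 client connections each) could look like the following in /etc/libvirt/libvirtd.conf:

# Uncommented and raised from the defaults; sized for roughly
# 5 concurrent migrations at 2 client connections each.
max_clients = 10
max_workers = 10

Then restart the service so the new limits take effect:

# service libvirtd restart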
Note
There may be cases where a migration connection drops because there are too many ssh sessions that have been started, but not yet authenticated. By default, sshd allows only 10 sessions to be in a "pre-authenticated state" at any time. This setting is controlled by the MaxStartups parameter in the sshd configuration file (located here: /etc/ssh/sshd_config), which may require some adjustment. Adjusting this parameter should be done with caution as the limitation is put in place to prevent DoS attacks (and over-use of resources in general). Setting this value too high will negate its purpose. To change this parameter, edit the file /etc/ssh/sshd_config, remove the # from the beginning of the MaxStartups line, and change the 10 (default value) to a higher number. Remember to save the file and restart the sshd service. For more information, refer to the sshd_config man page.
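For example, after removing the # and raising the value, the line in /etc/ssh/sshd_config might read as follows; 50 is an illustrative value, not a recommendation from this guide:

MaxStartups 50

Then restart the service on the destination host:

# service sshd restart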
