Instance live migration is not working

Issue

  • The instance is booted from a Cinder volume; the Cinder backend is Nimble Storage (a quick verification sketch follows the nova output below).
[heat-admin@overcloud-controller-0 ~]$ nova show f0cc1f78-2f25-46a7-a4d8-d8c45ef5dfa7
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                     |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-SRV-ATTR:host                 | overcloud-compute-1.localdomain                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | overcloud-compute-1.localdomain                          |
[...]

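Before retrying the migration it is worth confirming, from the controller, which volume backs the instance and where it is attached. This is a minimal sketch; the volume UUID is inferred from the Nimble target IQN in the error below and should be cross-checked against the attachment listing first.

# List the volume(s) attached to the instance, then inspect the boot volume;
# it should be "in-use" while the instance runs on overcloud-compute-1.
[heat-admin@overcloud-controller-0 ~]$ nova volume-attachments f0cc1f78-2f25-46a7-a4d8-d8c45ef5dfa7
[heat-admin@overcloud-controller-0 ~]$ cinder show 9a72cda8-a2be-4e86-8ecc-8580923bb566 | grep -E 'status|attachments|os-vol-host-attr:host'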

Source compute:
[...]
2016-03-25 12:44:45.547 3959 ERROR nova.compute.manager [req-315aa2c0-8283-41bc-8e0c-42a3bbe6084f c51e6febd03749cb9735b22539767e09 e6dcf8b9007341aa95182be0847c05dc - - -] [instance: f0cc1f78-2f25-46a7-a4d8-d8c45ef5dfa7] Pre live migration failed at overcloud-compute-0.localdomain
2016-03-25 12:44:45.547 3959 TRACE nova.compute.manager [instance: f0cc1f78-2f25-46a7-a4d8-d8c45ef5dfa7] Traceback (most recent call last):
2016-03-25 12:44:45.547 3959 TRACE nova.compute.manager [instance: f0cc1f78-2f25-46a7-a4d8-d8c45ef5dfa7]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5308, in _do_live_migration
2016-03-25 12:44:45.547 3959 TRACE nova.compute.manager [instance: f0cc1f78-2f25-46a7-a4d8-d8c45ef5dfa7]     block_migration, disk, dest, migrate_data)
2016-03-25 12:44:45.547 3959 TRACE nova.compute.manager [instance: f0cc1f78-2f25-46a7-a4d8-d8c45ef5dfa7]   File "/usr/lib/python2.7/site-packages/nova/compute/rpcapi.py", line 627, in pre_live_migration
2016-03-25 12:44:45.547 3959 TRACE nova.compute.manager [instance: f0cc1f78-2f25-46a7-a4d8-d8c45ef5dfa7]     disk=disk, migrate_data=migrate_data)
2016-03-25 12:44:45.547 3959 TRACE nova.compute.manager [instance: f0cc1f78-2f25-46a7-a4d8-d8c45ef5dfa7]   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 156, in call
2016-03-25 12:44:45.547 3959 TRACE nova.compute.manager [instance: f0cc1f78-2f25-46a7-a4d8-d8c45ef5dfa7]     retry=self.retry)
2016-03-25 12:44:45.547 3959 TRACE nova.compute.manager [instance: f0cc1f78-2f25-46a7-a4d8-d8c45ef5dfa7]   File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in _send
2016-03-25 12:44:45.547 3959 TRACE nova.compute.manager [instance: f0cc1f78-2f25-46a7-a4d8-d8c45ef5dfa7]     timeout=timeout, retry=retry)
2016-03-25 12:44:45.547 3959 TRACE nova.compute.manager [instance: f0cc1f78-2f25-46a7-a4d8-d8c45ef5dfa7]   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 350, in send
2016-03-25 12:44:45.547 3959 TRACE nova.compute.manager [instance: f0cc1f78-2f25-46a7-a4d8-d8c45ef5dfa7]     retry=retry)
2016-03-25 12:44:45.547 3959 TRACE nova.compute.manager [instance: f0cc1f78-2f25-46a7-a4d8-d8c45ef5dfa7]   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 341, in _send
2016-03-25 12:44:45.547 3959 TRACE nova.compute.manager [instance: f0cc1f78-2f25-46a7-a4d8-d8c45ef5dfa7]     raise result
2016-03-25 12:44:45.547 3959 TRACE nova.compute.manager [instance: f0cc1f78-2f25-46a7-a4d8-d8c45ef5dfa7] NovaException_Remote: iSCSI device not found at [u'/dev/disk/by-path/ip-172.22.20.248:3260-iscsi-iqn.2007-11.com.nimblestorage:volume-9a72cda8-a2be-4e86-8ecc-8580923bb566-v2e7f538db2a6f503.000000a4.9cd9a68e-lun-0']

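The exception means the expected /dev/disk/by-path device for the Nimble target never appeared on the destination during pre_live_migration. A quick check on the destination compute (a sketch; the IQN and device path are taken verbatim from the error above) is to look for the device node, an open iSCSI session, and a node record for the target:

[heat-admin@overcloud-compute-0 ~]$ ls -l /dev/disk/by-path/ | grep nimblestorage
[heat-admin@overcloud-compute-0 ~]$ sudo iscsiadm -m session
[heat-admin@overcloud-compute-0 ~]$ sudo iscsiadm -m node | grep nimblestorage
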
_______________________________________

Destination compute:
[...]
2016-03-25 12:43:49.624 3980 DEBUG oslo_concurrency.processutils [req-315aa2c0-8283-41bc-8e0c-42a3bbe6084f c51e6febd03749cb9735b22539767e09 e6dcf8b9007341aa95182be0847c05dc - - -] CMD "sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T iqn.2007-11.com.nimblestorage:volume-9a72cda8-a2be-4e86-8ecc-8580923bb566-v2e7f538db2a6f503.000000a4.9cd9a68e -p xx.xx.xx.xx:3260" returned: 21 in 0.057s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:254  <====


2016-03-25 12:43:49.624 3980 DEBUG oslo_concurrency.processutils [req-315aa2c0-8283-41bc-8e0c-42a3bbe6084f c51e6febd03749cb9735b22539767e09 e6dcf8b9007341aa95182be0847c05dc - - -] u'sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T iqn.2007-11.com.nimblestorage:volume-9a72cda8-a2be-4e86-8ecc-8580923bb566-v2e7f538db2a6f503.000000a4.9cd9a68e -p xx.xx.xx.xx:3260' failed. Not Retrying. execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:291

2016-03-25 12:43:50.116 3980 DEBUG oslo_concurrency.processutils [req-315aa2c0-8283-41bc-8e0c-42a3bbe6084f c51e6febd03749cb9735b22539767e09 e6dcf8b9007341aa95182be0847c05dc - - -] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T iqn.2007-11.com.nimblestorage:volume-9a72cda8-a2be-4e86-8ecc-8580923bb566-v2e7f538db2a6f503.000000a4.9cd9a68e -p xx.xx.xx.xx:3260 --rescan execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:223

2016-03-25 12:43:50.173 3980 DEBUG oslo_concurrency.processutils [req-315aa2c0-8283-41bc-8e0c-42a3bbe6084f c51e6febd03749cb9735b22539767e09 e6dcf8b9007341aa95182be0847c05dc - - -] CMD "sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T iqn.2007-11.com.nimblestorage:volume-9a72cda8-a2be-4e86-8ecc-8580923bb566-v2e7f538db2a6f503.000000a4.9cd9a68e -p xx.xx.xx.xx:3260 --rescan" returned: 0 in 0.057s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:254
2016-03-25 12:43:50.173 3980 DEBUG nova.virt.libvirt.volume [req-315aa2c0-8283-41bc-8e0c-42a3bbe6084f c51e6febd03749cb9735b22539767e09 e6dcf8b9007341aa95182be0847c05dc - - -] iscsiadm ('--rescan',): stdout= stderr=iscsiadm: invalid error code 65280

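Return code 21 from iscsiadm -m node typically means "no records found": the destination compute has no node record for the Nimble target, so the subsequent --rescan has nothing useful to act on and the by-path device never appears. The sketch below narrows this down from the destination compute; <portal-ip> is a placeholder for the Nimble data/discovery IP used by the Cinder backend (masked as xx.xx.xx.xx in the log above), and on the Nimble side the destination host's initiator typically also needs to be allowed by the volume's access control (initiator group).

# Is the iSCSI portal reachable on TCP/3260 from this host?
[heat-admin@overcloud-compute-0 ~]$ ping -c 3 <portal-ip>
[heat-admin@overcloud-compute-0 ~]$ nc -z -w 3 <portal-ip> 3260; echo $?

# Does target discovery from this host return the Nimble volume IQN?
[heat-admin@overcloud-compute-0 ~]$ sudo iscsiadm -m discovery -t sendtargets -p <portal-ip>:3260

# Compare the initiator name with the source compute (overcloud-compute-1);
# the array must allow the destination initiator as well.
[heat-admin@overcloud-compute-0 ~]$ cat /etc/iscsi/initiatorname.iscsi
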
Environment

  • Red Hat OpenStack Platform
