Unable to migrate an instance off a server that is down hard

Solution In Progress - Updated

Issue

  • We are trying to run a host-evacuate on a physical compute node. All instances migrated successfully except one, which fails during evacuation with the following error:
2022-07-11 09:40:28.741 7 INFO nova.compute.manager [req-32aa5961-d2e8-46ad-bf19-633bc1c7728f fff6b73d3a504cd99460d277b6ed4111 64f47aac10424bf7b22d400d41de1ffa - default default] [instance: 613c60fd-65ad-4c29-ae88-917903961c72] Evacuating instance
2022-07-11 09:40:28.785 7 INFO nova.compute.claims [req-32aa5961-d2e8-46ad-bf19-633bc1c7728f fff6b73d3a504cd99460d277b6ed4111 64f47aac10424bf7b22d400d41de1ffa - default default] [instance: 613c60fd-65ad-4c29-ae88-917903961c72] Claim successful on node overcloud-compute-0
2022-07-11 09:40:28.904 7 INFO nova.compute.resource_tracker [req-32aa5961-d2e8-46ad-bf19-633bc1c7728f fff6b73d3a504cd99460d277b6ed4111 64f47aac10424bf7b22d400d41de1ffa - default default] [instance: 613c60fd-65ad-4c29-ae88-917903961c72] Updating resource usage from migration f3ab47b1-044b-4ca6-b4f0-f41bd93c198c
2022-07-11 09:40:29.353 7 INFO nova.compute.manager [req-32aa5961-d2e8-46ad-bf19-633bc1c7728f fff6b73d3a504cd99460d277b6ed4111 64f47aac10424bf7b22d400d41de1ffa - default default] disk on shared storage, evacuating using existing disk
2022-07-11 09:40:31.228 7 INFO nova.compute.manager [req-32aa5961-d2e8-46ad-bf19-633bc1c7728f fff6b73d3a504cd99460d277b6ed4111 64f47aac10424bf7b22d400d41de1ffa - default default] [instance: 613c60fd-65ad-4c29-ae88-917903961c72] Detaching volume c429e370-a68b-45dc-bccc-bc8d75454850
2022-07-11 09:40:31.578 7 INFO nova.virt.block_device [req-32aa5961-d2e8-46ad-bf19-633bc1c7728f fff6b73d3a504cd99460d277b6ed4111 64f47aac10424bf7b22d400d41de1ffa - default default] [instance: 613c60fd-65ad-4c29-ae88-917903961c72] Booting with volume c429e370-a68b-45dc-bccc-bc8d75454850 at /dev/vda
2022-07-11 09:40:31.772 7 ERROR nova.volume.cinder [req-32aa5961-d2e8-46ad-bf19-633bc1c7728f fff6b73d3a504cd99460d277b6ed4111 64f47aac10424bf7b22d400d41de1ffa - default default] Update attachment failed for attachment b997420b-b61a-4346-a09b-2674c469d659. Error: Unable to update attachment.(Invalid volume: duplicate connectors detected on volume c429e370-a68b-45dc-bccc-bc8d75454850). (HTTP 500) (Request-ID: req-08fe14c2-faa1-4a8c-9608-b6039a0ccf1a) Code: 500: cinderclient.exceptions.ClientException: Unable to update attachment.(Invalid volume: duplicate connectors detected on volume c429e370-a68b-45dc-bccc-bc8d75454850). (HTTP 500) (Request-ID: req-08fe14c2-faa1-4a8c-9608-b6039a0ccf1a)
2022-07-11 09:40:31.787 7 ERROR nova.compute.manager [req-32aa5961-d2e8-46ad-bf19-633bc1c7728f fff6b73d3a504cd99460d277b6ed4111 64f47aac10424bf7b22d400d41de1ffa - default default] [instance: 613c60fd-65ad-4c29-ae88-917903961c72] Instance failed block device setup: cinderclient.exceptions.ClientException: Unable to update attachment.(Invalid volume: duplicate connectors detected on volume c429e370-a68b-45dc-bccc-bc8d75454850). (HTTP 500) (Request-ID: req-08fe14c2-faa1-4a8c-9608-b6039a0ccf1a)
2022-07-11 09:40:31.787 7 ERROR nova.compute.manager [instance: 613c60fd-65ad-4c29-ae88-917903961c72] Traceback (most recent call last):
2022-07-11 09:40:31.787 7 ERROR nova.compute.manager [instance: 613c60fd-65ad-4c29-ae88-917903961c72]   File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 3611, in _do_rebuild_instance
2022-07-11 09:40:31.787 7 ERROR nova.compute.manager [instance: 613c60fd-65ad-4c29-ae88-917903961c72]     self.driver.rebuild(**kwargs)
2022-07-11 09:40:31.787 7 ERROR nova.compute.manager [instance: 613c60fd-65ad-4c29-ae88-917903961c72]   File "/usr/lib/python3.6/site-packages/nova/virt/driver.py", line 314, in rebuild
2022-07-11 09:40:31.787 7 ERROR nova.compute.manager [instance: 613c60fd-65ad-4c29-ae88-917903961c72]     raise NotImplementedError()
2022-07-11 09:40:31.787 7 ERROR nova.compute.manager [instance: 613c60fd-65ad-4c29-ae88-917903961c72] NotImplementedError
2022-07-11 09:40:31.787 7 ERROR nova.compute.manager [instance: 613c60fd-65ad-4c29-ae88-917903961c72]
2022-07-11 09:40:31.787 7 ERROR nova.compute.manager [instance: 613c60fd-65ad-4c29-ae88-917903961c72] During handling of the above exception, another exception occurred:
2022-07-11 09:40:31.787 7 ERROR nova.compute.manager [instance: 613c60fd-65ad-4c29-ae88-917903961c72]
2022-07-11 09:40:31.787 7 ERROR nova.compute.manager [instance: 613c60fd-65ad-4c29-ae88-917903961c72] Traceback (most recent call last):
2022-07-11 09:40:31.787 7 ERROR nova.compute.manager [instance: 613c60fd-65ad-4c29-ae88-917903961c72]   File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 1912, in _prep_block_device
2022-07-11 09:40:31.787 7 ERROR nova.compute.manager [instance: 613c60fd-65ad-4c29-ae88-917903961c72]     wait_func=self._await_block_device_map_created)
2022-07-11 09:40:31.787 7 ERROR nova.compute.manager [instance: 613c60fd-65ad-4c29-ae88-917903961c72]   File "/usr/lib/python3.6/site-packages/nova/virt/block_device.py", line 892, in attach_block_devices
2022-07-11 09:40:31.787 7 ERROR nova.compute.manager [instance: 613c60fd-65ad-4c29-ae88-917903961c72]     _log_and_attach(device)
2022-07-11 09:40:31.787 7 ERROR nova.compute.manager [instance: 613c60fd-65ad-4c29-ae88-917903961c72]   File "/usr/lib/python3.6/site-packages/nova/virt/block_device.py", line 889, in _log_and_attach
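The HTTP 500 "duplicate connectors detected" response comes from cinder's attachment-update validation. As a rough, hypothetical sketch (this is not cinder's actual code, and the helper name is invented), the condition amounts to a volume carrying more than one attachment whose connector reports the same host:

```python
# Hypothetical illustration only -- not cinder's implementation.
# Each attachment record for a volume may carry a "connector" dict
# describing the host that initiated the attach. If two attachments
# report the same connector host, cinder refuses to update the
# attachment and returns the HTTP 500 seen in the log above.

def find_duplicate_connector_hosts(attachments):
    """Return the set of hosts that appear in more than one attachment."""
    seen = set()
    duplicates = set()
    for attachment in attachments:
        # Reserved attachments may not have a connector yet.
        host = (attachment.get("connector") or {}).get("host")
        if host is None:
            continue
        if host in seen:
            duplicates.add(host)
        seen.add(host)
    return duplicates

# Example: a stale attachment record alongside the new one created
# for the evacuation target (IDs and hosts here are made up).
attachments = [
    {"id": "stale-attachment", "connector": {"host": "overcloud-compute-0"}},
    {"id": "new-attachment", "connector": {"host": "overcloud-compute-0"}},
]
print(find_duplicate_connector_hosts(attachments))
```

In an evacuation from a host that is down hard, the source compute node cannot clean up its own attachment record, so an error of this shape typically points to a stale attachment left behind on the volume alongside the one created for the destination host.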

Environment

  • Red Hat OpenStack Platform 16.1 (RHOSP)
