A Live Storage Migration fails to complete, leaving a disk in a locked status and an unfinished task/job
Environment
- Red Hat Enterprise Virtualization (RHEV) 3.4.3
- Red Hat Enterprise Linux (RHEL) 6.5 host
- vdsm-4.13.2-0.13
- vdsm-4.14.11-5
- Red Hat Enterprise Virtualization (RHEV) 3.5
Issue
A Live Storage Migration sequence does not complete and a disk remains in a locked status:
- In the Admin Portal, the Disks tab shows the disk in a locked status.
- The Tasks pane in the Admin Portal shows that the storage migration is still in progress.
- The disk volumes now physically reside in both storage domains; however, the Admin Portal (like the database) shows them residing only in the source storage domain.
- The VM is performing mirrored I/O to the active images in both storage domains.
Resolution
- The problem described in the Issue section was tracked by BZ 1161261 for RHEV 3.4. That bug is now closed, as a fix was included in RHEV 3.5.
- However, some instances of this same scenario have been seen in RHEV 3.5, for which BZ 1318724 was opened.
- The interim solution is to modify the database to set the job as FINISHED and restart the engine, then shut the VM down and remove the images from the destination storage domain. The images must not be removed from the destination storage domain while the VM is up and running, as they are open and in use. The storage migration can then be retried, either live or offline. A minimal sketch of the database step follows this list.
- Please contact Red Hat Technical Support for assistance in resolving the issue.
- Also, please reference this article for another similar Live Storage Migration problem.
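For reference, the stuck job row lives in the engine database's job table. Below is a minimal sketch of the kind of update involved, with the table and column names inferred from the Diagnostic Steps output and the job_id taken from the example there; obtain the exact, supported statements from Red Hat Support before modifying the database.
# Sketch only -- run against the engine database on the RHEV-M host
su - postgres -c "psql engine -c \"UPDATE job SET status = 'FINISHED', end_time = now() WHERE job_id = 'de4e97cf-e747-4657-bd07-e60aa278c0f1';\""
# Restart the engine so it stops tracking the stale job
service ovirt-engine restart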
Root Cause
Not all of the steps in the sequence performed by the engine were executed. Only the following commands were executed:
CloneImageGroupStructureVDSCommand
VmReplicateDiskStartVDSCommand
SyncImageGroupDataVDSCommand
These were not:
VmReplicateDiskFinishVDSCommand
DeleteImageGroupVDSCommand
The result was that the images for the disk were copied to the destination storage domain but not removed from the source storage domain, i.e. they existed in both domains. The database, however, still showed the images as associated only with the source domain. Also, the VM's qemu-kvm
process was performing mirrored I/O to the active images, i.e. the active images in both storage domains were in use.
NOTE:
- No errors were reported.
- This condition was reproduced by running a certain number of concurrent Live Storage Migrations, in this case 20.
Diagnostic Steps
After a Live Storage Migration (LSM), a disk remains in "locked" status and the LSM sequence does not complete on the engine side: VmReplicateDiskFinishVDSCommand and DeleteImageGroupVDSCommand are never executed.
- 20 disks were live migrated from 'LSM_GFW' to 'NFS_GFW', both NFS data domains. Only 19 completed.
# grep "FINISH, VmReplicateDiskFinishVDSCommand" engine.log|wc -l
19
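If it is not obvious which migration stalled, the correlation IDs that started a disk replication but never finished one can be diffed; a sketch, assuming the engine.log format shown in the excerpts below:
# Sketch: correlation IDs with a replicate start but no replicate finish
comm -23 \
  <(grep "VmReplicateDiskStartVDSCommand" engine.log | grep -o '\[[a-zA-Z0-9-]*\]' | sort -u) \
  <(grep "FINISH, VmReplicateDiskFinishVDSCommand" engine.log | grep -o '\[[a-zA-Z0-9-]*\]' | sort -u)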
- Unfinished job.
correlation_id | job_id | action_type | description | status
----------------+--------------------------------------+-----------------+-----------------------------------------------------+---------
edf0edf | de4e97cf-e747-4657-bd07-e60aa278c0f1 | LiveMigrateDisk | Migrating Disk lsm-vm_Disk1 from LSM_GFW to NFS_GFW | STARTED
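A query of roughly this shape produces the table above (column names taken from the output itself; the exact supported queries are available from Red Hat Support):
su - postgres -c "psql engine -c \"SELECT correlation_id, job_id, action_type, description, status FROM job WHERE status = 'STARTED';\""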
- Correlation id and VM name.
32196296-26db-4316-8a04-a426c76f4957 | lsm-pool-9
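This pairing can be read from the vm_static table (a hypothetical sketch; the name and UUID above may equally have come from a join against the job's subject entities):
su - postgres -c "psql engine -c \"SELECT vm_guid, vm_name FROM vm_static WHERE vm_name = 'lsm-pool-9';\""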
- Images in the database for VM 'lsm-pool-9'.
image_guid | image_group_id | vm_snapshot_id | parentid | imagestatus | creation_date | volume_type | volume_format | active
--------------------------------------+--------------------------------------+--------------------------------------+--------------------------------------+-------------+------------------------+-------------+---------------+--------
bfbcf27c-c196-4f97-801c-35d9874d74e5 | 402fb943-ba57-4d1c-b95d-440646abedd5 | 7b0b4f24-74af-4054-b764-faf347c5fd23 | 2e4fbb3b-22cf-438a-ab41-89667ffafe69 | 2 | 2014-11-04 17:16:37-05 | 2 | 4 | f
89fa8627-65c4-417f-a73c-a4d8e5df3b9e | 402fb943-ba57-4d1c-b95d-440646abedd5 | 505261cf-7eff-495b-bca1-96a8ae24dec1 | bfbcf27c-c196-4f97-801c-35d9874d74e5 | 2 | 2014-11-04 17:33:08-05 | 2 | 4 | t
(2 rows)
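A sketch of the corresponding query, with column names taken from the output above; note that imagestatus = 2 is LOCKED:
su - postgres -c "psql engine -c \"SELECT image_guid, image_group_id, vm_snapshot_id, parentid, imagestatus, creation_date, volume_type, volume_format, active FROM images WHERE image_group_id = '402fb943-ba57-4d1c-b95d-440646abedd5';\""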
- Storage domains.
9482b44a-c8f9-4b28-8700-d27424e9d4ac - LSM_GFW - 10.x.x.x:/home/exports/lsm_nfs
23cb53e2-9aa4-4a85-ac55-f46c5443c486 - NFS_GFW - 10.x.x.x:/home/exports/rhev_export
- image_storage_domain_map.
image_id | storage_domain_id | quota_id
--------------------------------------+--------------------------------------+----------
bfbcf27c-c196-4f97-801c-35d9874d74e5 | 9482b44a-c8f9-4b28-8700-d27424e9d4ac |
89fa8627-65c4-417f-a73c-a4d8e5df3b9e | 9482b44a-c8f9-4b28-8700-d27424e9d4ac |
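A sketch of the corresponding query; both rows still map only to the source domain (9482b44a-...), even though the images also exist on the destination:
su - postgres -c "psql engine -c \"SELECT image_id, storage_domain_id, quota_id FROM image_storage_domain_map WHERE image_id IN ('bfbcf27c-c196-4f97-801c-35d9874d74e5', '89fa8627-65c4-417f-a73c-a4d8e5df3b9e');\""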
- The disk images are on both Storage Domains
# grep 402fb943-ba57-4d1c-b95d-440646abedd5 su_vdsm_-s_.bin.sh_-c_.bin.ls_-lR_.rhev.data-center
drwxr-xr-x. 2 vdsm kvm 4096 Nov 4 17:32 402fb943-ba57-4d1c-b95d-440646abedd5
/rhev/data-center/mnt/10.x.x.x:_home_exports_lsm__nfs/9482b44a-c8f9-4b28-8700-d27424e9d4ac/images/402fb943-ba57-4d1c-b95d-440646abedd5:
drwxr-xr-x. 2 vdsm kvm 4096 Nov 4 17:35 402fb943-ba57-4d1c-b95d-440646abedd5
/rhev/data-center/mnt/10.x.x.x:_home_exports_rhev__export/23cb53e2-9aa4-4a85-ac55-f46c5443c486/images/402fb943-ba57-4d1c-b95d-440646abedd5
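On a live system the same check can be made directly on a host that mounts both domains; a sketch, with the mount-point pattern assumed from the sosreport paths above:
# List the image group under both the source and destination domains
for sd in 9482b44a-c8f9-4b28-8700-d27424e9d4ac 23cb53e2-9aa4-4a85-ac55-f46c5443c486; do
    ls -l /rhev/data-center/mnt/*/"$sd"/images/402fb943-ba57-4d1c-b95d-440646abedd5
done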
- Correlation id edf0edf in the engine.log for VM 'lsm-pool-9' (VmReplicateDiskFinishVDSCommand and DeleteImageGroupVDSCommand are never executed).
2014-11-04 17:34:20,244 INFO [org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand] (org.ovirt.thread.pool-4-thread-46) [edf0edf] Lock Acquired to object EngineLock [exclusiveLocks= , sharedLocks= key: 32196296-26db-4316-8a04-a426c76f4957 value: VM
2014-11-04 17:34:20,321 INFO [org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand] (org.ovirt.thread.pool-4-thread-46) [edf0edf] Running command: LiveMigrateDiskCommand Task handler: CreateImagePlaceholderTaskHandler internal: true. Entities affected : ID: 402fb943-ba57-4d1c-b95d-440646abedd5 Type: Disk, ID: 23cb53e2-9aa4-4a85-ac55-f46c5443c486 Type: Storage
2014-11-04 17:34:20,326 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CloneImageGroupStructureVDSCommand] (org.ovirt.thread.pool-4-thread-46) [edf0edf] START, CloneImageGroupStructureVDSCommand( storagePoolId = 6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8, ignoreFailoverLimit = false, storageDomainId = 9482b44a-c8f9-4b28-8700-d27424e9d4ac, imageGroupId = 402fb943-ba57-4d1c-b95d-440646abedd5, dstDomainId = 23cb53e2-9aa4-4a85-ac55-f46c5443c486), log id: 1ad5ebda
2014-11-04 17:34:43,345 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CloneImageGroupStructureVDSCommand] (org.ovirt.thread.pool-4-thread-46) [edf0edf] FINISH, CloneImageGroupStructureVDSCommand, log id: 1ad5ebda
2014-11-04 17:34:43,364 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (org.ovirt.thread.pool-4-thread-46) [edf0edf] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 7db57f28-b10a-476b-bff0-f7578f4c711e
2014-11-04 17:34:43,364 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (org.ovirt.thread.pool-4-thread-46) [edf0edf] CommandMultiAsyncTasks::AttachTask: Attaching task 8214c85b-b577-48ae-96de-712e61f84fad to command 7db57f28-b10a-476b-bff0-f7578f4c711e.
2014-11-04 17:34:45,826 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (org.ovirt.thread.pool-4-thread-46) [edf0edf] Adding task 8214c85b-b577-48ae-96de-712e61f84fad (Parent Command LiveMigrateDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet..
2014-11-04 17:34:45,872 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-4-thread-46) [edf0edf] Correlation ID: edf0edf, Job ID: de4e97cf-e747-4657-bd07-e60aa278c0f1, Call Stack: null, Custom Event ID: -1, Message: User admin moving disk lsm-vm_Disk1 to domain NFS_GFW.
2014-11-04 17:34:45,872 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (org.ovirt.thread.pool-4-thread-46) [edf0edf] BaseAsyncTask::startPollingTask: Starting to poll task 8214c85b-b577-48ae-96de-712e61f84fad.
2014-11-04 17:34:45,872 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (org.ovirt.thread.pool-4-thread-46) [edf0edf] BaseAsyncTask::startPollingTask: Starting to poll task 8214c85b-b577-48ae-96de-712e61f84fad.
2014-11-04 17:34:46,070 INFO [org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] (org.ovirt.thread.pool-4-thread-46) [edf0edf] Lock freed to object EngineLock [exclusiveLocks= key: 402fb943-ba57-4d1c-b95d-440646abedd5 value: DISK
2014-11-04 17:35:25,734 INFO [org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand] (org.ovirt.thread.pool-4-thread-27) [edf0edf] Ending command successfully: org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand
2014-11-04 17:35:25,734 INFO [org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand] (org.ovirt.thread.pool-4-thread-27) [edf0edf] Running command: LiveMigrateDiskCommand Task handler: VmReplicateDiskStartTaskHandler internal: false. Entities affected : ID: 402fb943-ba57-4d1c-b95d-440646abedd5 Type: Disk, ID: 23cb53e2-9aa4-4a85-ac55-f46c5443c486 Type: Storage
2014-11-04 17:35:25,735 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] (org.ovirt.thread.pool-4-thread-27) [edf0edf] START, VmReplicateDiskStartVDSCommand(HostName = rhevh-1, HostId = a2c85a15-9b53-493d-9731-8b5cccdd8951, vmId=32196296-26db-4316-8a04-a426c76f4957), log id: 20e9630
2014-11-04 17:35:30,262 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] (org.ovirt.thread.pool-4-thread-27) [edf0edf] FINISH, VmReplicateDiskStartVDSCommand, log id: 20e9630
2014-11-04 17:35:30,268 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SyncImageGroupDataVDSCommand] (org.ovirt.thread.pool-4-thread-27) [edf0edf] START, SyncImageGroupDataVDSCommand( storagePoolId = 6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8, ignoreFailoverLimit = false, storageDomainId = 9482b44a-c8f9-4b28-8700-d27424e9d4ac, imageGroupId = 402fb943-ba57-4d1c-b95d-440646abedd5, dstDomainId = 23cb53e2-9aa4-4a85-ac55-f46c5443c486, syncType=INTERNAL), log id: a635a65
2014-11-04 17:35:58,187 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SyncImageGroupDataVDSCommand] (org.ovirt.thread.pool-4-thread-27) [edf0edf] FINISH, SyncImageGroupDataVDSCommand, log id: a635a65
2014-11-04 17:35:58,205 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (org.ovirt.thread.pool-4-thread-27) [edf0edf] CommandMultiAsyncTasks::AttachTask: Attaching task c0615c59-78de-4d0e-8f10-2167bf3e2cbb to command 7db57f28-b10a-476b-bff0-f7578f4c711e.
2014-11-04 17:35:58,217 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (org.ovirt.thread.pool-4-thread-27) [edf0edf] Adding task c0615c59-78de-4d0e-8f10-2167bf3e2cbb (Parent Command LiveMigrateDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet..
2014-11-04 17:35:58,258 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (org.ovirt.thread.pool-4-thread-27) [edf0edf] BaseAsyncTask::startPollingTask: Starting to poll task c0615c59-78de-4d0e-8f10-2167bf3e2cbb.
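The absence of the two ending commands for this flow can be confirmed directly; no output is expected:
grep edf0edf engine.log | grep -E "VmReplicateDiskFinishVDSCommand|DeleteImageGroupVDSCommand"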
vdsm.log extracts.
- qcow image created for snapshot-generated image in src domain
e7d8ac46-8fd5-43b6-84e1-32ee3f1b7b86::DEBUG::2014-11-04 17:33:06,539::volume::1042::Storage.Misc.excCmd::(createVolume) '/usr/bin/qemu-img create -f qcow2 -F qcow2 -b ../402fb943-ba57-4d1c-b95d-440646abedd5/bfbcf27c-c196-4f97-801c-35d9874d74e5 /rhev/data-center/6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8/9482b44a-c8f9-4b28-8700-d27424e9d4ac/images/402fb943-ba57-4d1c-b95d-440646abedd5/89fa8627-65c4-417f-a73c-a4d8e5df3b9e' (cwd /rhev/data-center/6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8/9482b44a-c8f9-4b28-8700-d27424e9d4ac/images/402fb943-ba57-4d1c-b95d-440646abedd5)
- cloneImageStructure
Thread-49349::INFO::2014-11-04 17:34:48,182::logUtils::44::dispatcher::(wrapper) Run and protect: cloneImageStructure(spUUID='6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8', sdUUID='9482b44a-c8f9-4b28-8700-d27424e9d4ac', imgUUID='402fb943-ba57-4d1c-b95d-440646abedd5', dstSdUUID='23cb53e2-9aa4-4a85-ac55-f46c5443c486')
- qcow image created for base image in dest domain, and snapshot-generated image in dest domain
8214c85b-b577-48ae-96de-712e61f84fad::DEBUG::2014-11-04 17:35:00,301::volume::1042::Storage.Misc.excCmd::(createVolume) '/usr/bin/qemu-img create -f qcow2 -F raw -b ../402fb943-ba57-4d1c-b95d-440646abedd5/2e4fbb3b-22cf-438a-ab41-89667ffafe69 /rhev/data-center/6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8/23cb53e2-9aa4-4a85-ac55-f46c5443c486/images/402fb943-ba57-4d1c-b95d-440646abedd5/bfbcf27c-c196-4f97-801c-35d9874d74e5' (cwd /rhev/data-center/6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8/23cb53e2-9aa4-4a85-ac55-f46c5443c486/images/402fb943-ba57-4d1c-b95d-440646abedd5)
8214c85b-b577-48ae-96de-712e61f84fad::DEBUG::2014-11-04 17:35:08,242::volume::1042::Storage.Misc.excCmd::(createVolume) '/usr/bin/qemu-img create -f qcow2 -F qcow2 -b ../402fb943-ba57-4d1c-b95d-440646abedd5/bfbcf27c-c196-4f97-801c-35d9874d74e5 /rhev/data-center/6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8/23cb53e2-9aa4-4a85-ac55-f46c5443c486/images/402fb943-ba57-4d1c-b95d-440646abedd5/89fa8627-65c4-417f-a73c-a4d8e5df3b9e' (cwd /rhev/data-center/6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8/23cb53e2-9aa4-4a85-ac55-f46c5443c486/images/402fb943-ba57-4d1c-b95d-440646abedd5)
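The resulting chain on the destination can be verified with qemu-img, using the volume path from the log line above; the top volume should report the bfbcf27c-... volume as its backing file:
qemu-img info /rhev/data-center/6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8/23cb53e2-9aa4-4a85-ac55-f46c5443c486/images/402fb943-ba57-4d1c-b95d-440646abedd5/89fa8627-65c4-417f-a73c-a4d8e5df3b9e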
- vmDiskReplicateStart
Thread-49343::DEBUG::2014-11-04 17:35:35,265::BindingXMLRPC::1067::vds::(wrapper) client [10.10.177.125]::call vmDiskReplicateStart with ('32196296-26db-4316-8a04-a426c76f4957', {'device': 'disk', 'domainID': '9482b44a-c8f9-4b28-8700-d27424e9d4ac', 'volumeID': '89fa8627-65c4-417f-a73c-a4d8e5df3b9e', 'poolID': '6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8', 'imageID': '402fb943-ba57-4d1c-b95d-440646abedd5'}, {'device': 'disk', 'domainID': '23cb53e2-9aa4-4a85-ac55-f46c5443c486', 'volumeID': '89fa8627-65c4-417f-a73c-a4d8e5df3b9e', 'poolID': '6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8', 'imageID': '402fb943-ba57-4d1c-b95d-440646abedd5'}) {} flowID [edf0edf]
Thread-49375::DEBUG::2014-11-04 17:35:37,387::BindingXMLRPC::1074::vds::(wrapper) return vmDiskReplicateStart with {'status': {'message': 'Done', 'code': 0}}
- syncImageData started
Thread-49349::INFO::2014-11-04 17:36:03,881::logUtils::44::dispatcher::(wrapper) Run and protect: syncImageData(spUUID='6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8', sdUUID='9482b44a-c8f9-4b28-8700-d27424e9d4ac', imgUUID='402fb943-ba57-4d1c-b95d-440646abedd5', dstSdUUID='23cb53e2-9aa4-4a85-ac55-f46c5443c486', syncType='INTERNAL')
- base image data copied from src to dest domain
c0615c59-78de-4d0e-8f10-2167bf3e2cbb::DEBUG::2014-11-04 17:36:11,786::utils::683::Storage.Misc.excCmd::(watchCmd) '/bin/nice -n 19 /usr/bin/ionice -c 3 /bin/dd if=/rhev/data-center/mnt/10.x.x.x:_home_exports_lsm__nfs/9482b44a-c8f9-4b28-8700-d27424e9d4ac/images/402fb943-ba57-4d1c-b95d-440646abedd5/bfbcf27c-c196-4f97-801c-35d9874d74e5 of=/rhev/data-center/mnt/10.x.x.x:_home_exports_rhev__export/23cb53e2-9aa4-4a85-ac55-f46c5443c486/images/402fb943-ba57-4d1c-b95d-440646abedd5/bfbcf27c-c196-4f97-801c-35d9874d74e5 bs=197120 seek=0 skip=0 conv=notrunc count=1 oflag=direct' (cwd None)
c0615c59-78de-4d0e-8f10-2167bf3e2cbb::DEBUG::2014-11-04 17:36:11,943::utils::697::Storage.Misc.excCmd::(watchCmd) SUCCESS: <err> = ['1+0 records in', '1+0 records out', '197120 bytes (197 kB) copied, 0.0581716 s, 3.4 MB/s']; <rc> = 0
- syncImageData task ended
Thread-49375::INFO::2014-11-04 17:36:51,714::logUtils::44::dispatcher::(wrapper) Run and protect: clearTask(taskID='c0615c59-78de-4d0e-8f10-2167bf3e2cbb', spUUID=None, options=None)
Thread-49375::DEBUG::2014-11-04 17:36:51,714::taskManager::161::TaskManager::(clearTask) Entry. taskID: c0615c59-78de-4d0e-8f10-2167bf3e2cbb
c0615c59-78de-4d0e-8f10-2167bf3e2cbb::DEBUG::2014-11-04 17:37:34,111::threadPool::57::Misc.ThreadPool::(setRunningTask) Number of running tasks: 3
- The main problem here is that mirrored I/O has been set up to both active images.
$ xzgrep -i mirror libvirtd.log.2?.xz |grep 89fa8627-65c4-417f-a73c-a4d8e5df3b9e
libvirtd.log.28.xz:2014-11-04 22:35:37.160+0000: 19560: debug : qemuMonitorDriveMirror:2842 : mon=0x7ff910142100, device=drive-virtio-disk0, file=/rhev/data-center/6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8/23cb53e2-9aa4-4a85-ac55-f46c5443c486/images/402fb943-ba57-4d1c-b95d-440646abedd5/89fa8627-65c4-417f-a73c-a4d8e5df3b9e, format=qcow2, bandwidth=0, flags=3
libvirtd.log.28.xz:2014-11-04 22:35:37.161+0000: 19560: debug : qemuMonitorSend:904 : QEMU_MONITOR_SEND_MSG: mon=0x7ff910142100 msg={"execute":"drive-mirror","arguments":{"device":"drive-virtio-disk0","target":"/rhev/data-center/6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8/23cb53e2-9aa4-4a85-ac55-f46c5443c486/images/402fb943-ba57-4d1c-b95d-440646abedd5/89fa8627-65c4-417f-a73c-a4d8e5df3b9e","speed":0,"sync":"top","mode":"existing","format":"qcow2"},"id":"libvirt-75"}
libvirtd.log.28.xz:2014-11-04 22:35:37.166+0000: 19553: debug : qemuMonitorIOWrite:462 : QEMU_MONITOR_IO_WRITE: mon=0x7ff910142100 buf={"execute":"drive-mirror","arguments":{"device":"drive-virtio-disk0","target":"/rhev/data-center/6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8/23cb53e2-9aa4-4a85-ac55-f46c5443c486/images/402fb943-ba57-4d1c-b95d-440646abedd5/89fa8627-65c4-417f-a73c-a4d8e5df3b9e","speed":0,"sync":"top","mode":"existing","format":"qcow2"},"id":"libvirt-75"}
libvirtd.log.28.xz:2014-11-04 22:35:37.169+0000: 19560: debug : qemuMonitorSend:904 : QEMU_MONITOR_SEND_MSG: mon=0x7ff910142100 msg={"execute":"__com.redhat_drive-mirror","arguments":{"device":"drive-virtio-disk0","target":"/rhev/data-center/6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8/23cb53e2-9aa4-4a85-ac55-f46c5443c486/images/402fb943-ba57-4d1c-b95d-440646abedd5/89fa8627-65c4-417f-a73c-a4d8e5df3b9e","speed":0,"full":false,"mode":"existing","format":"qcow2"},"id":"libvirt-76"}
libvirtd.log.28.xz:2014-11-04 22:35:37.170+0000: 19553: debug : qemuMonitorIOWrite:462 : QEMU_MONITOR_IO_WRITE: mon=0x7ff910142100 buf={"execute":"__com.redhat_drive-mirror","arguments":{"device":"drive-virtio-disk0","target":"/rhev/data-center/6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8/23cb53e2-9aa4-4a85-ac55-f46c5443c486/images/402fb943-ba57-4d1c-b95d-440646abedd5/89fa8627-65c4-417f-a73c-a4d8e5df3b9e","speed":0,"full":false,"mode":"existing","format":"qcow2"},"id":"libvirt-76"}
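The active mirror can also be inspected from libvirt on the host; a sketch, assuming the drive-virtio-disk0 alias above corresponds to target vda:
# Show the active block job (the drive mirror) for the VM's disk
virsh blockjob lsm-pool-9 vda --info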
# fuser /rhev/data-center/6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8/9482b44a-c8f9-4b28-8700-d27424e9d4ac/images/402fb943-ba57-4d1c-b95d-440646abedd5/89fa8627-65c4-417f-a73c-a4d8e5df3b9e
/rhev/data-center/6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8/9482b44a-c8f9-4b28-8700-d27424e9d4ac/images/402fb943-ba57-4d1c-b95d-440646abedd5/89fa8627-65c4-417f-a73c-a4d8e5df3b9e: 11454
# fuser /rhev/data-center/6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8/23cb53e2-9aa4-4a85-ac55-f46c5443c486/images/402fb943-ba57-4d1c-b95d-440646abedd5/89fa8627-65c4-417f-a73c-a4d8e5df3b9e
/rhev/data-center/6d1b58e5-cc59-41e7-a65c-d2ef3def3ef8/23cb53e2-9aa4-4a85-ac55-f46c5443c486/images/402fb943-ba57-4d1c-b95d-440646abedd5/89fa8627-65c4-417f-a73c-a4d8e5df3b9e: 11454
# ps -ef|grep 11454
qemu 11454 1 0 Nov04 ? 00:17:17 /usr/libexec/qemu-kvm -name lsm-pool-9 -
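The same double-open can be viewed from the process side (PID taken from the ps output above):
# Both active images are held open by the same qemu-kvm process
lsof -p 11454 | grep 89fa8627-65c4-417f-a73c-a4d8e5df3b9e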
- What's missing:
a) The ending sequence in the engine did not occur; the following commands were not executed:
VmReplicateDiskFinishVDSCommand
DeleteImageGroupVDSCommand
b) The disk images exist in both Storage Domains, so the VM is up and running and writing to both images, but the database still sees these images as belonging only to the source domain.
NOTE: Some of the output above is from SQL commands. Please contact Red Hat Technical Support for details about these commands.