RHEL OSP 4.0 instance gets stuck in SPAWNING

Hi,

I am currently using RHEL OSP 4.0, which I installed with packstack. My problem: when I try to launch an instance from an image, the instance gets stuck in the SPAWNING state indefinitely, with no errors in my logs. Launching an instance from a volume works fine. The MD5 checksums of the images I'm using are correct (http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2).
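
For reference, this is roughly how the image can be re-verified on the node where it is stored (a sketch; the local filename and path are assumed):

md5sum fedora-19.x86_64.qcow2
qemu-img check fedora-19.x86_64.qcow2
qemu-img info fedora-19.x86_64.qcow2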

nova-scheduler correctly selects the right host:

2014-03-18 15:08:31.184 2599 INFO nova.scheduler.filter_scheduler [req-78abeb9f-f734-4be2-8ad5-7bac385ba507
0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] Attempting to build 1 instance(s) uuids:
[u'04890bed-e7ca-4c9a-b630-b99012ca2991']
2014-03-18 15:08:31.219 2599 INFO nova.scheduler.filter_scheduler [req-78abeb9f-f734-4be2-8ad5-7bac385ba507
0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] Choosing host WeighedHost [host:
compute.openstack, weight: 1272.0] for instance 04890bed-e7ca-4c9a-b630-b99012ca2991
2014-03-18 15:08:31.327 2599 INFO nova.openstack.common.rpc.impl_qpid [req-78abeb9f-f734-4be2-8ad5-7bac385ba507
0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] Connected to AMQP server on 192.168.1.3:5672

And here are the logs from my compute node:

2014-03-18 15:18:56.208 2498 INFO nova.compute.manager [req-006eb263-bb19-4765-a3f6-d0bb782a3b92 None None] Updating
bandwidth usage cache
2014-03-18 15:19:38.735 2498 AUDIT nova.compute.manager [req-bdae4f4e-cd18-4464-bd56-642e86e8c109
0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] [instance: 8f10e5af-6ffe-437b-af80-a6e1666e40d0]
Starting instance...
2014-03-18 15:19:39.326 2498 AUDIT nova.compute.claims [req-bdae4f4e-cd18-4464-bd56-642e86e8c109
0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] [instance: 8f10e5af-6ffe-437b-af80-a6e1666e40d0]
Attempting claim: memory 2048 MB, disk 20 GB, VCPUs 1
2014-03-18 15:19:39.327 2498 AUDIT nova.compute.claims [req-bdae4f4e-cd18-4464-bd56-642e86e8c109
0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] [instance: 8f10e5af-6ffe-437b-af80-a6e1666e40d0]
Total Memory: 129063 MB, used: 512.00 MB
2014-03-18 15:19:39.328 2498 AUDIT nova.compute.claims [req-bdae4f4e-cd18-4464-bd56-642e86e8c109
0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] [instance: 8f10e5af-6ffe-437b-af80-a6e1666e40d0]
Memory limit: 193594.50 MB, free: 193082.50 MB
2014-03-18 15:19:39.328 2498 AUDIT nova.compute.claims [req-bdae4f4e-cd18-4464-bd56-642e86e8c109
0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] [instance: 8f10e5af-6ffe-437b-af80-a6e1666e40d0]
Total Disk: 49 GB, used: 0.00 GB
2014-03-18 15:19:39.329 2498 AUDIT nova.compute.claims [req-bdae4f4e-cd18-4464-bd56-642e86e8c109
0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] [instance: 8f10e5af-6ffe-437b-af80-a6e1666e40d0]
Disk limit not specified, defaulting to unlimited
2014-03-18 15:19:39.329 2498 AUDIT nova.compute.claims [req-bdae4f4e-cd18-4464-bd56-642e86e8c109
0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] [instance: 8f10e5af-6ffe-437b-af80-a6e1666e40d0]
Total CPU: 8 VCPUs, used: 0.00 VCPUs
2014-03-18 15:19:39.330 2498 AUDIT nova.compute.claims [req-bdae4f4e-cd18-4464-bd56-642e86e8c109
0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] [instance: 8f10e5af-6ffe-437b-af80-a6e1666e40d0]
CPU limit: 128.00 VCPUs, free: 128.00 VCPUs
2014-03-18 15:19:39.330 2498 AUDIT nova.compute.claims [req-bdae4f4e-cd18-4464-bd56-642e86e8c109
0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] [instance: 8f10e5af-6ffe-437b-af80-a6e1666e40d0]
Claim successful
2014-03-18 15:19:40.275 2498 INFO nova.virt.libvirt.driver [req-bdae4f4e-cd18-4464-bd56-642e86e8c109
0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] [instance: 8f10e5af-6ffe-437b-af80-a6e1666e40d0]
Creating image
2014-03-18 15:19:56.313 2498 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2014-03-18 15:19:56.636 2498 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 126503
2014-03-18 15:19:56.636 2498 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 29
2014-03-18 15:19:56.637 2498 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 7
2014-03-18 15:19:56.768 2498 INFO nova.compute.resource_tracker [-] Compute_service record updated for compute.openstack:compute.openstack

So it seems that the instance gets stuck in the "Creating image" task...
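
To confirm where it hangs, the status and task state can be checked from the CLI (a sketch; the UUID is taken from the scheduler log above):

nova show 04890bed-e7ca-4c9a-b630-b99012ca2991 | grep -E 'status|vm_state|task_state'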

With debug logging enabled, here is what comes after "Creating image":

Creating image
2014-03-18 16:04:08.539 4569 DEBUG nova.openstack.common.processutils [req-51c73889-945e-4068-83f9-31f3f24e2642 0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f13c5a4412bfdf532ac5b4617bb705b25907db9b execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:147
2014-03-18 16:04:08.556 4569 DEBUG nova.openstack.common.lockutils [req-51c73889-945e-4068-83f9-31f3f24e2642 0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] Got semaphore "/var/lib/nova/instances/_base/f13c5a4412bfdf532ac5b4617bb705b25907db9b" lock /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:166
2014-03-18 16:04:08.557 4569 DEBUG nova.openstack.common.lockutils [req-51c73889-945e-4068-83f9-31f3f24e2642 0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] Attempting to grab file lock "/var/lib/nova/instances/_base/f13c5a4412bfdf532ac5b4617bb705b25907db9b" lock /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:176
2014-03-18 16:04:08.558 4569 DEBUG nova.openstack.common.lockutils [req-51c73889-945e-4068-83f9-31f3f24e2642 0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] Got file lock "/var/lib/nova/instances/_base/f13c5a4412bfdf532ac5b4617bb705b25907db9b" at /var/lib/nova/instances/locks/nova-_var_lib_nova_instances__base_f13c5a4412bfdf532ac5b4617bb705b25907db9b lock /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:204
2014-03-18 16:04:08.559 4569 DEBUG nova.openstack.common.lockutils [req-51c73889-945e-4068-83f9-31f3f24e2642 0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] Got semaphore / lock "copy_qcow2_image" inner /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:245
2014-03-18 16:04:08.559 4569 DEBUG nova.openstack.common.processutils [req-51c73889-945e-4068-83f9-31f3f24e2642 0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f13c5a4412bfdf532ac5b4617bb705b25907db9b execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:147
2014-03-18 16:04:08.572 4569 DEBUG nova.openstack.common.processutils [req-51c73889-945e-4068-83f9-31f3f24e2642 0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] Running cmd (subprocess): qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/f13c5a4412bfdf532ac5b4617bb705b25907db9b /var/lib/nova/instances/2ec91cee-e164-4c16-9dcd-3d079cc80fd9/disk execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:147
2014-03-18 16:04:08.620 4569 DEBUG nova.virt.disk.api [req-51c73889-945e-4068-83f9-31f3f24e2642 0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] Checking if we can resize image /var/lib/nova/instances/2ec91cee-e164-4c16-9dcd-3d079cc80fd9/disk. size=21474836480 can_resize_image /usr/lib/python2.6/site-packages/nova/virt/disk/api.py:161
2014-03-18 16:04:08.621 4569 DEBUG nova.openstack.common.processutils [req-51c73889-945e-4068-83f9-31f3f24e2642 0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2ec91cee-e164-4c16-9dcd-3d079cc80fd9/disk execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:147
2014-03-18 16:04:08.635 4569 DEBUG nova.openstack.common.processutils [req-51c73889-945e-4068-83f9-31f3f24e2642 0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] Running cmd (subprocess): qemu-img resize /var/lib/nova/instances/2ec91cee-e164-4c16-9dcd-3d079cc80fd9/disk 21474836480 execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:147
DEBUG nova.virt.disk.api [req-51c73889-945e-4068-83f9-31f3f24e2642 0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] Checking if we can resize filesystem inside /var/lib/nova/instances/2ec91cee-e164-4c16-9dcd-3d079cc80fd9/disk. CoW=True is_image_partitionless /usr/lib/python2.6/site-packages/nova/virt/disk/api.py:175
2014-03-18 16:04:08.653 4569 DEBUG nova.virt.disk.vfs.api [req-51c73889-945e-4068-83f9-31f3f24e2642 0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] Instance for image imgfile=/var/lib/nova/instances/2ec91cee-e164-4c16-9dcd-3d079cc80fd9/disk imgfmt=qcow2 partition=None instance_for_image /usr/lib/python2.6/site-packages/nova/virt/disk/vfs/api.py:31
2014-03-18 16:04:08.654 4569 DEBUG nova.virt.disk.vfs.api [req-51c73889-945e-4068-83f9-31f3f24e2642 0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] Trying to import guestfs instance_for_image /usr/lib/python2.6/site-packages/nova/virt/disk/vfs/api.py:34
2014-03-18 16:04:08.663 4569 DEBUG nova.virt.disk.vfs.api [req-51c73889-945e-4068-83f9-31f3f24e2642 0884e76472074cab875b2fd5b1cef892 c9ae596af2684b298a899938e20e32b0] Using primary VFSGuestFS instance_for_image /usr/lib/python2.6/site-packages/nova/virt/disk/vfs/api.py:41
DEBUG nova.network.neutronv2.api [-] [instance: 2ec91cee-e164-4c16-9dcd-3d079cc80fd9] Successfully created port: 3a03093b-1f33-43bd-b0be-aeaa6599aa5e _create_port /usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py:184
2014-03-18 16:04:09.026 4569 DEBUG nova.network.neutronv2.api [-] get_instance_nw_info() for rh _get_instance_nw_info /usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py:464

In the instance's directory, I just have console.log and disk:
ls /var/lib/nova/instances/2ec91cee-e164-4c16-9dcd-3d079cc80fd9/
console.log disk
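
The backing chain of that disk can also be inspected directly (paths taken from the debug log above):

qemu-img info /var/lib/nova/instances/2ec91cee-e164-4c16-9dcd-3d079cc80fd9/disk
qemu-img info /var/lib/nova/instances/_base/f13c5a4412bfdf532ac5b4617bb705b25907db9b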

Moreover, it seems that the instance gets stuck in the SPAWNING state because of this command:

/usr/libexec/qemu-kvm -global virtio-blk-pci.scsi=off -nodefconfig -nodefaults -nographic -machine accel=kvm:tcg -cpu host,+kvmclock -m 500 -no-reboot -kernel /var/tmp/.guestfs-162/kernel.13189 -initrd /var/tmp/.guestfs-162/initrd.13189 -device virtio-scsi-pci,id=scsi -drive file=/var/lib/nova/instances/c61bb918-cd6f-4973-be52-cad1264acec4/disk,cache=none,format=qcow2,id=hd0,if=none -device scsi-hd,drive=hd0 -drive file=/var/tmp/.guestfs-162/root.13189,snapshot=on,id=appliance,if=none,cache=unsafe -device scsi-hd,drive=appliance -device virtio-serial -serial stdio -device sga -chardev socket,path=/tmp/libguestfstb1xv2/guestfsd.sock,id=channel0 -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 -append panic=1 console=ttyS0 udevtimeout=600 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 TERM=xterm

I am not sure exactly what this command is doing, but if I kill it, the instance goes to ACTIVE status (even though it is not actually working). This command is launched by Nova.
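
From the paths (/var/tmp/.guestfs-*, guestfsd.sock), this appears to be the libguestfs appliance that Nova starts for file injection into the image. Assuming that is the case, the appliance could be exercised outside of Nova on the compute node (a sketch, assuming libguestfs-test-tool is installed):

libguestfs-test-tool

If that hangs in the same way, a possible workaround might be to disable file injection in /etc/nova/nova.conf and restart the compute service (option names as I understand them for this Havana-based release, to be double-checked):

libvirt_inject_partition = -2
libvirt_inject_password = false
libvirt_inject_key = false

service openstack-nova-compute restart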

If you have any ideas that could help me, I would be grateful.

Thank you,
