Unable to create or delete Nova instances
Issue
- After restarting the controller, instances fail to build: they either get stuck in the Build state or fail with the following error:
Error: Block Device Mapping is Invalid: Boot sequence for the instance and
image/block device mapping combination is not valid. (HTTP 400) (Request-ID:
req-<uuid>)
- Instances cannot be deleted; they remain stuck in the deleting state.
- Compute node(s) have lost their connection to libvirtd:
[root@node0 tmp(keystone_linux_prod)]# nova service-list
+----+------------------+------------+----------+----------+-------+----------------------------+-------------------------------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+------------+----------+----------+-------+----------------------------+-------------------------------------+
| 1 | nova-consoleauth | node0 | internal | enabled | up | 2015-10-13T19:30:43.000000 | - |
| 2 | nova-scheduler | node0 | internal | enabled | up | 2015-10-13T19:30:52.000000 | - |
| 4 | nova-conductor | node0 | internal | enabled | up | 2015-10-13T19:30:51.000000 | - |
| 5 | nova-cert | node0 | internal | enabled | up | 2015-10-13T19:30:53.000000 | - |
| 6 | nova-compute | node1 | nova | enabled | up | 2015-10-13T19:30:51.000000 | None |
| 7 | nova-compute | node2 | nova | disabled | up | 2015-10-13T19:30:45.000000 | AUTO: Connection to libvirt lost: 1 |
| 8 | nova-compute | node3 | nova | enabled | up | 2015-10-13T19:30:51.000000 | None |
| 9 | nova-compute | node4 | nova | enabled | up | 2015-10-13T19:30:47.000000 | None |
| 10 | nova-compute | node5 | nova | enabled | up | 2015-10-13T19:30:51.000000 | None |
| 11 | nova-compute | node6 | nova | disabled | up | 2015-10-13T19:30:45.000000 | AUTO: Connection to libvirt lost: 1 |
| 12 | nova-compute | node7 | nova | enabled | up | 2015-10-13T19:30:44.000000 | None |
| 13 | nova-compute | node8 | nova | enabled | up | 2015-10-13T19:30:50.000000 | None |
+----+------------------+------------+----------+----------+-------+----------------------------+-------------------------------------+
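Compute nodes flagged "AUTO: Connection to libvirt lost" typically need libvirtd and nova-compute restarted before the scheduler will place instances on them again. A minimal sketch, written as a dry-run helper that only prints the commands so they can be reviewed first (the host names and the openstack-nova-compute unit name are RHEL 7 / OSP 6 defaults and are assumptions here, not taken from this article):

```shell
# Print (do not execute) the recovery steps for one auto-disabled compute node.
recover_compute() {
    host=$1
    echo "ssh $host systemctl restart libvirtd"
    echo "ssh $host systemctl restart openstack-nova-compute"
    # An AUTO-disabled service may re-enable itself once libvirt reconnects;
    # enabling it manually is harmless if it does not.
    echo "nova service-enable $host nova-compute"
}

recover_compute node2
recover_compute node6
```

Run the printed commands only after confirming they match your environment.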
- The cinder-volume service for the NFS backend is down on the controller:
[root@node0 tmp(keystone_linux_prod)]# cinder service-list
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | node0 | nova | enabled | up | 2015-10-13T19:31:38.000000 | None |
| cinder-volume | node0@nfs | nova | enabled | down | 2015-07-16T04:31:34.000000 | None |
| cinder-volume | node0@rbd | nova | enabled | up | 2015-10-13T19:31:40.000000 | None |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
- Errors are seen in the cinder volume log:
2015-10-12 11:01:56.031 10969 ERROR cinder.volume.flows.manager.create_volume [req-<uuid> <uuid> <uuid> - - -] Failed to copy image <uuid> to volume: <uuid>, error: [Errno 28] No space left on device
2015-10-12 11:01:56.035 10969 WARNING cinder.volume.manager [req-<uuid> <uuid> <uuid> - - -] Task 'cinder.volume.flows.manager.create_volume.CreateVolumeFromSpecTask;volume:create' (<uuid>) transitioned into state 'FAILURE'
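The `[Errno 28] No space left on device` error means the filesystem cinder-volume writes to while copying the image has run out of space, which is not necessarily the target volume itself. A small sketch for checking how full the relevant mounts are (the example paths are assumptions; substitute the directories your cinder.conf actually uses, e.g. image_conversion_dir and the NFS mount base):

```shell
# Report how full the filesystem backing a directory is, and fail the check
# when usage is at or above the given percentage threshold.
check_space() {
    dir=$1
    threshold=$2
    used=$(df -P "$dir" | awk 'NR==2 { gsub("%", "", $5); print $5 }')
    echo "$dir is ${used}% full"
    [ "$used" -lt "$threshold" ]
}

# Example usage (paths are assumptions -- check cinder.conf):
#   check_space /var/lib/cinder 90
#   check_space /var/lib/cinder/mnt 90
```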
Environment
- Red Hat OpenStack Platform 6.0
