Overcloud deployment failed for "No valid host was found...code:500"

Hi,
I tried to deploy my OSP9 overcloud in a nested virtual environment, meaning the controller, compute, and ceph-storage nodes are all VMs, as is the undercloud node. In total there are 10 VMs running on my VM host. I successfully installed the director on one of the VMs (called OSP-D below). When I try to deploy the overcloud from OSP-D, I always get the error "Message: No valid host was found. There are not enough hosts available., Code: 500". Introspection was successful, and according to "nova hypervisor-stats" the resources look fine before the deployment. What could be the problem?

Please find nova-api.log and nova-conductor.log attached. Thanks!

[root@redhatvmhost scripts]# virsh list --all

Id Name State

41 OSP-D running
220 overcloud-node1 running
221 overcloud-node2 running
222 overcloud-node3 running
223 overcloud-node4 running
224 overcloud-node5 running
225 overcloud-node6 running
226 overcloud-node7 running
227 overcloud-node8 running
228 overcloud-node9 running

[stack@osp-d ~]$ ironic node-list
+--------------------------------------+-----------------+--------------------------------------+-------------+--------------------+-------------+
| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-----------------+--------------------------------------+-------------+--------------------+-------------+
| 4bbcd3c1-3c6c-4826-8e72-52a9b71258d6 | overcloud-node1 | fc677759-d2ff-4419-974c-f7e6f27a8176 | power on | available | False |
| 8989f7c1-5012-43c3-a653-1e5f1ff7fe18 | overcloud-node2 | 7d8a6365-f6ac-4fb1-85b6-e08f65b84dd5 | power on | available | False |
| 14aef095-12ca-41de-93c4-668428f923fd | overcloud-node3 | 89c74b76-ffec-4f51-a411-d215a6dd5111 | power on | available | False |
| 0078f452-9f5f-4c79-8885-10e1169c6060 | overcloud-node4 | 75d86d28-f237-42c7-8bed-82adc6cf8f78 | power on | available | False |
| 4772d9b4-099d-48f0-9f8f-8dd39d21e8a3 | overcloud-node5 | 75637932-ba6a-4e6f-9ff0-571f2a27807a | power on | available | False |
| 853c7540-cf6e-4bd9-b065-9aba340af5cf | overcloud-node6 | 3f796335-5c41-44b4-a8f8-db5a9fde0cdf | power on | available | False |
| 62182238-ae17-4bd2-95f9-40a5e9237045 | overcloud-node7 | 65c4f34e-8a03-4731-9d80-ed6a0d91ea8a | power on | available | False |
| 96744a84-5710-4a6f-a53e-ac628a459c55 | overcloud-node8 | 0f75c8aa-a885-48f3-b7f3-324e77acac4a | power on | available | False |
| 12029189-7f79-401e-9634-93d8b5a5950b | overcloud-node9 | c818667e-5dcb-4a50-a362-81f4a23d6efe | power on | available | False |
+--------------------------------------+-----------------+--------------------------------------+-------------+--------------------+-------------+
[stack@osp-d ~]$ openstack stack list
+--------------------------------------+------------+---------------+---------------------+--------------+
| ID | Stack Name | Stack Status | Creation Time | Updated Time |
+--------------------------------------+------------+---------------+---------------------+--------------+
| 8e95fa28-8e97-47e8-97c6-3ba46315912a | overcloud | CREATE_FAILED | 2016-11-25T10:25:30 | None |
+--------------------------------------+------------+---------------+---------------------+--------------+

Deployment logs

2016-11-25 10:27:35 [NovaCompute]: CREATE_FAILED ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:36 [Controller]: CREATE_FAILED ResourceInError: resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:36 [2]: CREATE_FAILED ResourceInError: resources[2].resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:36 [NovaCompute]: CREATE_FAILED ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:36 [NovaCompute]: CREATE_FAILED ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:36 [overcloud-Compute-eamlrfwlozpy-2-gxh6ct45rrnn]: CREATE_FAILED Resource CREATE failed: ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:37 [overcloud-Controller-sm4p5ipb4nqm-1-6wsyzem6omd2]: CREATE_FAILED Resource CREATE failed: ResourceInError: resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:37 [1]: CREATE_FAILED ResourceInError: resources[1].resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:37 [overcloud-Compute-eamlrfwlozpy-1-yuewpgvn4qzo]: CREATE_FAILED Resource CREATE failed: ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:37 [overcloud-Compute-eamlrfwlozpy-0-omrvilffysoh]: CREATE_FAILED Resource CREATE failed: ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:38 [1]: CREATE_FAILED ResourceInError: resources[1].resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:38 [0]: CREATE_FAILED ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:39 [Controller]: CREATE_IN_PROGRESS state changed
2016-11-25 10:27:39 [Controller]: CREATE_IN_PROGRESS state changed
2016-11-25 10:27:39 [overcloud-Compute-eamlrfwlozpy]: CREATE_FAILED Resource CREATE failed: ResourceInError: resources[2].resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:40 [Compute]: CREATE_FAILED ResourceInError: resources.Compute.resources[2].resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:41 [Controller]: CREATE_FAILED ResourceInError: resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:41 [Controller]: CREATE_FAILED ResourceInError: resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:42 [overcloud-Controller-sm4p5ipb4nqm-0-lgjkqxoadfeu]: CREATE_FAILED Resource CREATE failed: ResourceInError: resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:42 [overcloud-Controller-sm4p5ipb4nqm-2-vlrjf4xfkrht]: CREATE_FAILED Resource CREATE failed: ResourceInError: resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:43 [0]: CREATE_FAILED ResourceInError: resources[0].resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:44 [2]: CREATE_FAILED ResourceInError: resources[2].resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:45 [Controller]: CREATE_FAILED ResourceInError: resources.Controller.resources[1].resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
2016-11-25 10:27:45 [overcloud-Controller-sm4p5ipb4nqm]: CREATE_FAILED Resource CREATE failed: ResourceInError: resources[1].resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
Stack overcloud CREATE_FAILED
Deployment failed: Heat Stack create failed.

Nova-conductor.log

2016-11-25 11:27:30.104 28499 WARNING nova.scheduler.utils [req-dd8efd32-4319-4a80-a3ad-31fac99c35b6 8449634a091b4d2f98151f93b15ce30c 961d8f774bdd4a8bad023c95dd37cc1d - - -] [instance: 6f7a50e7-d7a5-414c-abc6-211fd78ae363] Setting instance to ERROR state.
2016-11-25 11:27:30.148 28499 DEBUG nova.objects.instance [req-dd8efd32-4319-4a80-a3ad-31fac99c35b6 8449634a091b4d2f98151f93b15ce30c 961d8f774bdd4a8bad023c95dd37cc1d - - -] Lazy-loading 'metadata' on Instance uuid 6f7a50e7-d7a5-414c-abc6-211fd78ae363 obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:895
2016-11-25 11:27:30.180 28499 DEBUG nova.objects.instance [req-dd8efd32-4319-4a80-a3ad-31fac99c35b6 8449634a091b4d2f98151f93b15ce30c 961d8f774bdd4a8bad023c95dd37cc1d - - -] Lazy-loading 'info_cache' on Instance uuid 6f7a50e7-d7a5-414c-abc6-211fd78ae363 obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:895
2016-11-25 11:27:30.217 28499 DEBUG nova.network.neutronv2.api [req-dd8efd32-4319-4a80-a3ad-31fac99c35b6 8449634a091b4d2f98151f93b15ce30c 961d8f774bdd4a8bad023c95dd37cc1d - - -] [instance: 6f7a50e7-d7a5-414c-abc6-211fd78ae363] deallocate_for_instance() deallocate_for_instance /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:862
2016-11-25 11:27:30.218 28499 DEBUG keystoneauth.session [req-dd8efd32-4319-4a80-a3ad-31fac99c35b6 8449634a091b4d2f98151f93b15ce30c 961d8f774bdd4a8bad023c95dd37cc1d - - -] REQ: curl -g -i -X GET http://192.0.2.1:9696/v2.0/ports.json?device_id=6f7a50e7-d7a5-414c-abc6-211fd78ae363 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}5e0b250b3cdf1389c161d41255eca84ebb746c83" _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:248
2016-11-25 11:27:30.238 28499 DEBUG keystoneauth.session [req-dd8efd32-4319-4a80-a3ad-31fac99c35b6 8449634a091b4d2f98151f93b15ce30c 961d8f774bdd4a8bad023c95dd37cc1d - - -] RESP: [200] Content-Type: application/json; charset=UTF-8 Content-Length: 13 X-Openstack-Request-Id: req-0a78270f-da77-43c4-9a46-82dcd53764af Date: Fri, 25 Nov 2016 10:27:30 GMT Connection: keep-alive
RESP BODY: {"ports": []}
_http_log_response /usr/lib/python2.7/site-packages/keystoneauth1/session.py:277
2016-11-25 11:27:30.239 28499 DEBUG nova.network.neutronv2.api [req-dd8efd32-4319-4a80-a3ad-31fac99c35b6 8449634a091b4d2f98151f93b15ce30c 961d8f774bdd4a8bad023c95dd37cc1d - - -] [instance: 6f7a50e7-d7a5-414c-abc6-211fd78ae363] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1720
2016-11-25 11:27:30.239 28499 DEBUG nova.network.base_api [req-dd8efd32-4319-4a80-a3ad-31fac99c35b6 8449634a091b4d2f98151f93b15ce30c 961d8f774bdd4a8bad023c95dd37cc1d - - -] [instance: 6f7a50e7-d7a5-414c-abc6-211fd78ae363] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:43
2016-11-25 11:27:34.312 28503 WARNING nova.scheduler.utils [req-c9135a9a-c01f-41e1-8247-8f07925551e3 8449634a091b4d2f98151f93b15ce30c 961d8f774bdd4a8bad023c95dd37cc1d - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):

File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 150, in inner
return func(*args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 104, in select_destinations
dests = self.driver.select_destinations(ctxt, spec_obj)

File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 74, in select_destinations
raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. There are not enough hosts available.

Responses

Hi Guoliang,

The "no valid host was found" errors are usually from a delta of ironic metadata and the intended overcloud deploy command. Ie a mismatched tag of the ironic node, incorrect introspection data, and/or specifying more nodes than available (according to ironic) in the deploy.

We publish a guide to understanding these errors here: https://access.redhat.com/solutions/2623661. I would first double-check the ironic nodes' state and their tagging details.
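On the tagging side, the scheduler matches the `profile:` entry inside each node's `properties/capabilities` string (the value `ironic node-show <uuid>` reports). As a self-contained sketch of what that string looks like when a node is correctly tagged for the compute role (the capabilities value below is illustrative, not taken from your environment):

```shell
# Illustrative properties/capabilities value for a node tagged as compute;
# the boot_option entry normally sits alongside the profile entry.
caps='profile:compute,boot_option:local'

# Extract the profile the scheduler will compare against the flavor's
# capabilities:profile extra spec.
profile=$(printf '%s\n' "$caps" | tr ',' '\n' | sed -n 's/^profile://p')
echo "tagged profile: $profile"
```

If the profile extracted this way doesn't match the flavor the role uses, that node is filtered out during scheduling.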

I see you've got nova debug logging enabled; you may want to do the same for ironic. That should add more human-readable detail to the logs and might provide some insight. Details on that are here: https://access.redhat.com/solutions/1391343

You can tail either of these log files during the deployment to see where it's failing (e.g. sudo journalctl -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq -f).

Another thing you can do to troubleshoot is to watch the overcloud nodes' consoles during the deploy process to see where it might be failing. Each node should successfully PXE boot, boot the ramdisk, copy the overcloud-full image to disk, reboot from that disk, and finally provision itself with os-collect-config.

If it fails to PXE boot, or get an IP address via DHCP, we have some docs that may help you troubleshoot that process: https://access.redhat.com/articles/2266971

If the deploy is failing during the ramdisk step (just after a successful PXE boot), you can set the ramdisk_logs_dir and always_store_ramdisk_logs = true parameters in /etc/ironic-inspector/inspector.conf to store the ramdisk logs on the undercloud for further analysis.
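For reference, those two options live in the [processing] section of inspector.conf; a minimal sketch (the log directory path here is an assumption, any directory writable by ironic-inspector works):

```ini
[processing]
# Assumed path; pick any directory the ironic-inspector service can write to
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
always_store_ramdisk_logs = true
```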

If you're still stuck at this point, please open a support case with us and we can help take a look.

Thank you! -Andrew

Hi Andrew,

Thanks for your reply. I tried a simpler deployment with only two ironic nodes, one control and one compute, and got the same result. Following your suggestion, I analyzed the logs and found a strange error: it says 'compute' does not match 'control' and is then followed by "0 hosts available...". What does it mean? Is this the main cause?

Another thing I don't understand: you can see that the resources (memory and disk) are sufficient for the expected flavors before the deployment, so why do the resources drop to 0 in MariaDB and the hypervisor stats after the deployment fails?

I also watched the console of the overcloud node and found that PXE boot hadn't even started yet.

Some nova logs:

2016-11-29 15:02:26.907 28466 DEBUG nova.scheduler.filters.compute_capabilities_filter [req-8ea696e8-9db5-4b50-bbac-837fb2d0f6ef 8449634a091b4d2f98151f93b15ce30c 961d8f774bdd4a8bad023c95dd37cc1d - - -] (osp-d.duacsdn.sero.gic.ericsson.se, 4d9bb9fd-b6ae-483b-bff9-8092caedc41e) ram: 16384MB disk: 43008MB io_ops: 0 instances: 0 fails extra_spec requirements. 'compute' does not match 'control' _satisfies_extra_specs /usr/lib/python2.7/site-packages/nova/scheduler/filters/compute_capabilities_filter.py:91
2016-11-29 15:02:29.714 28466 INFO nova.filters [req-8ea696e8-9db5-4b50-bbac-837fb2d0f6ef 8449634a091b4d2f98151f93b15ce30c 961d8f774bdd4a8bad023c95dd37cc1d - - -] Filter ComputeCapabilitiesFilter returned 0 hosts
2016-11-29 15:02:29.714 28466 DEBUG nova.filters [req-8ea696e8-9db5-4b50-bbac-837fb2d0f6ef 8449634a091b4d2f98151f93b15ce30c 961d8f774bdd4a8bad023c95dd37cc1d - - -] Filtering removed all hosts for the request with instance ID '7bd9d3e9-abee-4962-acf3-fb05e4fabd97'. Filter results: [('RetryFilter', [(u'osp-d.duacsdn.sero.gic.ericsson.se', u'4d9bb9fd-b6ae-483b-bff9-8092caedc41e')]), ('TripleOCapabilitiesFilter', [(u'osp-d.duacsdn.sero.gic.ericsson.se', u'4d9bb9fd-b6ae-483b-bff9-8092caedc41e')]), ('ComputeCapabilitiesFilter', None)] get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:129
2016-11-29 15:02:29.714 28466 INFO nova.filters [req-8ea696e8-9db5-4b50-bbac-837fb2d0f6ef 8449634a091b4d2f98151f93b15ce30c 961d8f774bdd4a8bad023c95dd37cc1d - - -] Filtering removed all hosts for the request with instance ID '7bd9d3e9-abee-4962-acf3-fb05e4fabd97'. Filter results: ['RetryFilter: (start: 2, end: 1)', 'TripleOCapabilitiesFilter: (start: 1, end: 1)', 'ComputeCapabilitiesFilter: (start: 1, end: 0)']
2016-11-29 15:02:29.714 28466 DEBUG nova.scheduler.filter_scheduler [req-8ea696e8-9db5-4b50-bbac-837fb2d0f6ef 8449634a091b4d2f98151f93b15ce30c 961d8f774bdd4a8bad023c95dd37cc1d - - -] There are 0 hosts available but 1 instances requested to build. select_destinations /usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py:71
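That 'compute' does not match 'control' line comes from the ComputeCapabilitiesFilter comparing the compute flavor's capabilities:profile extra spec against the profile tag on the one remaining candidate node, which here is tagged control. A self-contained sketch of that comparison, with the values taken from the log and profile list in this thread:

```shell
# Flavor side: the compute role's flavor carries a capabilities:profile
# extra spec of 'compute'.
flavor_profile='compute'
# Node side: overcloud-node1's capabilities string tags it as control.
node_caps='profile:control,boot_option:local'

# Extract the node's profile and compare, as the filter effectively does.
node_profile=$(printf '%s\n' "$node_caps" | tr ',' '\n' | sed -n 's/^profile://p')
if [ "$flavor_profile" = "$node_profile" ]; then
  echo "host passes ComputeCapabilitiesFilter"
else
  echo "'$flavor_profile' does not match '$node_profile'"
fi
```

So the mismatch itself is expected behavior; it only becomes "0 hosts available" when no compute-tagged node is left for the scheduler to pick.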

[stack@osp-d scripts]$ openstack overcloud profiles list
+--------------------------------------+-----------------+-----------------+-----------------+-------------------+
| Node UUID                            | Node Name       | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+-----------------+-----------------+-----------------+-------------------+
| 4d9bb9fd-b6ae-483b-bff9-8092caedc41e | overcloud-node1 | available       | control         |                   |
| ee258d4c-62db-471f-8f30-e37329992ad0 | overcloud-node2 | available       | compute         |                   |
+--------------------------------------+-----------------+-----------------+-----------------+-------------------+

MariaDB [nova]> SELECT hypervisor_hostname, memory_mb, memory_mb_used, free_ram_mb,local_gb,local_gb_used,cpu_info,vcpus FROM compute_nodes;
+--------------------------------------+-----------+----------------+-------------+----------+---------------+----------+-------+
| hypervisor_hostname                  | memory_mb | memory_mb_used | free_ram_mb | local_gb | local_gb_used | cpu_info | vcpus |
+--------------------------------------+-----------+----------------+-------------+----------+---------------+----------+-------+
| 70991d53-7c20-45d5-ae4a-5f982abf4517 |         0 |              0 |           0 |        0 |             0 |          |     0 |
| ebd12552-e98a-496e-8916-08ee272b0df5 |         0 |              0 |           0 |        0 |             0 |          |     0 |
| 6d33384a-0059-4ec5-b8b4-9607db81c418 |         0 |              0 |           0 |        0 |             0 |          |     0 |
| 5be95ce6-1cfd-4930-8153-9ac5196490f9 |         0 |              0 |           0 |        0 |             0 |          |     0 |
| a631dabd-4745-43ae-9505-849e9780f7c8 |         0 |              0 |           0 |        0 |             0 |          |     0 |
| 2611281c-ed8b-4d63-9bbb-c2308879d3fb |         0 |              0 |           0 |        0 |             0 |          |     0 |
| 9085fd08-c48e-4c75-a1f1-4f16f3ad2311 |         0 |              0 |           0 |        0 |             0 |          |     0 |
| 32027530-e738-4a1c-b68a-bdfd6d762e54 |         0 |              0 |           0 |        0 |             0 |          |     0 |
| 2314f02e-dc4a-40a8-aff1-8c97e3c6d160 |         0 |              0 |           0 |        0 |             0 |          |     0 |
| 0078f452-9f5f-4c79-8885-10e1169c6060 |      6144 |              0 |        6144 |       42 |             0 |          |     4 |
| 853c7540-cf6e-4bd9-b065-9aba340af5cf |      6144 |              0 |        6144 |       42 |             0 |          |     4 |
| 62182238-ae17-4bd2-95f9-40a5e9237045 |      6144 |              0 |        6144 |       59 |             0 |          |     4 |
| 4bbcd3c1-3c6c-4826-8e72-52a9b71258d6 |     16384 |              0 |       16384 |       42 |             0 |          |     4 |
| 12029189-7f79-401e-9634-93d8b5a5950b |      6144 |              0 |        6144 |       59 |             0 |          |     4 |
| 96744a84-5710-4a6f-a53e-ac628a459c55 |      6144 |              0 |        6144 |       59 |             0 |          |     4 |
| 8989f7c1-5012-43c3-a653-1e5f1ff7fe18 |     16384 |              0 |       16384 |       42 |             0 |          |     4 |
| 4772d9b4-099d-48f0-9f8f-8dd39d21e8a3 |      6144 |              0 |        6144 |       42 |             0 |          |     4 |
| 14aef095-12ca-41de-93c4-668428f923fd |     16384 |              0 |       16384 |       42 |             0 |          |     4 |
| fb80b110-7963-4f3c-af46-0ee29361c861 |     16384 |              0 |       16384 |       40 |             0 |          |     4 |
| 2bec22b0-0db0-4a45-a692-97b5b92fd976 |     16384 |              0 |       16384 |       40 |             0 |          |     4 |
| 4d9bb9fd-b6ae-483b-bff9-8092caedc41e |     16384 |              0 |       16384 |       42 |             0 |          |     4 |
| ee258d4c-62db-471f-8f30-e37329992ad0 |     16384 |              0 |       16384 |       42 |             0 |          |     4 |
+--------------------------------------+-----------+----------------+-------------+----------+---------------+----------+-------+

[stack@osp-d scripts]$ nova hypervisor-stats
+----------------------+-------+
| Property             | Value |
+----------------------+-------+
| count                | 2     |
| current_workload     | 0     |
| disk_available_least | 84    |
| free_disk_gb         | 84    |
| free_ram_mb          | 32768 |
| local_gb             | 84    |
| local_gb_used        | 0     |
| memory_mb            | 32768 |
| memory_mb_used       | 0     |
| running_vms          | 0     |
| vcpus                | 8     |
| vcpus_used           | 0     |
+----------------------+-------+

[stack@osp-d scripts]$ openstack flavor list
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
| ID                                   | Name          |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
| 2632dce5-89e5-4ce6-9eb1-7e862e215d81 | swift-storage | 4096 |   40 |         0 |     1 | True      |
| 3cfd30e5-5d98-49b0-8f54-215113194ca3 | control       | 4096 |   40 |         0 |     1 | True      |
| 897072b9-65e2-403a-95c3-f69400751050 | ceph-storage  | 4096 |   40 |         0 |     1 | True      |
| 8db4afdf-44ba-4d64-91c5-1e9a0a470b24 | block-storage | 4096 |   40 |         0 |     1 | True      |
| c660c6fa-493d-4d4f-94ef-20bd1156a7d5 | baremetal     | 4096 |   40 |         0 |     1 | True      |
| ddc35bb7-546f-4b6b-bf02-aee2d91f444e | compute       | 4096 |   40 |         0 |     1 | True      |
+--------------------------------------+---------------+------+------+-----------+-------+-----------+

After the deployment failed

MariaDB [nova]> SELECT hypervisor_hostname, memory_mb, memory_mb_used, free_ram_mb,local_gb,local_gb_used,cpu_info,vcpus FROM compute_nodes;
+--------------------------------------+-----------+----------------+-------------+----------+---------------+----------+-------+
| hypervisor_hostname                  | memory_mb | memory_mb_used | free_ram_mb | local_gb | local_gb_used | cpu_info | vcpus |
+--------------------------------------+-----------+----------------+-------------+----------+---------------+----------+-------+
| 70991d53-7c20-45d5-ae4a-5f982abf4517 |         0 |              0 |           0 |        0 |             0 |          |     0 |
| ebd12552-e98a-496e-8916-08ee272b0df5 |         0 |              0 |           0 |        0 |             0 |          |     0 |
| 6d33384a-0059-4ec5-b8b4-9607db81c418 |         0 |              0 |           0 |        0 |             0 |          |     0 |
| 5be95ce6-1cfd-4930-8153-9ac5196490f9 |         0 |              0 |           0 |        0 |             0 |          |     0 |
| a631dabd-4745-43ae-9505-849e9780f7c8 |         0 |              0 |           0 |        0 |             0 |          |     0 |
| 2611281c-ed8b-4d63-9bbb-c2308879d3fb |         0 |              0 |           0 |        0 |             0 |          |     0 |
| 9085fd08-c48e-4c75-a1f1-4f16f3ad2311 |         0 |              0 |           0 |        0 |             0 |          |     0 |
| 32027530-e738-4a1c-b68a-bdfd6d762e54 |         0 |              0 |           0 |        0 |             0 |          |     0 |
| 2314f02e-dc4a-40a8-aff1-8c97e3c6d160 |         0 |              0 |           0 |        0 |             0 |          |     0 |
| 0078f452-9f5f-4c79-8885-10e1169c6060 |      6144 |              0 |        6144 |       42 |             0 |          |     4 |
| 853c7540-cf6e-4bd9-b065-9aba340af5cf |      6144 |              0 |        6144 |       42 |             0 |          |     4 |
| 62182238-ae17-4bd2-95f9-40a5e9237045 |      6144 |              0 |        6144 |       59 |             0 |          |     4 |
| 4bbcd3c1-3c6c-4826-8e72-52a9b71258d6 |     16384 |              0 |       16384 |       42 |             0 |          |     4 |
| 12029189-7f79-401e-9634-93d8b5a5950b |      6144 |              0 |        6144 |       59 |             0 |          |     4 |
| 96744a84-5710-4a6f-a53e-ac628a459c55 |      6144 |              0 |        6144 |       59 |             0 |          |     4 |
| 8989f7c1-5012-43c3-a653-1e5f1ff7fe18 |     16384 |              0 |       16384 |       42 |             0 |          |     4 |
| 4772d9b4-099d-48f0-9f8f-8dd39d21e8a3 |      6144 |              0 |        6144 |       42 |             0 |          |     4 |
| 14aef095-12ca-41de-93c4-668428f923fd |     16384 |              0 |       16384 |       42 |             0 |          |     4 |
| fb80b110-7963-4f3c-af46-0ee29361c861 |     16384 |              0 |       16384 |       40 |             0 |          |     4 |
| 2bec22b0-0db0-4a45-a692-97b5b92fd976 |     16384 |              0 |       16384 |       40 |             0 |          |     4 |
| 4d9bb9fd-b6ae-483b-bff9-8092caedc41e |         0 |              0 |           0 |        0 |             0 |          |     0 |
| ee258d4c-62db-471f-8f30-e37329992ad0 |         0 |              0 |           0 |        0 |             0 |          |     0 |
+--------------------------------------+-----------+----------------+-------------+----------+---------------+----------+-------+

[stack@osp-d scripts]$ nova hypervisor-stats
+----------------------+-------+
| Property             | Value |
+----------------------+-------+
| count                | 2     |
| current_workload     | 0     |
| disk_available_least | -84   |
| free_disk_gb         | 0     |
| free_ram_mb          | 0     |
| local_gb             | 0     |
| local_gb_used        | 0     |
| memory_mb            | 0     |
| memory_mb_used       | 0     |
| running_vms          | 0     |
| vcpus                | 0     |
| vcpus_used           | 0     |
+----------------------+-------+

Hi,

After checking the neutron logs, it seems that the root cause is that the neutron-openvswitch-agent is down, so the port for the provisioning network cannot be bound. openstack-status on the director shows the neutron-openvswitch-agent as inactive, as below. Any suggestions?

[stack@osp-d ~]$ openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-compute:                 active
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-cert:                    active
openstack-nova-conductor:               active
openstack-nova-cells:                   inactive  (disabled on boot)
openstack-nova-console:                 inactive  (disabled on boot)
openstack-nova-consoleauth:             inactive  (disabled on boot)
openstack-nova-xvpvncproxy:             inactive  (disabled on boot)
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     inactive  (disabled on boot)
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       inactive  (disabled on boot)
neutron-metadata-agent:                 inactive  (disabled on boot)
neutron-openvswitch-agent:              inactive
== Swift services ==
openstack-swift-proxy:                  active
openstack-swift-account:                active
openstack-swift-container:              active
openstack-swift-object:                 active

Neutron logs

2016-11-29 15:02:27.825 11322 DEBUG neutron.plugins.ml2.drivers.mech_agent [req-4c82d595-02c6-4f5d-b732-982de8d89a29 aa95827aa6c84ddfb4884bb4aa489519 10c7a3971f6647e5b86ad0e7fd59534a - - -] Checking agent: {'binary': u'neutron-openvswitch-agent', 'description': None, 'admin_state_up': True, 'heartbeat_timestamp': datetime.datetime(2016, 11, 9, 21, 1, 25), 'availability_zone': None, 'alive': False, 'topic': u'N/A', 'host': u'osp-d.duacsdn.sero.gic.ericsson.se', 'agent_type': u'Open vSwitch agent', 'resource_versions': {u'QosPolicy': u'1.0'}, 'created_at': datetime.datetime(2016, 11, 1, 10, 41, 22), 'started_at': datetime.datetime(2016, 11, 1, 14, 34, 57), 'id': u'b3f77775-71ea-48e8-8e09-87faed848872', 'configurations': {u'in_distributed_mode': False, u'datapath_type': u'system', u'vhostuser_socket_dir': u'/var/run/openvswitch', u'tunneling_ip': u'192.0.2.1', u'arp_responder_enabled': False, u'devices': 0, u'ovs_capabilities': {u'datapath_types': [u'netdev', u'system'], u'iface_types': [u'geneve', u'gre', u'gre64', u'internal', u'ipsec_gre', u'ipsec_gre64', u'lisp', u'patch', u'stt', u'system', u'tap', u'vxlan']}, u'log_agent_heartbeats': False, u'l2_population': False, u'tunnel_types': [], u'extensions': [], u'enable_distributed_routing': False, u'bridge_mappings': {u'ctlplane': u'br-ctlplane'}}} bind_port /usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/mech_agent.py:75
2016-11-29 15:02:27.825 11322 WARNING neutron.plugins.ml2.drivers.mech_agent [req-4c82d595-02c6-4f5d-b732-982de8d89a29 aa95827aa6c84ddfb4884bb4aa489519 10c7a3971f6647e5b86ad0e7fd59534a - - -] Refusing to bind port 341c3bc6-fc6e-40cd-a0c2-e07275cb7cb6 to dead agent: {'binary': u'neutron-openvswitch-agent', 'description': None, 'admin_state_up': True, 'heartbeat_timestamp': datetime.datetime(2016, 11, 9, 21, 1, 25), 'availability_zone': None, 'alive': False, 'topic': u'N/A', 'host': u'osp-d.duacsdn.sero.gic.ericsson.se', 'agent_type': u'Open vSwitch agent', 'resource_versions': {u'QosPolicy': u'1.0'}, 'created_at': datetime.datetime(2016, 11, 1, 10, 41, 22), 'started_at': datetime.datetime(2016, 11, 1, 14, 34, 57), 'id': u'b3f77775-71ea-48e8-8e09-87faed848872', 'configurations': {u'in_distributed_mode': False, u'datapath_type': u'system', u'vhostuser_socket_dir': u'/var/run/openvswitch', u'tunneling_ip': u'192.0.2.1', u'arp_responder_enabled': False, u'devices': 0, u'ovs_capabilities': {u'datapath_types': [u'netdev', u'system'], u'iface_types': [u'geneve', u'gre', u'gre64', u'internal', u'ipsec_gre', u'ipsec_gre64', u'lisp', u'patch', u'stt', u'system', u'tap', u'vxlan']}, u'log_agent_heartbeats': False, u'l2_population': False, u'tunnel_types': [], u'extensions': [], u'enable_distributed_routing': False, u'bridge_mappings': {u'ctlplane': u'br-ctlplane'}}}
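The 'alive': False in that dump follows directly from the stale heartbeat: neutron marks an agent dead when its last heartbeat is older than agent_down_time (75 seconds by default), and the heartbeat_timestamp above is from 2016-11-09 while the request is from 2016-11-29. A quick check of that arithmetic, with the timestamps copied from the log (GNU date assumed, UTC assumed):

```shell
# Timestamps from the neutron log above
last_heartbeat=$(date -u -d '2016-11-09 21:01:25' +%s)
request_time=$(date -u -d '2016-11-29 15:02:27' +%s)
age=$(( request_time - last_heartbeat ))

# Neutron's default agent_down_time, in seconds
agent_down_time=75

echo "heartbeat age: ${age}s"
[ "$age" -gt "$agent_down_time" ] && echo "agent considered dead"
```

The heartbeat is nearly three weeks old, so the ML2 mechanism driver rightly refuses to bind the provisioning port to that agent until it is running and reporting again.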

Hello,

Is there any update on this issue? I'm facing the same problem trying to deploy the overcloud with OSP-D 11.
