Stack overcloud CREATE_FAILED - "Message: No valid host was found. , Code: 500"

I am trying to deploy RHOSP 13 on an RHEV 4.3 platform, but it fails every time during the openstack overcloud deploy step. I found a lot of articles about this error on Google, as well as Red Hat solutions, but nothing helped. The RHEV hypervisor node has plenty of resources (40 CPU cores, 128 GB RAM, and 1 TB disk). Any further help is highly appreciated.

openstack overcloud deploy error:

 Stack overcloud CREATE_FAILED

overcloud.Controller.1.Controller:
  resource_type: OS::TripleO::ControllerServer
  physical_resource_id: e36a191c-041c-4d06-99ef-03fdac37e137
  status: CREATE_FAILED
  status_reason: |
    ResourceInError: resources.Controller: Went to status ERROR due to "Message: No valid host was found. , Code: 500"
overcloud.Controller.0.Controller:
  resource_type: OS::TripleO::ControllerServer
  physical_resource_id: e40fff3c-1643-4150-9557-496ca764a8e4
  status: CREATE_FAILED
  status_reason: |
    ResourceInError: resources.Controller: Went to status ERROR due to "Message: No valid host was found. , Code: 500"
overcloud.Compute.1.NovaCompute:
  resource_type: OS::TripleO::ComputeServer
  physical_resource_id: 5e7a230a-358c-457c-a733-45c0a9bd62d6
  status: CREATE_FAILED
  status_reason: |
    ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. , Code: 500"
overcloud.Compute.0.NovaCompute:
  resource_type: OS::TripleO::ComputeServer
  physical_resource_id: fea4c120-ac3b-40fa-806d-a74f9cab2cf4
  status: CREATE_FAILED
  status_reason: |
    ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. , Code: 500"
overcloud.CephStorage.1.CephStorage:
  resource_type: OS::TripleO::CephStorageServer
  physical_resource_id: db62a251-621f-4a11-86f3-713f0d78280d
  status: CREATE_FAILED
  status_reason: |
    ResourceInError: resources.CephStorage: Went to status ERROR due to "Message: No valid host was found. , Code: 500"
overcloud.CephStorage.0.CephStorage:
  resource_type: OS::TripleO::CephStorageServer
  physical_resource_id: 9997e37e-eb54-4361-be9c-7d04f095f651
  status: CREATE_FAILED
  status_reason: |
    ResourceInError: resources.CephStorage: Went to status ERROR due to "Message: No valid host was found. , Code: 500"
Heat Stack create failed.
(undercloud) stack@undercloud.example.com:/home/stack>openstack flavor list
+--------------------------------------+---------------+-------+------+-----------+-------+-----------+
| ID                                   | Name          |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+---------------+-------+------+-----------+-------+-----------+
| 05bb09dd-65d7-43e1-a732-8e52a164ab4c | block-storage |  4096 |   40 |         0 |     1 | True      |
| 121f9b30-1536-4beb-82fc-4f10b3daa673 | ceph-storage  | 16384 |   40 |         0 |     1 | True      |
| 52b7a88a-0cd2-4986-875b-edbc1bf1aa8c | baremetal     |  4096 |   40 |         0 |     1 | True      |
| bda3a743-2105-4a6d-a698-4c6dc6b45bd6 | swift-storage |  4096 |   40 |         0 |     1 | True      |
| c455fb5a-4a50-49b5-89db-18c1e4517f77 | compute       |  4096 |   40 |         0 |     1 | True      |
| dd16a84d-b22e-4a82-a890-6cdedd194b47 | control       |  4096 |   40 |         0 |     1 | True      |
+--------------------------------------+---------------+-------+------+-----------+-------+-----------+
(undercloud) stack@undercloud.example.com:/home/stack>

(undercloud) stack@undercloud.example.com:/home/stack>openstack baremetal node list
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name        | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| 2fc8cc9a-c5d6-4b52-b0a4-b1108e2817a2 | controller1 | None          | power off   | available          | False       |
| 6767d1e1-84ec-40ef-95e2-92a671b8add6 | controller2 | None          | power off   | available          | False       |
| aef937a9-8cda-415c-8363-90376d921b2c | compute1    | None          | power off   | available          | False       |
| a11b0bb3-177e-4ac8-9afc-d159e45f953a | compute2    | None          | power off   | available          | False       |
| 2a8bf848-8546-4b8d-ba70-24f1ffce8de4 | ceph1       | None          | power off   | available          | False       |
| 365995b7-ce77-4125-837c-4d5fc55c3468 | ceph2       | None          | power off   | available          | False       |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
(undercloud) stack@undercloud.example.com:/home/stack>

(undercloud) stack@undercloud.example.com:/home/stack>openstack overcloud profiles list
+--------------------------------------+-------------+-----------------+-----------------+-------------------+
| Node UUID                            | Node Name   | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+-------------+-----------------+-----------------+-------------------+
| 2fc8cc9a-c5d6-4b52-b0a4-b1108e2817a2 | controller1 | available       | control         |                   |
| 6767d1e1-84ec-40ef-95e2-92a671b8add6 | controller2 | available       | control         |                   |
| aef937a9-8cda-415c-8363-90376d921b2c | compute1    | available       | compute         |                   |
| a11b0bb3-177e-4ac8-9afc-d159e45f953a | compute2    | available       | compute         |                   |
| 2a8bf848-8546-4b8d-ba70-24f1ffce8de4 | ceph1       | available       | ceph-storage    |                   |
| 365995b7-ce77-4125-837c-4d5fc55c3468 | ceph2       | available       | ceph-storage    |                   |
+--------------------------------------+-------------+-----------------+-----------------+-------------------+
(undercloud) stack@undercloud.example.com:/home/stack>
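
The scheduler only picks a bare metal node when every property Ironic recorded for it (memory_mb, local_gb, cpus, and the capabilities profile) meets or exceeds what the flavor demands. A minimal sketch of that comparison, using the values from the tables above (the meets_flavor helper is only for illustration; on the undercloud you would compare the output of "openstack baremetal node show <node> --fields properties" against "openstack flavor show control"):

```shell
# Nova schedules a node only if node properties >= flavor requirements.
meets_flavor() {
  # args: node_ram_mb node_disk_gb node_vcpus flavor_ram_mb flavor_disk_gb flavor_vcpus
  [ "$1" -ge "$4" ] && [ "$2" -ge "$5" ] && [ "$3" -ge "$6" ]
}

# The control flavor wants 4096 MB RAM / 40 GB disk / 1 vCPU;
# controller1 was registered with exactly those values.
if meets_flavor 4096 40 1 4096 40 1; then
  echo "controller1 can match the control flavor"
else
  echo "controller1 fails a flavor filter"
fi
# prints: controller1 can match the control flavor
```

Note that the comparison is >=, so an exact match passes, but a node even 1 GB short on any property is filtered out.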

nova-conductor.log output:

2019-12-24 21:19:22.213 4289 DEBUG oslo_db.sqlalchemy.engines [req-242fa720-0306-4985-949c-9dd476a18539 e91ab96ba445474095c6a56f0e8b5e63 2c1ef3d24f5c49258602f4439a6a6c53 - default default] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:290
2019-12-24 21:19:22.596 4290 DEBUG oslo_db.sqlalchemy.engines [req-d0e42106-aca0-42f6-a204-96e5f9f1ed7b e91ab96ba445474095c6a56f0e8b5e63 2c1ef3d24f5c49258602f4439a6a6c53 - default default] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:290
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager [req-242fa720-0306-4985-949c-9dd476a18539 e91ab96ba445474095c6a56f0e8b5e63 2c1ef3d24f5c49258602f4439a6a6c53 - default default] Failed to schedule instances: NoValidHost_Remote: No valid host was found. 
Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 229, in inner
    return func(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 139, in select_destinations
    raise exception.NoValidHost(reason="")

NoValidHost: No valid host was found.
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager Traceback (most recent call last):
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 1165, in schedule_and_build_instances
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager     instance_uuids, return_alternates=True)
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 760, in _schedule_instances
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager     return_alternates=return_alternates)
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/utils.py", line 793, in wrapped
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager     return func(*args, **kwargs)
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 53, in select_destinations
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager     instance_uuids, return_objects, return_alternates)
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager     return getattr(self.instance, __name)(*args, **kwargs)
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py", line 42, in select_destinations
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager     instance_uuids, return_objects, return_alternates)
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 158, in select_destinations
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager     return cctxt.call(ctxt, 'select_destinations', **msg_args)
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 174, in call
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager     retry=self.retry)
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 131, in _send
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager     timeout=timeout, retry=retry)
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 625, in send
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager     retry=retry)
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 616, in _send
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager     raise result
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager NoValidHost_Remote: No valid host was found. 
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager Traceback (most recent call last):
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager 
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 229, in inner
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager     return func(*args, **kwargs)
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager 
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 139, in select_destinations
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager     raise exception.NoValidHost(reason="")
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager 
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager NoValidHost: No valid host was found. 
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager 
2019-12-24 21:19:22.918 4289 ERROR nova.conductor.manager 
2019-12-24 21:19:22.938 4289 DEBUG oslo_concurrency.lockutils [req-242fa720-0306-4985-949c-9dd476a18539 e91ab96ba445474095c6a56f0e8b5e63 2c1ef3d24f5c49258602f4439a6a6c53 - default default] Lock "00000000-0000-0000-0000-000000000000" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2019-12-24 21:19:22.940 4289 DEBUG oslo_concurrency.lockutils [req-242fa720-0306-4985-949c-9dd476a18539 e91ab96ba445474095c6a56f0e8b5e63 2c1ef3d24f5c49258602f4439a6a6c53 - default default] Lock "00000000-0000-0000-0000-000000000000" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.002s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2019-12-24 21:19:22.957 4289 DEBUG oslo_db.sqlalchemy.engines [req-242fa720-0306-4985-949c-9dd476a18539 e91ab96ba445474095c6a56f0e8b5e63 2c1ef3d24f5c49258602f4439a6a6c53 - default default] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:290
2019-12-24 21:19:23.212 4289 DEBUG nova.conductor.manager [req-242fa720-0306-4985-949c-9dd476a18539 e91ab96ba445474095c6a56f0e8b5e63 2c1ef3d24f5c49258602f4439a6a6c53 - default default] [instance: 6258eabf-d7b9-451a-b5a6-989eb5981aa6] block_device_mapping [BlockDeviceMapping(attachment_id=<?>,boot_index=0,connection_info=None,created_at=<?>,delete_on_termination=True,deleted=<?>,deleted_at=<?>,destination_type='local',device_name=None,device_type='disk',disk_bus=None,guest_format=None,id=<?>,image_id='17d31491-de85-4bc5-b9db-739eb8baeca4',instance=<?>,instance_uuid=<?>,no_device=False,snapshot_id=None,source_type='image',tag=None,updated_at=<?>,uuid=<?>,volume_id=None,volume_size=None)] _create_block_device_mapping /usr/lib/python2.7/site-packages/nova/conductor/manager.py:1063
2019-12-24 21:19:23.215 4289 DEBUG oslo_concurrency.lockutils [req-242fa720-0306-4985-949c-9dd476a18539 e91ab96ba445474095c6a56f0e8b5e63 2c1ef3d24f5c49258602f4439a6a6c53 - default default] Lock "00000000-0000-0000-0000-000000000000" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2019-12-24 21:19:23.215 4289 DEBUG oslo_concurrency.lockutils [req-242fa720-0306-4985-949c-9dd476a18539 e91ab96ba445474095c6a56f0e8b5e63 2c1ef3d24f5c49258602f4439a6a6c53 - default default] Lock "00000000-0000-0000-0000-000000000000" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.001s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2019-12-24 21:19:23.248 4289 WARNING nova.scheduler.utils [req-242fa720-0306-4985-949c-9dd476a18539 e91ab96ba445474095c6a56f0e8b5e63 2c1ef3d24f5c49258602f4439a6a6c53 - default default] Failed to compute_task_build_instances: No valid host was found. 
Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 229, in inner
    return func(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 139, in select_destinations
    raise exception.NoValidHost(reason="")

instackenv.json (MAC and IP addresses changed):

{
  "nodes":[
    {
      "mac":[
        "bb:bb:bb:bb:bb:bb"
      ],
      "name":"controller1",
      "cpu":"1",
      "memory":"4096",
      "disk":"40",
      "arch":"x86_64",
      "pm_type":"staging-ovirt",
      "pm_user":"admin@internal",
      "pm_password":"password123",
      "pm_addr":"10.10.0.10",
      "pm_vm_name":"controller1"
    },
    {
      "mac":[
        "bb:bb:bb:bb:bb:bb"
      ],
      "name":"controller2",
      "cpu":"1",
      "memory":"4096",
      "disk":"40",
      "arch":"x86_64",
      "pm_type":"staging-ovirt",
      "pm_user":"admin@internal",
      "pm_password":"password123",
      "pm_addr":"10.10.0.10",
      "pm_vm_name":"controller2"
    },
    {
      "mac":[
        "bb:bb:bb:bb:bb:bb"
      ],
      "name":"compute1",
      "cpu":"1",
      "memory":"4096",
      "disk":"40",
      "arch":"x86_64",
      "pm_type":"staging-ovirt",
      "pm_user":"admin@internal",
      "pm_password":"password123",
      "pm_addr":"10.10.0.10",
      "pm_vm_name":"compute1"
    },
    {
      "mac":[
        "bb:bb:bb:bb:bb:bb"
      ],
      "name":"compute2",
      "cpu":"1",
      "memory":"4096",
      "disk":"40",
      "arch":"x86_64",
      "pm_type":"staging-ovirt",
      "pm_user":"admin@internal",
      "pm_password":"password123",
      "pm_addr":"10.10.0.10",
      "pm_vm_name":"compute2"
    },
    {
      "mac":[
        "bb:bb:bb:bb:bb:bb"
      ],
      "name":"ceph1",
      "cpu":"1",
      "memory":"16384",
      "disk":"40",
      "arch":"x86_64",
      "pm_type":"staging-ovirt",
      "pm_user":"admin@internal",
      "pm_password":"password123",
      "pm_addr":"10.10.0.10",
      "pm_vm_name":"ceph1"
    },
    {
      "mac":[
        "bb:bb:bb:bb:bb:bb"
      ],
      "name":"ceph2",
      "cpu":"1",
      "memory":"16384",
      "disk":"40",
      "arch":"x86_64",
      "pm_type":"staging-ovirt",
      "pm_user":"admin@internal",
      "pm_password":"password123",
      "pm_addr":"10.10.0.10",
      "pm_vm_name":"ceph2"
    }
  ]
}
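
One thing worth checking with these values: the nodes are registered with "disk":"40" while every flavor also asks for a 40 GB disk. Introspection commonly records local_gb as the disk size minus roughly 1 GiB of partitioning reserve, so a 40 GB VM disk can end up as local_gb=39 and fail an exact-fit flavor. The 1 GiB reserve is an assumption here; verify the real value with "openstack baremetal node show <node> --fields properties". A quick sanity check of the arithmetic:

```shell
# If introspection reserves 1 GiB, a 40 GB VM disk is recorded as 39 GB
# and can no longer satisfy a flavor that demands disk=40.
vm_disk_gb=40
reserve_gb=1                          # assumed introspection reserve
node_local_gb=$((vm_disk_gb - reserve_gb))
flavor_disk_gb=40

if [ "$node_local_gb" -lt "$flavor_disk_gb" ]; then
  echo "DiskFilter would reject this node (local_gb=$node_local_gb < $flavor_disk_gb)"
fi
# prints: DiskFilter would reject this node (local_gb=39 < 40)
```

If that is the case, either shrink the flavor disk (e.g. to 39) or grow the VM disks, then re-run introspection.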

Responses

Check the nova-scheduler log to see which filters were applied and which filter eliminated all the hosts.
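
On the undercloud that log is typically /var/log/nova/nova-scheduler.log, and the line to look for is the filter that "returned 0 hosts". A sketch of the grep, demonstrated here on sample log lines (the filter names shown are examples, not taken from this deployment):

```shell
# On the real undercloud:
#   sudo grep 'returned 0 hosts' /var/log/nova/nova-scheduler.log
# Demonstrated on sample lines -- the filter that drops to 0 hosts is the culprit:
printf '%s\n' \
  'Filter RamFilter returned 6 host(s)' \
  'Filter DiskFilter returned 0 hosts' |
  grep 'returned 0 hosts'
# prints: Filter DiskFilter returned 0 hosts
```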

You could also check the console of each of these VMs to see if something went wrong during boot.