Unable to scale compute nodes or update the stack because of a Mistral problem


Hello,
Here is the detail of the problem:
"Removing the current plan files
Uploading new plan files
Started Mistral Workflow tripleo.plan_management.v1.update_deployment_plan. Execution ID: 278345f3-ee86-4fd1-9786-1fa86281241b
Plan updated
Deploying templates in the directory /tmp/tripleoclient-aAqJSN/tripleo-heat-templates
Workflow not found [workflow_identifier=tripleo.deployment.v1.deploy_plan]"

Regards,

Where do you encounter this unexpected behavior? In what environment?

On the undercloud

When do you encounter this behavior? Frequently? Regularly? At certain times?

Responses

Hello.

My apologies for my poor French; I'm using Google Translate.

What version of OpenStack Platform are you using?

Also, do you see the tripleo.deployment.v1.deploy_plan workflow listed when you run the following commands:

$ source ~/stackrc
$ openstack workflow list -c Name

Hello Daniel,

Don't worry, you can speak in English. When I run those commands, I do not find tripleo.deployment.v1.deploy_plan.

+------------------------------------------------------------------------+
| Name                                                                   |
+------------------------------------------------------------------------+
| std.create_instance                                                    |
| std.delete_instance                                                    |
| tripleo.baremetal.v1.tag_nodes                                         |
| tripleo.baremetal.v1._introspect                                       |
| tripleo.baremetal.v1.configure                                         |
| tripleo.baremetal.v1.introspect                                        |
| tripleo.baremetal.v1.configure_manageable_nodes                        |
| tripleo.baremetal.v1.cellv2_discovery                                  |
| tripleo.baremetal.v1.manage                                            |
| tripleo.baremetal.v1.tag_node                                          |
| tripleo.baremetal.v1.introspect_manageable_nodes                       |
| tripleo.baremetal.v1.set_power_state                                   |
| tripleo.baremetal.v1.provide                                           |
| tripleo.baremetal.v1.register_or_update                                |
| tripleo.baremetal.v1.set_node_state                                    |
| tripleo.baremetal.v1.provide_manageable_nodes                          |
| tripleo.baremetal.v1.create_raid_configuration                         |
| tripleo.baremetal.v1.manual_cleaning                                   |
| tripleo.deployment.v1.deploy_on_server                                 |
| tripleo.deployment.v1.deploy_on_servers                                |
| tripleo.package_update.v1.cancel_stack_update                          |
| tripleo.package_update.v1.clear_breakpoints                            |
| tripleo.package_update.v1.package_update_plan                          |
| tripleo.plan_management.v1.update_deployment_plan                      |
| tripleo.plan_management.v1.create_deployment_plan                      |
| tripleo.plan_management.v1.get_passwords                               |
| tripleo.plan_management.v1.create_default_deployment_plan              |
| tripleo.scale.v1.delete_node                                           |
| tripleo.stack.v1.wait_for_stack_complete_or_failed                     |
| tripleo.stack.v1.delete_stack                                          |
| tripleo.stack.v1.wait_for_stack_in_progress                            |
| tripleo.stack.v1.wait_for_stack_does_not_exist                         |
| tripleo.swift_rings_backup.v1.create_swift_rings_backup_container_plan |
| tripleo.validations.v1.add_validation_ssh_key_parameter                |
| tripleo.validations.v1.run_validation                                  |
| tripleo.validations.v1.copy_ssh_key                                    |
| tripleo.validations.v1.run_validations                                 |
| tripleo.validations.v1.list_groups                                     |
| tripleo.validations.v1.run_groups                                      |
| tripleo.validations.v1.list                                            |
+------------------------------------------------------------------------+

Okay, that seems a little odd. That workflow should be a part of the undercloud installation.

What version of OSP are you running?

If need be, rerun the openstack undercloud install command. It should recreate the workflows. After running the command, double-check that the tripleo.deployment.v1.deploy_plan workflow gets created.

If it doesn't get created, it might be a good idea to file a support case to determine why it isn't. I'll do my best to help here, but our support team might have to analyze your logs to determine what's occurring.
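One quick way to confirm whether the workflow came back after the reinstall is to grep the workflow list. This is only a sketch: the list below is a stand-in, since a live undercloud would supply the real names.

```shell
# Sketch: check whether the deploy_plan workflow is present. The list below
# is a stand-in; on the undercloud replace it with the output of
#   source ~/stackrc && openstack workflow list -c Name -f value
workflows="tripleo.plan_management.v1.update_deployment_plan
tripleo.deployment.v1.deploy_plan
tripleo.scale.v1.delete_node"

if printf '%s\n' "$workflows" | grep -qx 'tripleo.deployment.v1.deploy_plan'; then
    echo "deploy_plan workflow present"
else
    echo "deploy_plan workflow missing - rerun openstack undercloud install"
fi
```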

Hello, I would like to tell you that I have already deployed an overcloud stack. I think I deleted the "tripleo.deployment.v1.deploy_plan" workflow by mistake while troubleshooting another error, an issue with updating the stack (it didn't allow me to update the current stack; the update was stuck at "Updating in progress" on each deployed compute node). If I rerun the command, will the stack that is already created and deployed be erased by a new one?

Sorry, I forgot to tell you that my OSP version is 10.

No problem. It should be safe to rerun openstack undercloud install while an overcloud is deployed.

Basically, if you run openstack undercloud install after the undercloud has been installed, it just reapplies the configuration, including regenerating the workflows. The actual overcloud stack data stored in the undercloud's databases will remain untouched.

This is also useful if you need to modify the configuration in undercloud.conf (except network settings) and apply the new configuration. For example, if you previously had installed the director with the UI disabled, you can later enable it by setting enable_ui = true in undercloud.conf and rerunning openstack undercloud install.
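As a concrete illustration of such an undercloud.conf change, here is a sketch against a scratch copy of the file (on the director host the real file is ~/undercloud.conf, and you would rerun openstack undercloud install afterwards):

```shell
# Sketch: flip enable_ui in a scratch copy of undercloud.conf.
# The file created here is hypothetical; edit the real ~/undercloud.conf
# on the director host instead.
conf=$(mktemp)
printf '[DEFAULT]\nenable_ui = false\n' > "$conf"

# Replace the existing value, or append the key if it is absent
if grep -q '^enable_ui' "$conf"; then
    sed -i 's/^enable_ui.*/enable_ui = true/' "$conf"
else
    echo 'enable_ui = true' >> "$conf"
fi

grep '^enable_ui' "$conf"
```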

Wonderful! Thank you so much, Daniel; we will run that on Monday. I think we are going to bother you a bit more afterwards because of our real issue: the stack update gets stuck in "update in progress" forever and is then put into "update failed" status.

Sure thing! Feel free to reach out as much as you like. I'll try to help you diagnose the issue where possible.

Hello Daniel, I found this problem after what I did last week:

[root@overcloud-controller-0 ~]# pcs status
Cluster name: tripleo_cluster
Stack: corosync
Current DC: overcloud-controller-0 (version 1.1.16-12.el7-94ff4df) - partition with quorum
Last updated: Mon Aug 6 09:29:12 2018
Last change: Mon Aug 6 09:24:48 2018 by hacluster via crmd on overcloud-controller-0

1 node configured
7 resources configured

Online: [ overcloud-controller-0 ]

Full list of resources:

 Master/Slave Set: galera-master [galera]
     Stopped: [ overcloud-controller-0 ]
 Clone Set: rabbitmq-clone [rabbitmq]
     Stopped: [ overcloud-controller-0 ]
 Master/Slave Set: redis-master [redis]
     Stopped: [ overcloud-controller-0 ]
 ip-192.168.24.12 (ocf::heartbeat:IPaddr2): Stopped
 ip-192.168.24.18 (ocf::heartbeat:IPaddr2): Stopped
 Clone Set: haproxy-clone [haproxy]
     Stopped: [ overcloud-controller-0 ]
 openstack-cinder-volume (systemd:openstack-cinder-volume): Stopped

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

I've already tried a resource cleanup and an enable, but nothing changed.

I solved it :). Now I will proceed to rerun openstack undercloud install:

[stack@undercloud root]$ openstack undercloud install
2018-08-06 13:28:55,379 INFO: Logging to /home/stack/.instack/install-undercloud.log
2018-08-06 13:28:55,516 INFO: Checking for a FQDN hostname...
2018-08-06 13:28:55,566 INFO: Static hostname detected as undercloud.devolab
2018-08-06 13:28:55,591 INFO: Transient hostname detected as undercloud.devolab
2018-08-06 13:28:55,658 INFO: Running yum clean all
2018-08-06 13:28:55,983 INFO: Loaded plugins: product-id, search-disabled-repos, subscription-manager
2018-08-06 13:28:56,915 INFO: This system is registered with an entitlement server, but is not receiving updates. You can use subscription-manager to assign subscriptions.
2018-08-06 13:28:56,921 INFO: There are no enabled repos.
2018-08-06 13:28:56,921 INFO:  Run "yum repolist all" to see the repos you have.
2018-08-06 13:28:56,921 INFO:  To enable Red Hat Subscription Management repositories:
2018-08-06 13:28:56,922 INFO:      subscription-manager repos --enable <repo>
2018-08-06 13:28:56,922 INFO:  To enable custom repositories:
2018-08-06 13:28:56,922 INFO:      yum-config-manager --enable <repo>
2018-08-06 13:28:56,972 ERROR:
#############################################################################
Undercloud install failed.

Reason: yum-clean-all failed. See log for details.

See the previous output for details about what went wrong.  The full install
log can be found at /home/stack/.instack/install-undercloud.log.

#############################################################################

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 1464, in install
    _run_yum_clean_all(instack_env)
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 1177, in _run_yum_clean_all
    _run_live_command(args, instack_env, 'yum-clean-all')
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 558, in _run_live_command
    raise RuntimeError('%s failed. See log for details.' % name)
RuntimeError: yum-clean-all failed. See log for details.
Command 'instack-install-undercloud' returned non-zero exit status 1

Hello Daniel, I didn't understand why it doesn't work for me when I run

openstack undercloud install

It looks like the undercloud doesn't have any repos enabled anymore. You might need to re-enable the repos to get it to work.

Please, can you tell me how I can do that?

I know that I should use subscription-manager repos --enable= but which repos should I add?

Sure, it should be this:

sudo subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms --enable=rhel-ha-for-rhel-7-server-rpms --enable=rhel-7-server-openstack-10-rpms

More information:

[root@undercloud ~]# rpm -qa | grep tripleoclient
python-tripleoclient-6.2.0-1.el7ost.noarch
[root@undercloud ~]# sudo subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms --enable=rhel-ha-for-rhel-7-server-rpms --enable=rhel-7-server-openstack-10-rpms
Error: 'rhel-ha-for-rhel-7-server-rpms' does not match a valid repository ID. Use "subscription-manager repos --list" to see valid repositories.
Error: 'rhel-7-server-openstack-10-rpms' does not match a valid repository ID. Use "subscription-manager repos --list" to see valid repositories.
Repository 'rhel-7-server-rpms' is enabled for this system.
Repository 'rhel-7-server-extras-rpms' is enabled for this system.
Repository 'rhel-7-server-rh-common-rpms' is enabled for this system.

It doesn't work for me.

So it looks like you don't have the right subscription enabled for the undercloud; only a generic RHEL subscription seems to be attached. Have a look at the link I added in my previous post to enable the right subscription.

Once the repos are enabled, you should be able to rerun openstack undercloud install.
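A quick sanity check once the subscription is sorted out: confirm the OSP 10 repo ID is actually among the enabled repos. The list below is a stand-in for real output; on the undercloud you would pipe the output of `sudo subscription-manager repos --list-enabled` (or `yum repolist`) instead.

```shell
# Sketch: verify the OSP 10 repo is among the enabled repo IDs.
# The list below is a stand-in for real subscription-manager output.
enabled="rhel-7-server-rpms
rhel-7-server-extras-rpms
rhel-7-server-rh-common-rpms
rhel-ha-for-rhel-7-server-rpms
rhel-7-server-openstack-10-rpms"

if printf '%s\n' "$enabled" | grep -qx 'rhel-7-server-openstack-10-rpms'; then
    echo "OSP 10 repo enabled"
else
    echo "OSP 10 repo still missing"
fi
```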

[root@undercloud ~]# nova list
No handlers could be found for logger "keystoneauth.identity.generic.base"
ERROR (SSLError): SSL exception connecting to https://xxx.xx.xxx.xxx:13000/v2.0/tokens: hostname 'xxx.xx.xxx.xxx' doesn't match 'xxx.xx.xxx.xxx'

Look what happens when I run the command.

So a couple of things here:

  • The openstack commands need to be run as the stack user.

  • The nova list command might be deprecated. Use the following:

$ source ~/stackrc
$ openstack server list

Even after doing this, it still tells me the same thing, and I have already sourced the stackrc:

No handlers could be found for logger "keystoneauth.identity.generic.base"
ERROR (SSLError): SSL exception connecting to https://xxx.xx.xxx.xxx:13000/v2.0/tokens: hostname 'xxx.xx.xxx.xxx' doesn't match 'xxx.xx.xxx.xxx'

This already happened to me once; I followed this link (https://access.redhat.com/solutions/3357871?band=se) and it worked, but now nothing.

When you say "but now nothing", do you mean no overcloud nodes appear?

No, look what happens when I run what you said:

[root@undercloud ~]# source stackrc
[root@undercloud ~]# openstack server list
Certificate did not match expected hostname: xxx.xx.xxx.xxx. Certificate: {'subjectAltName': (('DNS', 'xxx.xx.xxx.xxx'),), 'notBefore': u'Jul 27 08:07:14 2018 GMT', 'serialNumber': u'56B853AA36B5419C97F57DEA848ABF60', 'notAfter': 'Jul 27 08:04:16 2019 GMT', 'version': 3L, 'subject': ((('commonName', u'172.16.138.161'),),), 'issuer': ((('commonName', u'Local Signing Authority'),), (('commonName', u'56b853aa-36b5419c-97f57dea-848ab7b7'),))}
Discovering versions from the identity service failed when creating the password plugin. Attempting to determine version from URL.
Certificate did not match expected hostname: 172.16.138.161. Certificate: {'subjectAltName': (('DNS', 'xxx.xx.xxx.xxx'),), 'notBefore': u'Jul 27 08:07:14 2018 GMT', 'serialNumber': u'56B853AA36B5419C97F57DEA848ABF60', 'notAfter': 'Jul 27 08:04:16 2019 GMT', 'version': 3L, 'subject': ((('commonName', u'xxx.xx.xxx.xxx'),),), 'issuer': ((('commonName', u'Local Signing Authority'),), (('commonName', u'56b853aa-36b5419c-97f57dea-848ab7b7'),))}
SSL exception connecting to https://xxx.xx.xxx.xxx:13000/v2.0/tokens: hostname 'xxx.xx.xxx.xxx' doesn't match 'xxx.xx.xxx.xxx'

And when I run openstack undercloud install, look what happens:

2018-08-07 10:24:45,621 INFO: ln: ‘/etc/puppet/modules/corosync’: cannot overwrite directory
2018-08-07 10:24:45,621 INFO: ln: ‘/etc/puppet/modules/memcached’: cannot overwrite directory
2018-08-07 10:24:45,626 INFO: INFO: 2018-08-07 10:24:45,624 -- ############### End stdout/stderr logging ###############
2018-08-07 10:24:45,626 INFO: ERROR: 2018-08-07 10:24:45,625 --     Hook FAILED.
2018-08-07 10:24:45,627 INFO: ERROR: 2018-08-07 10:24:45,625 -- Failed running command ['dib-run-parts', u'/tmp/tmpSKxwjA/install.d']
2018-08-07 10:24:45,627 INFO:   File "/usr/lib/python2.7/site-packages/instack/main.py", line 168, in main
2018-08-07 10:24:45,628 INFO:     em.run()
2018-08-07 10:24:45,628 INFO:   File "/usr/lib/python2.7/site-packages/instack/runner.py", line 79, in run
2018-08-07 10:24:45,628 INFO:     self.run_hook(hook)
2018-08-07 10:24:45,629 INFO:   File "/usr/lib/python2.7/site-packages/instack/runner.py", line 178, in run_hook
2018-08-07 10:24:45,629 INFO:     raise Exception("Failed running command %s" % command)
2018-08-07 10:24:45,630 INFO: ERROR: 2018-08-07 10:24:45,626 -- None
2018-08-07 10:24:45,642 ERROR:
#############################################################################
Undercloud install failed.

Reason: instack failed. See log for details.

See the previous output for details about what went wrong.  The full install
log can be found at /home/stack/.instack/install-undercloud.log.

#############################################################################

I'll reach out privately to get the full log from your openstack undercloud install.

Do you have any idea about the problem?

I tried to reach out over email but couldn't seem to access your email.

Would you be able to post a bit further back in the log (probably the last 50 lines or so)?

[root@undercloud ~]# tail -50 /home/stack/.instack/install-undercloud.log
2018-08-07 13:34:48,231 INFO: + mkdir -p /etc/puppet/manifests
2018-08-07 13:34:48,234 INFO: ++ dirname /tmp/tmpWUW34s/install.d/10-puppet-stack-config-puppet-module
2018-08-07 13:34:48,236 INFO: + cp /tmp/tmpWUW34s/install.d/../puppet-stack-config.pp /etc/puppet/manifests/puppet-stack-config.pp
2018-08-07 13:34:48,246 INFO: dib-run-parts Tue Aug  7 13:34:48 CEST 2018 10-puppet-stack-config-puppet-module completed
2018-08-07 13:34:48,249 INFO: dib-run-parts Tue Aug  7 13:34:48 CEST 2018 Running /tmp/tmpWUW34s/install.d/11-create-template-root
2018-08-07 13:34:48,256 INFO: ++ os-apply-config --print-templates
2018-08-07 13:34:48,642 INFO: + TEMPLATE_ROOT=/usr/libexec/os-apply-config/templates
2018-08-07 13:34:48,643 INFO: + mkdir -p /usr/libexec/os-apply-config/templates
2018-08-07 13:34:48,651 INFO: dib-run-parts Tue Aug  7 13:34:48 CEST 2018 11-create-template-root completed
2018-08-07 13:34:48,653 INFO: dib-run-parts Tue Aug  7 13:34:48 CEST 2018 Running /tmp/tmpWUW34s/install.d/11-hiera-orc-install
2018-08-07 13:34:48,659 INFO: + set -o pipefail
2018-08-07 13:34:48,660 INFO: + mkdir -p /usr/libexec/os-refresh-config/configure.d/
2018-08-07 13:34:48,663 INFO: ++ dirname /tmp/tmpWUW34s/install.d/11-hiera-orc-install
2018-08-07 13:34:48,665 INFO: + install -m 0755 -o root -g root /tmp/tmpWUW34s/install.d/../10-hiera-disable /usr/libexec/os-refresh-config/configure.d/10-hiera-disable
2018-08-07 13:34:48,679 INFO: ++ dirname /tmp/tmpWUW34s/install.d/11-hiera-orc-install
2018-08-07 13:34:48,681 INFO: + install -m 0755 -o root -g root /tmp/tmpWUW34s/install.d/../40-hiera-datafiles /usr/libexec/os-refresh-config/configure.d/40-hiera-datafiles
2018-08-07 13:34:48,695 INFO: dib-run-parts Tue Aug  7 13:34:48 CEST 2018 11-hiera-orc-install completed
2018-08-07 13:34:48,698 INFO: dib-run-parts Tue Aug  7 13:34:48 CEST 2018 Running /tmp/tmpWUW34s/install.d/11-truncate-nova-orc-install
2018-08-07 13:34:48,703 INFO: + set -o pipefail
2018-08-07 13:34:48,703 INFO: + mkdir -p /usr/libexec/os-refresh-config/configure.d/40-truncate-nova-config
2018-08-07 13:34:48,707 INFO: ++ dirname /tmp/tmpWUW34s/install.d/11-truncate-nova-orc-install
2018-08-07 13:34:48,709 INFO: + install -m 0755 -o root -g root /tmp/tmpWUW34s/install.d/../40-truncate-nova-config /usr/libexec/os-refresh-config/configure.d/40-truncate-nova-config
2018-08-07 13:34:48,723 INFO: dib-run-parts Tue Aug  7 13:34:48 CEST 2018 11-truncate-nova-orc-install completed
2018-08-07 13:34:48,725 INFO: dib-run-parts Tue Aug  7 13:34:48 CEST 2018 Running /tmp/tmpWUW34s/install.d/75-puppet-modules-package
2018-08-07 13:34:48,732 INFO: + find /opt/stack/puppet-modules/ -mindepth 1
2018-08-07 13:34:48,733 INFO: + read
2018-08-07 13:34:48,744 INFO: + ln -f -s /usr/share/openstack-puppet/modules/aodh /usr/share/openstack-puppet/modules/apache /usr/share/openstack-puppet/modules/barbican /usr/share/openstack-puppet/modules/cassandra /usr/share/openstack-puppet/modules/ceilometer /usr/share/openstack-puppet/modules/ceph /usr/share/openstack-puppet/modules/certmonger /usr/share/openstack-puppet/modules/cinder /usr/share/openstack-puppet/modules/collectd /usr/share/openstack-puppet/modules/concat /usr/share/openstack-puppet/modules/contrail /usr/share/openstack-puppet/modules/corosync /usr/share/openstack-puppet/modules/datacat /usr/share/openstack-puppet/modules/ec2api /usr/share/openstack-puppet/modules/elasticsearch /usr/share/openstack-puppet/modules/firewall /usr/share/openstack-puppet/modules/fluentd /usr/share/openstack-puppet/modules/git /usr/share/openstack-puppet/modules/glance /usr/share/openstack-puppet/modules/gnocchi /usr/share/openstack-puppet/modules/haproxy /usr/share/openstack-puppet/modules/heat /usr/share/openstack-puppet/modules/horizon /usr/share/openstack-puppet/modules/inifile /usr/share/openstack-puppet/modules/ipaclient /usr/share/openstack-puppet/modules/ironic /usr/share/openstack-puppet/modules/java /usr/share/openstack-puppet/modules/kafka /usr/share/openstack-puppet/modules/keepalived /usr/share/openstack-puppet/modules/keystone /usr/share/openstack-puppet/modules/kibana3 /usr/share/openstack-puppet/modules/kmod /usr/share/openstack-puppet/modules/manila /usr/share/openstack-puppet/modules/memcached /usr/share/openstack-puppet/modules/midonet /usr/share/openstack-puppet/modules/mistral /usr/share/openstack-puppet/modules/module-data /usr/share/openstack-puppet/modules/mongodb /usr/share/openstack-puppet/modules/mysql /usr/share/openstack-puppet/modules/n1k_vsm /usr/share/openstack-puppet/modules/neutron /usr/share/openstack-puppet/modules/nova /usr/share/openstack-puppet/modules/nssdb /usr/share/openstack-puppet/modules/ntp 
/usr/share/openstack-puppet/modules/octavia /usr/share/openstack-puppet/modules/opendaylight /usr/share/openstack-puppet/modules/openstack_extras /usr/share/openstack-puppet/modules/openstacklib /usr/share/openstack-puppet/modules/oslo /usr/share/openstack-puppet/modules/ovn /usr/share/openstack-puppet/modules/pacemaker /usr/share/openstack-puppet/modules/panko /usr/share/openstack-puppet/modules/rabbitmq /usr/share/openstack-puppet/modules/redis /usr/share/openstack-puppet/modules/remote /usr/share/openstack-puppet/modules/rsync /usr/share/openstack-puppet/modules/sahara /usr/share/openstack-puppet/modules/sensu /usr/share/openstack-puppet/modules/snmp /usr/share/openstack-puppet/modules/ssh /usr/share/openstack-puppet/modules/staging /usr/share/openstack-puppet/modules/stdlib /usr/share/openstack-puppet/modules/swift /usr/share/openstack-puppet/modules/sysctl /usr/share/openstack-puppet/modules/systemd /usr/share/openstack-puppet/modules/tempest /usr/share/openstack-puppet/modules/timezone /usr/share/openstack-puppet/modules/tomcat /usr/share/openstack-puppet/modules/tripleo /usr/share/openstack-puppet/modules/trove /usr/share/openstack-puppet/modules/uchiwa /usr/share/openstack-puppet/modules/vcsrepo /usr/share/openstack-puppet/modules/vlan /usr/share/openstack-puppet/modules/vswitch /usr/share/openstack-puppet/modules/xinetd /usr/share/openstack-puppet/modules/zaqar /usr/share/openstack-puppet/modules/zookeeper /etc/puppet/modules/
2018-08-07 13:34:48,745 INFO: ln: ‘/etc/puppet/modules/corosync’: cannot overwrite directory
2018-08-07 13:34:48,745 INFO: ln: ‘/etc/puppet/modules/memcached’: cannot overwrite directory
2018-08-07 13:34:48,752 INFO: INFO: 2018-08-07 13:34:48,750 -- ############### End stdout/stderr logging ###############
2018-08-07 13:34:48,752 INFO: ERROR: 2018-08-07 13:34:48,750 --     Hook FAILED.
2018-08-07 13:34:48,753 INFO: ERROR: 2018-08-07 13:34:48,751 -- Failed running command ['dib-run-parts', u'/tmp/tmpWUW34s/install.d']
2018-08-07 13:34:48,753 INFO:   File "/usr/lib/python2.7/site-packages/instack/main.py", line 168, in main
2018-08-07 13:34:48,754 INFO:     em.run()
2018-08-07 13:34:48,754 INFO:   File "/usr/lib/python2.7/site-packages/instack/runner.py", line 79, in run
2018-08-07 13:34:48,754 INFO:     self.run_hook(hook)
2018-08-07 13:34:48,755 INFO:   File "/usr/lib/python2.7/site-packages/instack/runner.py", line 178, in run_hook
2018-08-07 13:34:48,755 INFO:     raise Exception("Failed running command %s" % command)
2018-08-07 13:34:48,756 INFO: ERROR: 2018-08-07 13:34:48,753 -- None
2018-08-07 13:34:48,770 ERROR:
#############################################################################
Undercloud install failed.

Reason: instack failed. See log for details.

See the previous output for details about what went wrong.  The full install
log can be found at /home/stack/.instack/install-undercloud.log.

#############################################################################

If you need more than the last 50 lines, just tell me.

So it looks like it's having trouble running some of the installation scripts (which were temporarily stored in /tmp/tmpWUW34s/install.d).

Can you run the following to see if the content still exists:

$ sudo ls /tmp/tmpWUW34s/install.d/

[root@undercloud ~]#  sudo ls /tmp/tmpWUW34s/install.d/
02-puppet-stack-config                11-hiera-orc-install          99-os-refresh-config-install-scripts  puppet-modules-package-install
10-hiera-yaml-symlink                 11-truncate-nova-orc-install  os-apply-config-source-install        puppet-modules-source-install
10-puppet-stack-config-puppet-module  75-puppet-modules-package     os-refresh-config-source-install
11-create-template-root               99-install-config-templates   package-installs-hiera

Okay, can you also run the following:

$ ls -l /etc/puppet/modules

And post the output for the corosync and memcached folders?

But why can't I run openstack commands?

With regards to why openstack commands aren't working on the undercloud, it seems to be a problem with SSL CA certs. If necessary, we can use openstack undercloud install to refresh the certs. But we'll get to that step in a bit. First we need to see why the openstack undercloud install isn't working.
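For background, the "hostname doesn't match" error means the name in the endpoint URL is compared against the certificate's CN/subjectAltName. The sketch below generates a throwaway certificate (the IP is only an example taken from the output above) and prints the names it carries; the same inspection could be done against the undercloud's real certificate, e.g. via `openssl s_client`.

```shell
# Demo with a scratch self-signed certificate; 172.16.138.161 is used
# purely as an example CN/SAN value, not a real endpoint.
tmp=$(mktemp -d)
cat > "$tmp/san.cnf" <<'EOF'
[req]
distinguished_name = dn
x509_extensions = v3
prompt = no
[dn]
CN = 172.16.138.161
[v3]
subjectAltName = IP:172.16.138.161
EOF

openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout "$tmp/key.pem" -out "$tmp/cert.pem" -config "$tmp/san.cnf" 2>/dev/null

# Print the subject and the names clients are allowed to match against
openssl x509 -in "$tmp/cert.pem" -noout -subject
openssl x509 -in "$tmp/cert.pem" -noout -text | grep -A1 'Subject Alternative Name'
```

If the name you connect with does not appear in either place, every client will raise exactly this SSL error.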

So what should I do now?

Run the following:

$ ls -l /etc/puppet/modules

And let me know if corosync and memcached are links, and where they link to.

Just FYI, I made a mistake with the ls command above. What I want to find out is whether the corosync and memcached objects are directories or links. If they're links, then everything should be fine and the undercloud is having trouble with the next script.
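The directory-versus-symlink check can also be done directly with the shell's `-L` test. A minimal sketch in a scratch directory (on the undercloud you would substitute /etc/puppet/modules/corosync and /etc/puppet/modules/memcached for the paths below):

```shell
# Demo: distinguish a real directory from a symlink, in a scratch dir.
tmp=$(mktemp -d)
mkdir "$tmp/real-dir"
ln -s "$tmp/real-dir" "$tmp/a-link"

for path in "$tmp/real-dir" "$tmp/a-link"; do
    if [ -L "$path" ]; then
        echo "$(basename "$path"): symlink -> $(readlink "$path")"
    else
        echo "$(basename "$path"): real directory"
    fi
done
```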

[stack@undercloud root]$ sudo ls  /etc/puppet/modules/corosync/
CHANGELOG.md  checksums.json  CONTRIBUTING.md  Gemfile  lib  LICENSE  manifests  metadata.json  Rakefile  README.md  spec  templates  tests
[stack@undercloud root]$ sudo ls  /etc/puppet/modules/memcached/
checksums.json  Gemfile  lib  LICENSE  manifests  metadata.json  Rakefile  README-DEVELOPER  README.md  spec  templates  tests

Not quite what I'm after. Can you run these exact commands:

$ ls -l /etc/puppet/modules | grep corosync
$ ls -l /etc/puppet/modules | grep memcached

And post the output?

[stack@undercloud root]$ ls -l /etc/puppet/modules | grep corosync
drwxrwxrwx. 7 root root 225 Jun 29  2016 corosync
[stack@undercloud root]$ ls -l /etc/puppet/modules | grep memcached
drwxrwxrwx. 7 root root 206 Jun  7  2015 memcached

Okay, that's probably why the install failed. Usually the installer creates a set of symlinks to the OpenStack puppet modules (which are usually located in /usr/share/openstack-puppet/modules), but in your case it looks like the actual modules are there instead.

Can you run the following commands so we can work out where those modules came from:

$ sudo rpm -qf /etc/puppet/modules/corosync
$ sudo rpm -qf /etc/puppet/modules/memcached

[stack@undercloud root]$ sudo rpm -qf /etc/puppet/modules/corosync
file /etc/puppet/modules/corosync is not owned by any package
[stack@undercloud root]$ sudo rpm -qf /etc/puppet/modules/memcached
file /etc/puppet/modules/memcached is not owned by any package

Sorry, I meant to respond to you directly, but it seems like I've posted out of the chain. See my response below.

Okay, let's move those modules out:

$ sudo mkdir /root/tmp-modules
$ sudo mv /etc/puppet/modules/corosync /root/tmp-modules/.
$ sudo mv /etc/puppet/modules/memcached /root/tmp-modules/.

Then retry running:

$ openstack undercloud install
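The move-aside step above can be sketched end to end. The demo below reproduces it in a scratch tree; on the real undercloud the paths would be /etc/puppet/modules, /usr/share/openstack-puppet/modules, and /root/tmp-modules, and the installer itself is what recreates the symlinks.

```shell
# Demo in a scratch tree: move a stray real module directory aside and
# replace it with a symlink to the packaged module, as the installer expects.
tmp=$(mktemp -d)
mkdir -p "$tmp/etc-modules/corosync" "$tmp/packaged/corosync" "$tmp/backup"

for mod in corosync; do
    src="$tmp/etc-modules/$mod"
    # Only move it aside if it is a real directory, not already a symlink
    if [ -d "$src" ] && [ ! -L "$src" ]; then
        mv "$src" "$tmp/backup/"
        ln -s "$tmp/packaged/$mod" "$src"
    fi
done

ls -l "$tmp/etc-modules"
```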
