RHEL OpenStack 6: After node reboot, communication is lost between controller and compute node

Solution Unverified

Issue

In one of our larger OSP6 development/test labs, node1 is the controller and the higher-numbered nodes (nodeX with x>1) are compute nodes.
We could not create volumes on node15 and node16, so those nodes were rebooted, after which volumes could be created again.
However, whenever a node is rebooted for any reason (the above is just one example), communication between the controller and that node is lost.

On controller:

[root@node1 ~(keystone_admin)]# cinder service-list | egrep "node15|node16"
|  cinder-volume   | node15@hdd |   cZone01   | enabled  |  down | 2015-03-17T08:00:20.000000 |       None      |
|  cinder-volume   | node16@hdd |   cZone02   | enabled  |  down | 2015-03-17T08:00:49.000000 |       None      |

[root@node1 ~(keystone_admin)]# nova service-list | egrep "node15|node16"
| 29 | nova-compute     | node15 | cZone01     | enabled | down  | 2015-03-17T08:00:23.000000 | -               |
| 32 | nova-compute     | node16 | cZone02     | enabled | down  | 2015-03-17T08:00:51.000000 | -               |
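
A quick way to confirm that the heartbeats are genuinely not arriving (rather than the listing simply being stale) is to watch whether the Updated_at column keeps advancing. This is a suggested check, not part of the original report:

[root@node1 ~(keystone_admin)]# watch -n 10 'nova service-list | egrep "node15|node16"'
[root@node1 ~(keystone_admin)]# watch -n 10 'cinder service-list | egrep "node15|node16"'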

On compute nodes:

  • node15
[root@node15 ~]# systemctl status openstack-cinder-volume target openstack-nova-compute
openstack-cinder-volume.service - OpenStack Cinder Volume Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; enabled)
   Active: active (running) since Tue 2015-03-17 09:25:09 CET; 36min ago
 Main PID: 2343 (cinder-volume)
   CGroup: /system.slice/openstack-cinder-volume.service
           ├─2343 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/...
           └─3442 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/...

Mar 17 09:25:10 node15 cinder-volume[2343]: 2015-03-17 09:25:10.634 2343...s
Mar 17 09:25:10 node15 cinder-volume[2343]: 2015-03-17 09:25:10.649 2343...2
Mar 17 09:25:10 node15 cinder-volume[2343]: 2015-03-17 09:25:10.651 3442...)
Mar 17 09:25:10 node15 cinder-volume[2343]: 2015-03-17 09:25:10.653 3442...)
Mar 17 09:25:10 node15 sudo[3452]: cinder : TTY=unknown ; PWD=/ ; USER=...es
Mar 17 09:25:10 node15 sudo[3565]: cinder : TTY=unknown ; PWD=/ ; USER=...ix
Mar 17 09:25:10 node15 sudo[3572]: cinder : TTY=unknown ; PWD=/ ; USER=...es
Mar 17 09:25:11 node15 cinder-volume[2343]: 2015-03-17 09:25:11.371 3442...s
Mar 17 09:25:11 node15 sudo[3623]: cinder : TTY=unknown ; PWD=/ ; USER=...es
Mar 17 09:25:11 node15 cinder-volume[2343]: 2015-03-17 09:25:11.475 3442...2

target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled)
   Active: active (exited) since Tue 2015-03-17 09:25:10 CET; 36min ago
  Process: 2347 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
 Main PID: 2347 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/target.service

Mar 17 09:25:09 node15 systemd[1]: Starting Restore LIO kernel target c.....
Mar 17 09:25:10 node15 target[2347]: No saved config file at /etc/target...g
Mar 17 09:25:10 node15 systemd[1]: Started Restore LIO kernel target co...n.

openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled)
   Active: activating (start) since Tue 2015-03-17 10:01:14 CET; 52s ago
 Main PID: 8138 (nova-compute)
   CGroup: /system.slice/openstack-nova-compute.service
           └─8138 /usr/bin/python /usr/bin/nova-compute

Mar 17 10:01:14 node15 systemd[1]: Starting OpenStack Nova Compute Server...
Hint: Some lines were ellipsized, use -l to show in full.
  • node16
[root@node16 ~]#  systemctl status openstack-cinder-volume target openstack-nova-compute
openstack-cinder-volume.service - OpenStack Cinder Volume Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; enabled)
   Active: active (running) since Tue 2015-03-17 09:25:40 CET; 37min ago
 Main PID: 2156 (cinder-volume)
   CGroup: /system.slice/openstack-cinder-volume.service
           ├─2156 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/...
           └─3216 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/...

Mar 17 09:25:40 node16 cinder-volume[2156]: 2015-03-17 09:25:40.848 2156...s
Mar 17 09:25:40 node16 cinder-volume[2156]: 2015-03-17 09:25:40.864 2156...6
Mar 17 09:25:40 node16 cinder-volume[2156]: 2015-03-17 09:25:40.866 3216...)
Mar 17 09:25:40 node16 cinder-volume[2156]: 2015-03-17 09:25:40.868 3216...)
Mar 17 09:25:40 node16 sudo[3226]: cinder : TTY=unknown ; PWD=/ ; USER=...es
Mar 17 09:25:40 node16 sudo[3368]: cinder : TTY=unknown ; PWD=/ ; USER=...ix
Mar 17 09:25:41 node16 sudo[3378]: cinder : TTY=unknown ; PWD=/ ; USER=...es
Mar 17 09:25:41 node16 cinder-volume[2156]: 2015-03-17 09:25:41.545 3216...s
Mar 17 09:25:41 node16 sudo[3425]: cinder : TTY=unknown ; PWD=/ ; USER=...es
Mar 17 09:25:41 node16 cinder-volume[2156]: 2015-03-17 09:25:41.635 3216...2

target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled)
   Active: active (exited) since Tue 2015-03-17 09:25:40 CET; 37min ago
  Process: 2157 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
 Main PID: 2157 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/target.service

Mar 17 09:25:40 node16 systemd[1]: Starting Restore LIO kernel target c.....
Mar 17 09:25:40 node16 target[2157]: No saved config file at /etc/target...g
Mar 17 09:25:40 node16 systemd[1]: Started Restore LIO kernel target co...n.

openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled)
   Active: activating (start) since Tue 2015-03-17 10:03:14 CET; 7s ago
 Main PID: 7891 (nova-compute)
   CGroup: /system.slice/openstack-nova-compute.service
           └─7891 /usr/bin/python /usr/bin/nova-compute

Hint: Some lines were ellipsized, use -l to show in full.
So you can see that nova, cinder, and the nodes themselves are OK, but the controller for whatever reason cannot get their status right.
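
Since the services on the compute nodes report themselves as active, a reasonable next diagnostic (a suggested sketch, not part of the original report, assuming the controller hosts RabbitMQ on its default port 5672 and the default RHEL OSP log locations) is to verify from the compute node that the connection to the AMQP bus is still up, look for reconnect errors in the agent logs, and restart the agents so they re-register with the controller:

[root@node15 ~]# ss -tnp | grep 5672
[root@node15 ~]# grep -iE 'amqp|rabbit' /var/log/nova/nova-compute.log | tail
[root@node15 ~]# grep -iE 'amqp|rabbit' /var/log/cinder/volume.log | tail
[root@node15 ~]# systemctl restart openstack-nova-compute openstack-cinder-volume

Then re-check the service state from the controller:

[root@node1 ~(keystone_admin)]# nova service-list | egrep "node15|node16"
[root@node1 ~(keystone_admin)]# cinder service-list | egrep "node15|node16"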

Environment

  • Red Hat OpenStack 6.0
