OpenStack slow stack creation due to large database tables

Solution Verified

Environment

  • Red Hat OpenStack Platform 16.2, 17.1

Issue

Stack creation takes longer than usual, and the rowsflush log file of one of the services contains errors.

Resolution

  1. Manually run the jobs that purge and archive database data, with verbosity enabled.
# if needed to clean NOVA database
# Purge JOB
[root@overcloud-controller1 ~]# podman exec -u nova -it nova_api_cron nova-manage db purge --before `date --date='today - 14 days' +\%D` --verbose --all-cells
# Archive JOB
[root@overcloud-controller1 ~]# podman exec -u nova -it nova_api_cron nova-manage db archive_deleted_rows --verbose --all-cells --until-complete

# if needed to clean CINDER database
# Purge JOB
[root@overcloud-controller1 ~]# podman exec -u cinder -it cinder_api_cron cinder-manage --debug --watch-log-file db purge 30

# if needed to clean HEAT database
# Purge JOB
[root@overcloud-controller1 ~]# podman exec -u heat -it heat_api_cron heat-manage --debug --watch-log-file purge_deleted 30
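
The `--before` cutoff in the Nova purge command above expands to a date in MM/DD/YY form. A quick way to see the exact value that will be passed (the backslash before `%` is only required when the line lives in a crontab entry; it is harmless in an interactive shell):

```shell
# Show the cutoff the purge job will use: today minus 14 days,
# formatted as MM/DD/YY (%D). The escaped form (+\%D) in the command
# above is only needed inside a crontab.
date --date='today - 14 days' +%D
```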
  2. Check the database sizes again; the tables with the "shadow_" prefix should have shrunk. If nothing changed, open a support case: the pace of the cleanup may need to be tuned to the environment.
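
One way to verify that the archive job actually moved rows is to count the rows remaining in the Nova shadow tables. This is a hedged sketch: it only prints the query, which you then feed to the galera container's mysql client exactly as in the Diagnostic Steps.

```shell
# Build a verification query: row counts for the nova shadow_* tables,
# where archive_deleted_rows parks archived rows. Pipe the printed
# statement into "mysql -e" inside the galera container, as shown in
# the Diagnostic Steps.
query="select table_name, table_rows from information_schema.tables \
where table_schema = 'nova' and table_name like 'shadow\\_%';"
echo "$query"
```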

Root Cause

An error during the archive job execution, or very frequent stack create/delete operations, contributes to the high number of rows to archive.

Diagnostic Steps

  1. Check the log files for errors. The services with cleanup jobs are Cinder, Heat, and Nova; Heat does not generate logs for them.
[root@overcloud-controller-0 ~]# egrep 'WARN|ERRO' /var/log/containers/nova/nova-rows*
[root@overcloud-controller-0 ~]# egrep 'WARN|ERRO' /var/log/containers/cinder/cinder-rowsflush.log 
  2. Check the database sizes.
[root@overcloud-controller-0 ~]# podman exec -u root -it $(podman ps | grep -i galera | awk '{print $12}') mysql -e "SELECT table_schema 'database', concat( round( sum( data_length + index_length ) / ( 1024 *1024 ) , 2 ) , 'M' ) size FROM information_schema.TABLES WHERE ENGINE IN ('MyISAM', 'InnoDB') GROUP BY table_schema ORDER BY sum( data_length + index_length ) ASC;"
  3. Check the table sizes and whether there are tables with the "shadow_" prefix.
#NOVA
podman exec -u root -it $(podman ps | grep -i galera | awk '{print $12}') mysql -e "select table_schema, table_name, round(sum(data_length) / power(1024, 2), 2) as used_table_mb, round(sum(index_length) / power(1024, 2), 2) as used_index_mb, round(sum(data_length + index_length + data_free) / power(1024, 2), 2) as allocated_mb from information_schema.tables where table_schema = 'nova' and table_type = 'BASE TABLE' group by table_schema, table_name order by used_table_mb desc;"

#CINDER
podman exec -u root -it $(podman ps | grep -i galera | awk '{print $12}') mysql -e "select table_schema, table_name, round(sum(data_length) / power(1024, 2), 2) as used_table_mb, round(sum(index_length) / power(1024, 2), 2) as used_index_mb, round(sum(data_length + index_length + data_free) / power(1024, 2), 2) as allocated_mb from information_schema.tables where table_schema = 'cinder' and table_type = 'BASE TABLE' group by table_schema, table_name order by used_table_mb desc;"

#HEAT
podman exec -u root -it $(podman ps | grep -i galera | awk '{print $12}') mysql -e "select table_schema, table_name, round(sum(data_length) / power(1024, 2), 2) as used_table_mb, round(sum(index_length) / power(1024, 2), 2) as used_index_mb, round(sum(data_length + index_length + data_free) / power(1024, 2), 2) as allocated_mb from information_schema.tables where table_schema = 'heat' and table_type = 'BASE TABLE' group by table_schema, table_name order by used_table_mb desc;"
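
The three queries above differ only in the schema name. A hedged sketch that generates each statement in a loop (print only: pipe each generated statement into the galera container's mysql client exactly as in the commands above):

```shell
# Emit the same table-size query once per service schema instead of
# repeating it three times. Each printed statement is fed to the galera
# container's mysql client as shown above.
for schema in nova cinder heat; do
  printf "select table_schema, table_name, round(sum(data_length) / power(1024, 2), 2) as used_table_mb, round(sum(index_length) / power(1024, 2), 2) as used_index_mb, round(sum(data_length + index_length + data_free) / power(1024, 2), 2) as allocated_mb from information_schema.tables where table_schema = '%s' and table_type = 'BASE TABLE' group by table_schema, table_name order by used_table_mb desc;\n" "$schema"
done
```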

This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.
