`nova-manage db archive_deleted_rows` cron job fails with `DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, u"Duplicate entry '2' for key 'PRIMARY'")`
Issue
Deleted instance records are not moved to the shadow tables by the regular cron job:
# cat /var/spool/cron/nova
# HEADER: This file was autogenerated at 2017-09-05 11:15:58 +0000 by puppet.
# HEADER: While it can still be managed manually, it is definitely not recommended.
# HEADER: Note particularly that the comments starting with 'Puppet Name' should
# HEADER: not be deleted, as doing so could cause duplicate cron jobs.
# Puppet Name: nova-manage db archive_deleted_rows
PATH=/bin:/usr/bin:/usr/sbin
SHELL=/bin/sh
1 0 * * * nova-manage db archive_deleted_rows --max_rows 100 >>/var/log/nova/nova-rowsflush.log 2>&1
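The cron job redirects its output to /var/log/nova/nova-rowsflush.log (see the crontab entry above), so checking the tail of that file is a quick way to confirm whether the nightly archive run is hitting this error; a minimal check, assuming the log path from the crontab:
# tail -n 20 /var/log/nova/nova-rowsflush.log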
Running the command manually fails with a DBDuplicateEntry exception:
# su - nova -s /bin/bash -c "nova-manage --verbose db archive_deleted_rows --max_rows 100 --until-complete"
Option "verbose" from group "DEFAULT" is deprecated for removal. Its value may be silently ignored in the future.
An error has occurred:
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1606, in main
ret = fn(*fn_args, **fn_kwargs)
File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 681, in archive_deleted_rows
run = db.archive_deleted_rows(max_rows)
File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 2040, in archive_deleted_rows
return IMPL.archive_deleted_rows(max_rows=max_rows)
File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 6564, in archive_deleted_rows
tablename, max_rows=max_rows - total_rows_archived)
File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 6513, in _archive_deleted_rows_for_table
conn.execute(insert)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
return meth(self, multiparams, params)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
compiled_sql, distilled_params
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
context)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1337, in _handle_dbapi_exception
util.raise_from_cause(newraise, exc_info)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 200, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
context)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
cursor.execute(statement, parameters)
File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute
result = self._query(query)
File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 322, in _query
conn.query(q)
File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 841, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1029, in _read_query_result
result.read()
File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1312, in read
first_packet = self.connection._read_packet()
File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 991, in _read_packet
packet.check_error()
File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 393, in check_error
err.raise_mysql_exception(self._data)
File "/usr/lib/python2.7/site-packages/pymysql/err.py", line 107, in raise_mysql_exception
raise errorclass(errno, errval)
DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, u"Duplicate entry '2' for key 'PRIMARY'") [SQL: u'INSERT INTO shadow_virtual_interfaces (created_at, updated_at, deleted_at, deleted, id, address, network_id, instance_uuid, uuid, tag) SELECT virtual_interfaces.created_at, virtual_interfaces.updated_at, virtual_interfaces.deleted_at, virtual_interfaces.deleted, virtual_interfaces.id, virtual_interfaces.address, virtual_interfaces.network_id, virtual_interfaces.instance_uuid, virtual_interfaces.uuid, virtual_interfaces.tag \nFROM virtual_interfaces \nWHERE virtual_interfaces.deleted != %(deleted_1)s ORDER BY virtual_interfaces.id \n LIMIT %(param_1)s'] [parameters: {u'param_1': 11, u'deleted_1': 0}]
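The INSERT ... SELECT shown in the traceback copies soft-deleted rows from virtual_interfaces into shadow_virtual_interfaces, and MySQL error 1062 means a row with the same primary key (here id 2) already exists in the shadow table, so the whole batch is rejected and archiving stops. As a diagnostic sketch only, assuming direct MySQL access on the controller and that the Nova database is named nova (the table and column names are taken from the SQL statement above), the conflicting ids can be listed with:
# mysql -e "SELECT v.id, v.uuid, v.deleted_at FROM nova.virtual_interfaces v JOIN nova.shadow_virtual_interfaces s ON s.id = v.id WHERE v.deleted != 0;"
Any ids returned are soft-deleted rows whose primary keys are already present in the shadow table, and they will keep blocking archive_deleted_rows until the duplicate shadow entries are reconciled.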
Environment
Red Hat OpenStack Platform 9.0
Red Hat OpenStack Platform 10.0
Red Hat OpenStack Platform 11.0