Placing a Gluster master storage domain into maintenance mode fails

Solution In Progress

Issue

  • Deactivating the master storage domain and moving the master role to another storage domain fails.

  • Putting the current master storage domain into maintenance mode fails, and the same storage domain is reconstructed as the master.

  • Trying to put the master storage domain into maintenance mode fails.

  • The vdsm logs on the SPM host contain:

jsonrpc.Executor/0::ERROR::2016-07-21 05:35:39,989::sp::864::Storage.StoragePool::(masterMigrate) migration to new master failed
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sp.py", line 853, in masterMigrate
    exclude=('./lost+found',))
  File "/usr/share/vdsm/storage/fileUtils.py", line 68, in tarCopy
    raise TarCopyFailed(tsrc.returncode, tdst.returncode, out, err)
TarCopyFailed: (1, 0, '', '')
  • Further debugging revealed that the `tar` command was reporting "file changed as we read it" for files associated with active tasks on the SPM host.
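The behavior described above can be reproduced outside of vdsm. The sketch below (illustrative paths only; GNU tar with `--checkpoint-action` support is assumed) uses a checkpoint action to append to a file while `tar` is archiving it, which triggers the same "file changed as we read it" warning and exit status 1 seen in `TarCopyFailed: (1, 0, '', '')`:

```shell
# Reproduce tar's "file changed as we read it" failure: a checkpoint
# action modifies the file while tar is still reading it, mimicking
# files that active SPM tasks keep updating during masterMigrate.
workdir=$(mktemp -d)
archive=$(mktemp)

# A file large enough that tar hits several checkpoints while reading it.
dd if=/dev/zero of="$workdir/task.job" bs=1M count=5 2>/dev/null

# Helper script run at each checkpoint: append to the file being archived.
cat > "$workdir/poke.sh" <<EOF
#!/bin/sh
echo x >> "$workdir/task.job"
EOF
chmod +x "$workdir/poke.sh"

tar --checkpoint=1 --checkpoint-action=exec="$workdir/poke.sh" \
    -cf "$archive" -C "$workdir" task.job
rc=$?
echo "tar exit code: $rc"   # 1 = "Some files differ" (file changed while read)

rm -rf "$workdir" "$archive"
```

This matches the traceback: `tarCopy` in `fileUtils.py` raises `TarCopyFailed` when the source `tar` process exits non-zero, which is why the migration aborts while tasks are still writing to their files on the master domain.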

Environment

  • Red Hat Enterprise Virtualization (RHEV) 3.6
  • Red Hat Enterprise Virtualization Hosts (RHEV-H) 7.2
