RHEV 3.0 - import problem

Has anybody seen this? I have an exported VM - the export completed cleanly as far as I can tell - sitting on a RHEL NFS server. I detached the export domain and attached it to another RHEV instance, also running 3.0 - just like the docs tell you. But for some reason, network utilization on the SPM host went to 98% and just sits there - it's been like that for 10 hours - and no space is being consumed on the destination storage domain, so I'm pretty sure nothing is happening. The log complains about latency to the NFS host, but I can log in to the NFS host and snoop around - the network is fine, I'm sure of it.
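
For what it's worth, this is roughly the kind of check I've been running from the SPM host to back that up - a quick Python sketch, nothing more. The mount point is a placeholder (vdsm mounts NFS domains under /rhev/data-center/mnt/ on our hosts, but substitute whatever yours actually is):

    import os
    import time

    # Placeholder - substitute the actual mount point of the export domain.
    MOUNT = "/rhev/data-center/mnt/nfs-host:_export"

    # Time a handful of metadata operations against the NFS mount.
    # Consistently slow stat() calls here would support the latency
    # complaints in the log; fast ones suggest the mount is healthy.
    samples = []
    for _ in range(10):
        start = time.time()
        os.stat(MOUNT)
        samples.append(time.time() - start)

    print("stat latency: min %.4fs max %.4fs avg %.4fs" % (
        min(samples), max(samples), sum(samples) / len(samples)))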

An import like this should only take 3 or 4 hours at MOST - a similar-sized VM took about 2 hours. The hypervisor isn't locked up either; it's running, I can log in to it, etc.
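
And this is the sort of thing I've been using to convince myself that nothing is landing on the destination domain - again just a sketch, with the data domain's images directory as a placeholder:

    import os
    import time

    # Placeholder - substitute the destination data domain's images directory.
    DOMAIN = "/rhev/data-center/mnt/dest-host:_data/domain-uuid/images"

    def used_bytes(root):
        # Sum the size of every file under root.
        total = 0
        for dirpath, _, files in os.walk(root):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    pass  # a file may vanish mid-walk
        return total

    # Two samples a minute apart; if the import were copying data,
    # the second number should be visibly larger than the first.
    before = used_bytes(DOMAIN)
    time.sleep(60)
    after = used_bytes(DOMAIN)
    print("grew by %d bytes in 60s" % (after - before))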

What do I do?

1.  Put the Hypervisor node into Maintenance, then power-cycle it from the menu? Will it even let me?

2.  Shut down NFS services and maybe let the import time out?

As far as I can see there is no option to STOP the import - which is a real gap, I think.
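
The closest thing I've found to visibility at that level is asking vdsm directly on the SPM host - a rough sketch, and I'm assuming the import shows up as an SPM task here (vdsClient also has a stopTask verb, but I'm not about to fire that at a live SPM without support telling me to):

    import subprocess

    # Ask vdsm on the local host ("-s 0" = SSL to localhost) for the
    # status of all running tasks; a stuck import should show up here
    # with a task UUID and a progress figure.
    out = subprocess.Popen(
        ["vdsClient", "-s", "0", "getAllTasksStatuses"],
        stdout=subprocess.PIPE).communicate()[0]
    print(out)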

Suggestions?

-joe

Responses

I think this is an issue that requires investigation, which can be rather hard to do within this medium. Can you open a support case, with full log collector output attached?
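
(If it helps, the collector I mean is the rhevm-log-collector tool on the RHEV-M machine - a minimal sketch of driving it from Python, assuming it's installed there; it gathers engine and host logs into a single archive you can attach to the case:)

    import subprocess

    # Run the RHEV log collector; it prompts interactively for
    # credentials and writes an archive suitable for attaching
    # to a support case.
    subprocess.call(["rhevm-log-collector"])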

I did open a case, thanks Dan - I was just hoping maybe someone had seen it already.

Thanks anyway!

-joe

Hi Joe,

I really hate to send you off like this, but in order to understand what is happening in your setup, and of course to fix it, it will take a lot of digging through logs to map out the timeline of events that led to the current state of things. This is why a support case is much better suited to this than a group forum. Had this been a common and known issue, I'd definitely have tried to provide a solution right away, or pointed you to a KB article.

Unfortunately, what you describe sounds unique to me - in 4 years of dealing with RHEV I've yet to hear of such an issue, which makes it even more important that you work closely with support to resolve it.