RHEV 3.0: NFS ISO storage performance issues

Latest response

I install my VMs via a unique (i.e. one per VM) small bootable image (around 40 MB).
This works without any issues.

However, trying several at once causes issues.
First, it causes VMs to stop responding:
VM <VM name> is not responding.

Trying to mount a different ISO to a different VM when this has happened gives me this error:
Failed to change disk in <VM name>.

At this point I can't install any machines at all.

To solve this I have to:
Shut down the VM(s) that reported "not responding".

The most annoying part is that when this has happened, the affected VM(s) are really hard to stop (from RHEVM; I haven't tried killing the process on the RHEVH).
It's like the whole VM freezes (the console doesn't work, and the machine stays in the "powering down" state for a long time).
When the VM finally goes to the "down" state everything returns to normal, i.e. I can mount ISOs again.

Has anyone else experienced anything like this?

Responses

Note:
The server that exports the ISO NFS share is also the same server that runs RHEVM.
Could this be the cause of this behavior?
Is this setup not recommended?

Hi Pär, please accept our apologies for not getting to this since the holidays. We'll get a response up ASAP. Otherwise, please let us know if you've already found some answers on your own.

Hi, 

If you are creating multiple VMs of the same build, then using a template would cut down on the creation time of the virtual machines. Also, when you are trying to install multiple VMs at the same time, what are the CPU and memory utilization of the hypervisor?

Well, the creation time isn't the issue here.
I haven't looked at the hypervisor's resources, but I would be shocked if that's the issue. It really feels like the ISO storage somehow.
Also, a way to locally mount ISOs located on the hypervisor would be a great feature.

No worries.
And no, no answers found yet.

-I haven't looked at the hypervisor's resources, but I would be shocked if that's the issue. It really feels like the ISO storage somehow.

I would like to see the performance of the network interface on the hypervisor that is connected to the ISO domain.

-Also, a way to locally mount ISO's located on the hypervisor would be a great feature.

Currently, local storage is not an option for RHEV Hypervisors; the standard NFS mount is the only way possible.
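For reference, a hedged sketch of what the export usually looks like on the NFS server side (the path /exports/iso is an assumption, not from this thread; RHEV expects the exported directory to be owned by UID/GID 36:36, i.e. vdsm:kvm, before it can be attached as an ISO domain):

```shell
# Show the shape of a typical /etc/exports entry for an ISO domain
# (the path and options here are illustrative, not RHEV-mandated values).
cat <<'EOF'
/exports/iso    *(rw)
EOF
```

After editing /etc/exports on the server you would re-export with `exportfs -r` and make sure the directory is owned by 36:36 (`chown -R 36:36 /exports/iso`).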

-I would like to see the performance of the network interface on the hypervisor that is connected to the ISO domain.

OK. Next time I encounter this issue I will take a closer look at resource utilization.
Are there any other things I should consider looking at?

-Currently, local storage is not an option for RHEV Hypervisors; the standard NFS mount is the only way possible.

Yes, I know; I'm just saying that local ISO storage on RHEV Hypervisors would be a nice feature.

-OK. Next time I encounter this issue I will take a closer look at resource utilization.
-Are there any other things I should consider looking at?

Since you stated that your VMs showed a "not responding" status, and given the behavior with the ISO domain, I would check the network connection to the storage in both cases. First, look for any dropped packets on the interfaces, then check the /var/log/sa/sar* data for the network load being placed on your interfaces. You can add the rx/tx figures together for a rough estimate of the total load at any given time, then compare that load against any dropped packets.
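A rough sketch of how that check might look on the hypervisor (the interface name eth0 and the sysstat file date are assumptions; substitute the NIC that carries the ISO domain traffic):

```shell
# 1) Live interface counters, including drops:
#      ip -s link show eth0                # substitute your storage NIC
# 2) Historical load and drops from sysstat (files live under /var/log/sa/):
#      sar -n DEV  -f /var/log/sa/sa15    # rxkB/s, txkB/s per interface
#      sar -n EDEV -f /var/log/sa/sa15    # rxdrop/s, txdrop/s per interface

# 3) Rough total load = rx + tx. Demonstrated here against a canned
#    sar-style line so the pipeline is visible; in practice you would
#    pipe live output such as `sar -n DEV 1 3` instead.
printf '12:00:01 eth0 10.0 20.0 100.5 200.5\n' |
  awk '$2 == "eth0" { print $5 + $6, "kB/s total" }'
```

The awk step sums the rx and tx columns for one interface, which gives the rough total-load estimate described above.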