Understanding KVM in RHEL 6


I need some help in understanding how KVM works in RHEL 6.

 

I recently installed RHEL 6 and noticed that two processes are created for each VM. This is different from what I'm used to in RHEL 5.5, where there was only one process per VM. Moreover, when I "pin" my VM to a specific CPU core, only one of the processes becomes "pinned" - the other one "travels" across the free cores... Can somebody explain to me what this 2nd process is about? And why doesn't it follow the "virsh vcpupin" configuration?

 

Another question is how I can "reserve CPU resources" for a specific VM. In other words, how can I be sure that a specific VM instance gets at least 1 or 2 CPU cores _exclusively_, and that no other VM or application on my host machine can "starve" it? I thought this could be done using cgroups, but I can't find proper documentation on it...

 

Thank you in advance,

    Alex

Responses

Hi Alex,

 

Let me try to help you :)

 

>>

I recently installed RHEL 6 and noticed that in RHEL 6 two processes are created for each VM

>>

 

Can you please confirm whether the VM in question has one vCPU defined in its configuration file?

 

In KVM, there is one process created for each VM. The virtual CPUs are threads spawned inside that qemu process. So even if you have a VM with one vCPU defined, you could see the same behaviour when your process listing shows threads.
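
For example (a quick sketch - the guest name 'rhel6vm' here is just illustrative), you can compare the per-process and per-thread views like this:

   ps aux | grep qemu-kvm     # one line per qemu-kvm process (one per VM)
   ps -eLf | grep qemu-kvm    # one line per thread (vCPU and I/O threads)
   virsh vcpuinfo rhel6vm     # shows which host CPU each vCPU currently runs on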

 

>>

Another question is how can I "reserve CPU resources" for the specific VM

>>

 

This is possible with the 'virsh vcpupin' command:

 

 

vcpupin domain-id vcpu cpulist
       Pin domain VCPUs to host physical CPUs. The vcpu number must be provided, and cpulist is a comma-separated list of physical CPU numbers.
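
For example, to pin a two-vCPU guest onto host CPUs 2 and 3 (the guest name and CPU numbers are illustrative):

   virsh vcpupin rhel6vm 0 2    # pin vCPU 0 to physical CPU 2
   virsh vcpupin rhel6vm 1 3    # pin vCPU 1 to physical CPU 3

You can verify the result afterwards with 'virsh vcpuinfo rhel6vm'.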

 

 

Hope this helps.

 

Well, it's very weird... A week ago, when I initially configured my setup, I actually saw two instances of the qemu process for each VM. But today I booted the same server again, started two VMs, and I can see one qemu process for each VM... According to /var/log/yum.log, no updates were applied to the system in this period... Very weird... But I guess let's forget about this...

 

Regarding "virsh vcpupin" - as far as I understand it only ensures that specific VM runs on specific cores. However it provides no means to ensure that other processes (on host) don't use these cores. 

 

If I run some network-intensive task in my "pinned" VM (for example netperf), I see that not only does the core that I "pinned" my VM to become 100% busy, but other cores become busy too. The exact number differs drastically depending on the specific VM's network configuration. In "bridged network" mode all three other cores become occupied at 20-30% on average. In "PCI passthrough" mode only one other core becomes occupied at approx 15%.

I believe this is due to the "host side" of the networking stack and/or the hypervisor scheduler. But I can't find any means to control which core this "host side" is running on.

 

So I wonder, even if I "pin" each VM to its own core, how can I ensure that "host side" of some other VM (or some other process on host) doesn't "steal" CPU resources from my VM?

 

Weren't "cgroups" developed to solve this specific problem? Unfortunately, besides a brief mention in release notes, I can't find any documentation on how to configure cgroups with KVM.

I believe the docs for cgroups are here: http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/index.html under RHEL 6 resource management.
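
Roughly, the idea is something like this (a sketch only, assuming the default RHEL 6 /cgroup mount set up by the cgconfig service and a single NUMA node; the cgroup name and core numbers are illustrative):

   # create a cpuset for the guest on cores 2-3
   mkdir /cgroup/cpuset/vm1
   echo 2-3 > /cgroup/cpuset/vm1/cpuset.cpus
   echo 0 > /cgroup/cpuset/vm1/cpuset.mems
   # move each qemu-kvm thread into it (TIDs from 'ps -eLf'; this simple
   # pattern matches every qemu-kvm thread - narrow it down per guest)
   for tid in $(ps -eLf | awk '/qemu-kvm/ {print $4}'); do
       echo $tid > /cgroup/cpuset/vm1/tasks
   done

Note that this only confines the guest to those cores - to keep everything else off them, you would also have to confine the other tasks to the remaining cores (or boot with isolcpus).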

 

:)

Hey,

 

As I mentioned earlier, a VM is the same as a Linux process in the world of KVM.. so you have the option to pin it to physical CPUs as you wish..

 

 

With that in mind, you've got different options:

 

*) isolcpus + taskset ..

 

Refer to the kbase article below for that:

https://access.redhat.com/kb/docs/DOC-47334

 

I mentioned 'isolcpus' mainly because you want to dedicate CPUs to the VM and don't want to share them with other processes.
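
For example (a sketch - the core numbers, kernel version, and PID are illustrative): boot the host with the isolated cores on the kernel line in /boot/grub/grub.conf, then place a process there explicitly:

   kernel /vmlinuz-2.6.32-... ro root=... isolcpus=2,3   # grub.conf kernel line
   taskset -pc 2,3 <qemu-kvm-pid>   # set affinity of an already-running process
   taskset -c 2,3 <command>         # or launch a command on those cores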

 

So that brings up one more sub-option:

 isolcpus + vcpupin .. :)
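
As a sketch (guest name and core numbers again illustrative): boot with isolcpus=2,3 as above, then pin the guest's vCPUs onto the isolated cores:

   virsh vcpupin rhel6vm 0 2
   virsh vcpupin rhel6vm 1 3

Since the scheduler keeps other tasks off the isolated cores, and the vCPUs are explicitly bound there, the guest effectively gets those cores to itself.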

 

 

 

 

*) 'cgroups' ..

 

http://berrange.com/posts/2009/12/03/using-cgroups-with-libvirt-and-lxckvm-guests-in-fedora-12/

 

The URL above has enough information on how to do it..
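
For a taste of what it looks like on RHEL 6 (a sketch, assuming the default per-controller mounts under /cgroup; the guest name 'rhel6vm' is illustrative) - libvirt places every running guest in its own cgroup, so you can adjust its CPU weight directly:

   cat /cgroup/cpu/libvirt/qemu/rhel6vm/cpu.shares          # default weight is 1024
   echo 2048 > /cgroup/cpu/libvirt/qemu/rhel6vm/cpu.shares  # double its share

Note that cpu.shares is a relative weight, not an exclusive reservation - for exclusivity you still want cpusets or isolcpus.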

 

 

Once again, I would like to mention that the extra threads you are seeing in 'ps' output belong to the vCPU and I/O threads.. if you don't want to see those threads and would like to avoid confusion, then try the command below :)

 

'ps aux | grep qemu-kvm'

 

Hope this helps..