cgroups & killing processes


Hi all,

I am playing with cgroups right now to test memory limits for a group of processes.
It works very well except for ... killing processes. I mean, if a process tries to allocate too big a chunk of memory
(over its limit), it is killed. Is it possible to configure cgroups to refuse to give the process more memory
but not kill it? What I mean is that allocation calls like malloc() should return an error instead (of course
the code should handle the failure, but that is another topic ...) and the process shouldn't be killed.
Is that possible somehow?
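
For reference, this is roughly the kind of setup I am testing (cgroup v1 memory controller; the group name and the limit are just examples):

    # create a memory cgroup and cap it at 256 MB
    mkdir /sys/fs/cgroup/memory/mygroup
    echo 268435456 > /sys/fs/cgroup/memory/mygroup/memory.limit_in_bytes
    # move the current shell (and anything it starts) into the group
    echo $$ > /sys/fs/cgroup/memory/mygroup/tasks

Any process in that group which grows past the limit gets killed rather than just having its allocation fail.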

Responses

Przemyslaw,

I haven't yet used cgroups for memory restriction, but I plan to in the near future, so I'm very interested in your experiences.

Out of interest, is it the OOM killer that is issuing the kill? What do you see in the logs when the process is killed?

Hi,

It seems that the OOM killer is terminating the process. I can change the cgroup config so that the process isn't killed but hangs instead (until some memory is freed), but that is still far from an ideal solution. I would like only the memory allocation calls to fail.
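
In case it helps, this is roughly the change I made (cgroup v1; the group name is just an example):

    # disable the per-cgroup OOM killer; a task that hits the limit is
    # paused waiting for memory instead of being killed
    echo 1 > /sys/fs/cgroup/memory/mygroup/memory.oom_control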

I suspect the answer is going to be along the lines of disabling memory overcommit and disabling the OOM killer.

There is a Red Hat solution for disabling the OOM killer here:
https://access.redhat.com/solutions/20985
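
I haven't read through that solution yet, but one per-process knob I know of is oom_score_adj, which makes a specific process effectively exempt from the OOM killer (the PID is just a placeholder):

    # -1000 tells the OOM killer never to select this process
    echo -1000 > /proc/<pid>/oom_score_adj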

I think looking into setting the following kernel tunable to option 2 will let you manage these processes better:

vm.overcommit_memory

Some information on the options here:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/s-memory-captun.html

Specifically:
2 - The kernel denies requests for memory equal to or larger than the sum of total available swap and the percentage of physical RAM specified in overcommit_ratio. This setting is best if you want a lesser risk of memory overcommitment.
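
As a sketch, something along these lines should switch the policy over (the ratio is just an example value; put the same settings in /etc/sysctl.conf to make them persistent):

    # refuse allocations beyond swap + overcommit_ratio% of physical RAM
    sysctl -w vm.overcommit_memory=2
    sysctl -w vm.overcommit_ratio=50    # 50 is the default; tune as needed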

Very interested to hear what RH have to say!
