Do not allow a user-level process to consume all memory, crippling the system
Currently, if a user-level process maps memory to the point that the OS is starved for it, Linux begins terminating system-level processes (such as daemons) in a vain attempt to keep running, until the system becomes unusable (for example, killing 'sshd' so the system can no longer be reached remotely).
The kernel should deny memory requests from user-level processes that would exceed a calculated hard limit, thereby preserving memory for OS-level processes. Segmenting the system's memory into a user space and a system space in this way would improve the survivability of a system in the face of a runaway or poorly behaved user-level process.
Responses
Please take a look at pam_limits(8), and more specifically /etc/security/limits.conf.
There is an example directive, "* hard rss 10000", that can be trivially modified to enforce RSS limits on a per-"domain" basis (user, group, etc.).
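As a sketch, entries along these lines could go in /etc/security/limits.conf; the group name "students" and the numeric values here are illustrative, not from the original post:

```
# /etc/security/limits.conf (illustrative entries)
# <domain>   <type>  <item>  <value>
*            hard    rss     10000     # cap resident set size (KB) for all users
@students    hard    as      524288    # cap address space (KB) for group "students"
```

The "as" (address space) item is often more effective than "rss" for containing a runaway allocator, since it bounds virtual memory rather than resident pages.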
The setrlimit() system call allows you to set limits programmatically, and you can set them from user space using ulimit as well.
You can turn off (or limit) memory overcommitment to achieve this. Just search for "overcommit" in the proc(5) manpage. To make the settings persistent across reboots, add them to /etc/sysctl.conf.
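For reference, the relevant sysctl.conf lines might look like this; mode 2 disables overcommit, and the ratio (here 80, an illustrative value) controls how much of physical RAM counts toward the commit limit on top of swap:

```
# /etc/sysctl.conf (illustrative values)
vm.overcommit_memory = 2    # strict accounting: refuse allocations past the limit
vm.overcommit_ratio  = 80   # commit limit = swap + 80% of RAM
```

Apply them immediately with "sysctl -p".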
The downside: This is about *virtual* memory. If your box is plagued with processes allocating lots of memory they never use, users will start whining because they're locked out or can't start new processes even though there's plenty of free RAM.
But it will keep your systems from effectively locking up. Since root is granted a little extra space, you can log in and kill some processes once virtual memory is [almost] fully booked, instead of rebooting the system.