4.3. Per-group Division of CPU and Memory Resources

When a large number of users share a single system, it is practical to provide certain users with more resources than others. Consider the following example: in a hypothetical company, there are three departments — finance, sales, and engineering. Because engineers use the system and its resources for more tasks than the other departments, it is logical that they have more resources available in case all departments are running CPU and memory intensive tasks.
Cgroups provide a way to limit the resources per system group of users. For this example, assume that the following users have been created on the system:
~]$ grep home /etc/passwd
martin:x:250:250::/home/martin:/bin/bash
john:x:251:251::/home/john:/bin/bash
mark:x:252:252::/home/mark:/bin/bash
peter:x:253:253::/home/peter:/bin/bash
jenn:x:254:254::/home/jenn:/bin/bash
mike:x:255:255::/home/mike:/bin/bash
These users have been assigned to the following system groups:
~]$ grep -e "50[678]" /etc/group
finance:x:506:jenn,john
sales:x:507:mark,martin
engineering:x:508:peter,mike
For this example to work properly, you must have the libcgroup package installed. Using the /etc/cgconfig.conf and /etc/cgrules.conf files, you can create a hierarchy and a set of rules which determine the amount of resources for each user. To achieve this, follow the steps in Procedure 4.3, “Per-group CPU and memory resource management”.

Procedure 4.3. Per-group CPU and memory resource management

  1. In the /etc/cgconfig.conf file, configure the following subsystems to be mounted and cgroups to be created:
    mount {
        cpu     = /cgroup/cpu_and_mem;
        cpuacct = /cgroup/cpu_and_mem;
        memory  = /cgroup/cpu_and_mem;
    }
    group finance {
            cpu {
                    cpu.shares="250";
            }
            cpuacct {
                    cpuacct.usage="0";
            }
            memory {
                    memory.limit_in_bytes="2G";
                    memory.memsw.limit_in_bytes="3G";
            }
    }
    group sales {
            cpu {
                    cpu.shares="250";
            }
            cpuacct {
                    cpuacct.usage="0";
            }
            memory {
                    memory.limit_in_bytes="4G";
                    memory.memsw.limit_in_bytes="6G";
            }
    }
    group engineering {
            cpu {
                    cpu.shares="500";
            }
            cpuacct {
                    cpuacct.usage="0";
            }
            memory {
                    memory.limit_in_bytes="8G";
                    memory.memsw.limit_in_bytes="12G";
            }
    }
    When loaded, the above configuration file mounts the cpu, cpuacct, and memory subsystems to a single hierarchy at /cgroup/cpu_and_mem. For more information on these subsystems, refer to Chapter 3, Subsystems and Tunable Parameters. Next, it creates a hierarchy in cpu_and_mem which contains three cgroups: sales, finance, and engineering. In each of these cgroups, custom parameters are set for each subsystem:
    • cpu — the cpu.shares parameter determines the share of CPU resources available to each process in all cgroups. Setting the parameter to 250, 250, and 500 in the finance, sales, and engineering cgroups respectively means that processes started in these groups will split the resources with a 1:1:2 ratio. Note that when a single process is running, it consumes as much CPU as necessary no matter which cgroup it is placed in. The CPU limitation only comes into effect when two or more processes compete for CPU resources.
    • cpuacct — the cpuacct.usage="0" parameter is used to reset values stored in the cpuacct.usage and cpuacct.usage_percpu files. These files report total CPU time (in nanoseconds) consumed by all processes in a cgroup.
    • memory — the memory.limit_in_bytes parameter represents the amount of memory that is made available to all processes within a certain cgroup. In our example, processes started in the finance cgroup have 2 GB of memory available, processes in the sales group have 4 GB of memory available, and processes in the engineering group have 8 GB of memory available. The memory.memsw.limit_in_bytes parameter specifies the total amount of memory and swap space processes may use. Should a process in the finance cgroup hit the 2 GB memory limit, it is allowed to use another 1 GB of swap space, thus totaling the configured 3 GB.
  2. To define the rules which the cgrulesengd daemon uses to move processes to specific cgroups, configure the /etc/cgrules.conf in the following way:
    #<user/group>         <controller(s)>         <cgroup>
    @finance              cpu,cpuacct,memory      finance
    @sales                cpu,cpuacct,memory      sales
    @engineering          cpu,cpuacct,memory      engineering
    The above configuration creates rules that assign a specific system group (for example, @finance) the resource controllers it may use (for example, cpu, cpuacct, memory) and a cgroup (for example, finance) which contains all processes originating from that system group.
    In our example, when the cgrulesengd daemon, started via the service cgred start command, detects a process that is started by a user that belongs to the finance system group (for example, jenn), the PID of that process is automatically written to the /cgroup/cpu_and_mem/finance/tasks file, and the process is subjected to the resource limitations set in the finance cgroup.
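    The cgrules.conf format also supports per-command rules (user:process) and a wildcard for all remaining users. The entries below are hypothetical additions, not part of this example; the "others" cgroup they reference would have to be defined in /etc/cgconfig.conf first:

    ```
    #<user/group>         <controller(s)>         <cgroup>
    # only peter's dd processes go to the engineering cgroup:
    peter:dd              cpu,cpuacct,memory      engineering
    # all processes of any other user go to a hypothetical "others" cgroup:
    *                     cpu,cpuacct,memory      others
    ```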
  3. Start the cgconfig service to create the hierarchy of cgroups and set the needed parameters in all created cgroups:
    ~]# service cgconfig start
    Starting cgconfig service:                                 [  OK  ]
    Start the cgred service to let the cgrulesengd daemon detect any processes started in system groups configured in the /etc/cgrules.conf file:
    ~]# service cgred start
    Starting CGroup Rules Engine Daemon:                       [  OK  ]
    Note that cgred is the name of the service that starts the cgrulesengd daemon.
  4. To make all of the changes above persistent across reboots, configure the cgconfig and cgred services to be started by default:
    ~]# chkconfig cgconfig on
    ~]# chkconfig cgred on
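As a sanity check on the memory figures from step 1, the swap allowance of a cgroup is the difference between memory.memsw.limit_in_bytes and memory.limit_in_bytes; for the finance cgroup that is 3 GB − 2 GB = 1 GB. A minimal shell sketch of the arithmetic, using the values from the configuration above:

```shell
# Swap allowance = memsw limit minus memory limit (finance cgroup values)
mem_limit=$((2 * 1024 * 1024 * 1024))     # memory.limit_in_bytes="2G"
memsw_limit=$((3 * 1024 * 1024 * 1024))   # memory.memsw.limit_in_bytes="3G"
echo "finance swap allowance: $(( (memsw_limit - mem_limit) / 1024 / 1024 / 1024 )) GB"
# prints: finance swap allowance: 1 GB
```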
To test whether this setup works, execute a CPU or memory intensive process and observe the results, for example, using the top utility. To test the CPU resource management, execute the following dd command under each user:
~]$ dd if=/dev/zero of=/dev/null bs=1024k
The above command reads from /dev/zero and writes the output to /dev/null in blocks of 1024 KB. When the top utility is launched, you can see results similar to these:
8201 peter     20   0  103m 1676  556 R 24.9  0.2   0:04.18 dd
8202 mike      20   0  103m 1672  556 R 24.9  0.2   0:03.47 dd
8199 jenn      20   0  103m 1676  556 R 12.6  0.2   0:02.87 dd
8200 john      20   0  103m 1676  556 R 12.6  0.2   0:02.20 dd
8197 martin    20   0  103m 1672  556 R 12.6  0.2   0:05.56 dd
8198 mark      20   0  103m 1672  556 R 12.3  0.2   0:04.28 dd
All processes have been correctly assigned to their cgroups and are only allowed to consume the CPU resources made available to them. If all but two processes, one in the finance cgroup and one in the engineering cgroup, are stopped, the remaining CPU resources are split between the two processes according to the cpu.shares ratio of their cgroups — with shares of 250 (finance) and 500 (engineering), the engineering process receives twice the CPU time:
8202 mike      20   0  103m 1676  556 R 66.4  0.2   0:06.35 dd
8200 john      20   0  103m 1672  556 R 33.2  0.2   0:05.08 dd
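The percentages in both top snapshots follow directly from the cpu.shares values. A small shell sketch of the arithmetic, using the share values from the configuration above (integer division, so figures are approximate):

```shell
# cpu.shares values from the cgconfig.conf example
finance=250; sales=250; engineering=500
total=$((finance + sales + engineering))   # 1000

# All six dd processes running: each group's slice, split between its two users
echo "engineering user: $((100 * engineering / total / 2))%"   # 25% (top shows ~24.9)
echo "finance user: $((100 * finance / total / 2))%"           # 12% (top shows ~12.6)

# Only one finance and one engineering process left: 250:500 is a 1:2 split
pair=$((finance + engineering))
echo "engineering: $((100 * engineering / pair))%"             # 66% (top shows ~66.4)
echo "finance: $((100 * finance / pair))%"                     # 33% (top shows ~33.2)
```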

Alternative method

Because the cgrulesengd daemon moves a process to a cgroup only after the appropriate conditions set by the rules in /etc/cgrules.conf have been fulfilled, that process may be running for a few milliseconds in an incorrect cgroup. An alternative way to move processes to their specified cgroups is to use the pam_cgroup.so PAM module. This module moves processes to available cgroups according to rules defined in the /etc/cgrules.conf file. Follow the steps in Procedure 4.4, “Using a PAM module to move processes to cgroups” to configure the pam_cgroup.so PAM module. Note that the libcgroup-pam package that provides this module is available from the Optional subscription channel. Before subscribing to this channel, please see the Scope of Coverage Details. If you decide to install packages from the channel, follow the steps documented in the article called How to access Optional and Supplementary channels, and -devel packages using Red Hat Subscription Manager (RHSM)? on the Red Hat Customer Portal.

Procedure 4.4. Using a PAM module to move processes to cgroups

  1. Install the libcgroup-pam package from the optional Red Hat Enterprise Linux Yum repository:
    ~]# yum install libcgroup-pam --enablerepo=rhel-6-server-optional-rpms
  2. Ensure that the PAM module has been installed and exists:
    ~]# ls /lib64/security/pam_cgroup.so
    Note that on 32-bit systems, the module is placed in the /lib/security directory.
  3. Add the following line to the /etc/pam.d/su file to use the pam_cgroup.so module each time the su command is executed:
    session         optional        pam_cgroup.so
  4. Configure the /etc/cgconfig.conf and /etc/cgrules.conf files as in Procedure 4.3, “Per-group CPU and memory resource management”.
  5. Log out all users that are affected by the cgroup settings in the /etc/cgrules.conf file to apply the above configuration.
When using the pam_cgroup.so PAM module, you may disable the cgred service.
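As noted in step 2 of the procedure, the location of the module depends on the architecture. A minimal sketch for deriving the expected path (the /lib64 vs. /lib split is taken from the procedure above; other directory layouts are possible):

```shell
# Pick the expected pam_cgroup.so location: /lib64 on x86_64, /lib otherwise
case "$(uname -m)" in
    x86_64) libdir=/lib64/security ;;
    *)      libdir=/lib/security ;;
esac
echo "expected module: $libdir/pam_cgroup.so"
```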