cgroups - Limit, account for, and isolate resource usage


cgcreate

Usage

Create a new control group

Flags

Flag	Usage
-g controller:/path	defines the control group(s) to be added :
  • controller is a comma-separated list of controllers
  • /path is the relative path to the control group within the given controllers' hierarchies

Example

See complete example
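
A minimal sketch (the group name demoGroup is purely illustrative) :
cgcreate -g cpu,memory:/demoGroup
With cgroups v1 mounted the usual way, this creates the /sys/fs/cgroup/cpu/demoGroup and /sys/fs/cgroup/memory/demoGroup directories.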

cgdelete

Usage

Delete a control group

Example

See complete example
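
A minimal sketch (demoGroup being the illustrative group from the cgcreate section above) :
cgdelete -g cpu,memory:/demoGroup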

cgexec

Usage

Run a task in the given control group

Example

See complete example
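
A minimal sketch (reusing the illustrative demoGroup ; yes is just a CPU-hungry placeholder command) :
cgexec -g cpu:/demoGroup yes > /dev/null &
The redirection and the & are handled by the shell as usual : cgexec only receives the yes command.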

cgget

Usage

Print parameter(s) of given control group(s)
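
Example

A minimal sketch (cpu.shares and the illustrative demoGroup are just examples of a parameter and a group name) :
cgget -r cpu.shares demoGroup
cgget -g cpu demoGroup
The first form prints a single named parameter, the second one prints all the parameters of the given controller for that group.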

cgset

Usage

Set parameter(s) of given control group(s)

Example

See complete example
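
A minimal sketch (the value 512 and the illustrative demoGroup are arbitrary) :
cgset -r cpu.shares=512 demoGroup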

lscgroup

Usage

List all control groups

Example

Create some groups so that we have something to list :

cgcreate -g cpu:/cpuGroup_Parent/cpuGroup_childA
cgcreate -g cpu:/cpuGroup_Parent/cpuGroup_childB

List groups :

lscgroup
	list everything (pretty verbose)
lscgroup | grep cpu
	cpu,cpuacct:/
	cpu,cpuacct:/cpuGroup_Parent
	cpu,cpuacct:/cpuGroup_Parent/cpuGroup_childB
	cpu,cpuacct:/cpuGroup_Parent/cpuGroup_childA
	cpuset:/
Because of grep, this also lists groups using the cpuset controller.
lscgroup cpu:/
	list only the control groups using the cpu controller :
	cpu,cpuacct:/
	cpu,cpuacct:/cpuGroup_Parent
	cpu,cpuacct:/cpuGroup_Parent/cpuGroup_childB
	cpu,cpuacct:/cpuGroup_Parent/cpuGroup_childA
lscgroup cpu:groupName
  • lscgroup cpu:/cpuGroup_Parent
    cpu,cpuacct:/cpuGroup_Parent/
    	cpu,cpuacct:/cpuGroup_Parent/cpuGroup_childB
    	cpu,cpuacct:/cpuGroup_Parent/cpuGroup_childA
  • lscgroup cpu:/cpuGroup_Parent/cpuGroup_childA
    cpu,cpuacct:/cpuGroup_Parent/cpuGroup_childA/
The leading / in the path appears to be optional.

Delete groups made just for the example :

cgdelete cpu:/cpuGroup_Parent/cpuGroup_childA
cgdelete cpu:/cpuGroup_Parent/cpuGroup_childB
cgdelete cpu:/cpuGroup_Parent
lscgroup cpu:/
cpu,cpuacct:/			i.e. "empty result set"

cgroups

Description, definitions and the likes

  • cgroups (abbreviated from control groups) is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes.
  • There are two versions of cgroups :
    • v1 appeared in 2007.
    • v2 is a redesign / rewrite, and appeared in Linux kernel 4.5 (2016).
    Unlike v1, cgroups v2 has only a single process hierarchy and discriminates between processes, not threads. (source)
    Despite cgroups v2 being "not-that-new", most articles / demos / ... (and the examples below) are about cgroups v1. To find out which version is currently in use :
    mount | grep 'type cgroup'
    • v1 has cgroup type
    • v2 has cgroup2 type
  • cgroups features
  • a control group (aka cgroup) is a collection of processes that are bound by the same criteria and associated with a set of parameters or limits. These groups can be hierarchical, meaning that each group inherits limits from its parent group. The kernel provides access to multiple controllers (aka subsystems) through the cgroup interface ; the available controllers are listed in /proc/cgroups (see below).

/proc files :

/proc/cgroups
This file contains information about the controllers that are compiled into the kernel.
cat /proc/cgroups | tr '\t' '|' | column -s '|' -t
/proc/pid/cgroup
Describes the control groups to which the process with the corresponding PID belongs.
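For instance, for the current shell :
cat /proc/self/cgroup
On cgroups v1, each output line has the form hierarchy-ID:controller-list:path, e.g. 4:cpu,cpuacct:/ (the hierarchy ID shown here is just an illustration).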

Setup :

  1. This will also install cgroup-tools as a dependency.
  2. ... and that's it !

Usage

cgroups can be configured in 3 main ways (source) :
  1. Manually via the /sys/fs/cgroup pseudo filesystem :
    • requires lots of mount, mkdir and echo something >> some/file
    • complex and pretty error-prone
    • not reboot-proof
    • doesn't seem worth the headache considering other solutions below
  2. Via commands of the libcg library (see examples) :
    • easier than the manual approach
    • allows managing things on-the-fly with its wrapper utilities such as cgcreate, cgclassify, cgexec, ...
    • not reboot-proof
  3. Via systemd commands and config files (see examples, and the quick sketch below)
Remember that you generally shouldn't mix the systemd and libcg approaches.
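
For the systemd approach, a quick one-shot sketch (systemd-run and its CPUQuota property are standard systemd features ; yes is just a CPU-hungry placeholder) :
systemd-run --scope -p CPUQuota=10% yes > /dev/null
This starts yes in a transient scope unit whose CPU usage is capped at 10%.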

Example

With the libcg commands :

As a first basic test, we'll try to limit CPU usage. To do so, we need :
  • CPU-hungry commands. Those will do the job :
    • yes > /dev/null
    • cat /dev/urandom > /dev/null
    They can use up to 100% of a CPU, but it's still safe to run them on your workstation as you can stop them with a simple Ctrl-c.
  • a CPU monitoring utility. top or htop can do the job.
There is a "detail" that is not (well|at all)? explained in articles introducing cgroups when it comes to limiting CPU usage. Articles show examples, but nothing works as described.
Most of these examples exhibit the cpu.shares setting without referring to its definition, then omit a VERY important piece of information : the "CPU share" amount given to any "controlled group" is relative to the total shares given to all groups. In other words : if cpu.shares is set on a single group only, it has no effect. Let me explain :
Scenario | Shares given to the cgroup we're trying to limit | Shares given to any other cgroup   | Total shares defined | Relative weight of the limited cgroup's shares
1        | x (x > 0)                                        | 0 (i.e. cpu.shares used only once) | x                    | 100%
2        | x                                                | x                                  | 2x                   | 50%
3        | x                                                | 9x                                 | 10x                  | 10%
Once you get this, it is pretty straightforward.

With cpu.shares :

  1. create 2 groups :
    cgcreate -g cpu:/cpuLimited
    cgcreate -g cpu:/cpuNotLimited
  2. define limits :
    cgset -r cpu.shares=10 cpuLimited
    cgset -r cpu.shares=10 cpuNotLimited
  3. execute commands within each group :
    cgexec -g cpu:/cpuLimited yes >/dev/null &
    cgexec -g cpu:/cpuNotLimited cat /dev/urandom > /dev/null &
  4. observe the result with htop : you'll see both processes using ~50% of the CPU each (you can kill them now)
    The only way the chosen numbers (here 10 / 10) matter is how they relate to each other. To check this, try with :
    • 100 / 900
    • 1000 / 9000
    In both cases, you should see the yes process limited to 10% of the CPU. Setting lower values like 1 / 9 should lead to the same behavior. However, such low values interfere with the values of already-existing groups, which ends up behaving slightly differently (but the process is still limited anyway).
    Most articles repeat things like "By default, all groups inherit 1,024 shares or 100% of CPU time", then show examples where 1024 stands for 100% of CPU, 512 for 50%, 338 for 33% and so on... Either I've not understood this correctly, or this explanation is pretty misleading as well as partially wrong.
  5. clean before leaving :
    cgdelete cpu:/cpuLimited
    cgdelete cpu:/cpuNotLimited

With cpu.cfs_period_us and cpu.cfs_quota_us (see also) :

These directives can be used to set a hard limit on CPU usage, even when no other process is running.
us stands for "µs" (microseconds).
cpu.cfs_period_us
this defines a period of time, in microseconds. Let's choose 1s = 1000000µs
cpu.cfs_quota_us
this defines the amount of time (still in microseconds) during each period (aka cpu.cfs_period_us), when the corresponding group can use the CPU. So if you target 10% of CPU usage, this value has to be 10% of cpu.cfs_period_us = 0.1s = 100000µs.
  1. create a group :
    cgcreate -g cpu:/myGroup
  2. set values :
    cgset -r cpu.cfs_period_us=1000000 myGroup
    cgset -r cpu.cfs_quota_us=100000 myGroup
  3. run a CPU-hungry command :
    cgexec -g cpu:/myGroup cat /dev/urandom > /dev/null &
    and see what happens with htop
  4. clean before leaving :
    cgdelete cpu:/myGroup

Do the same with already running processes :

So far, we defined rules and ran processes by explicitly launching them within a control group limiting their CPU usage. Now, we're going to do it the opposite way, i.e. we'll apply rules to already running processes.
For both tests, you should run htop in a dedicated terminal.
  • With cpu.shares :
    1. simulate some already running processes :
      yes >/dev/null &
      cat /dev/urandom > /dev/null &
      [1] 1030
      [2] 1031
      You should already observe both processes eating 100% of the CPU (50% each)
    2. create 2 groups :
      cgcreate -g cpu:/cpuLimited
      cgcreate -g cpu:/cpuNotLimited
    3. attach processes to groups :
      cgclassify -g cpu:/cpuLimited $(pidof yes)
      cgclassify -g cpu:/cpuNotLimited $(pidof cat)
    4. observe /sys/fs/cgroup files :
      cat /sys/fs/cgroup/cpu/cpu{,Not}Limited/tasks
      1030
      1031
      • Each control group has a tasks file containing the PIDs of the processes it controls. cgclassify saves us from having to manually run : echo $(pidof command) >> /sys/fs/cgroup/cpu/groupName/tasks (a sketch of this manual approach is shown after this list).
      • Since /sys/fs/cgroup is a pseudo filesystem, the tasks file will always appear 0-sized despite having data.
    5. Now processes are under control, let's set some limits :
      cgset -r cpu.shares=100 cpuLimited
      cgset -r cpu.shares=900 cpuNotLimited
      observe the result in htop. Try with different values :
      cgset -r cpu.shares=600 cpuLimited
      cgset -r cpu.shares=300 cpuNotLimited
    6. cgdelete -g cpu:/cpuLimited
      cgdelete -g cpu:/cpuNotLimited
      When deleting the control groups, we "release" the limiting conditions, and both processes go back to using 50% of the CPU each.
  • With cpu.cfs_period_us and cpu.cfs_quota_us :
    1. Let's again simulate 1 already-running process (while still checking CPU usage in a dedicated terminal with htop) :
      cat /dev/urandom > /dev/null &
    2. create a control group :
      cgcreate -g cpu:/myGroup
    3. attach process to group :
      cgclassify -g cpu:/myGroup $(pidof cat)
    4. set limits :
      cgset -r cpu.cfs_period_us=1000000 myGroup
      cgset -r cpu.cfs_quota_us=100000 myGroup
      The CPU usage should be down to 10%.
    5. cgdelete -g cpu:/myGroup
  • Deleting a control group releases the constraints this group enforces.
  • Looks like it doesn't matter whether you cgclassify then cgset (i.e. "invite", then "restrict") or the other way around.
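
For reference, here is a minimal sketch of the manual /sys/fs/cgroup approach mentioned in the Usage section, equivalent to the cgcreate / cgset / cgclassify / cgdelete steps above (cgroups v1, assuming the cpu controller is mounted at /sys/fs/cgroup/cpu ; PID 1030 reuses the example value from above) :
mkdir /sys/fs/cgroup/cpu/myGroup                  # ~ cgcreate
echo 100 > /sys/fs/cgroup/cpu/myGroup/cpu.shares  # ~ cgset
echo 1030 > /sys/fs/cgroup/cpu/myGroup/tasks      # ~ cgclassify (one PID per write)
echo 1030 > /sys/fs/cgroup/cpu/tasks              # move the task back to the root group...
rmdir /sys/fs/cgroup/cpu/myGroup                  # ...so that the now-empty group can be removed, ~ cgdelete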

Persistent Groups :

So far, everything we did was configured on-the-fly and is not reboot-proof. We'll now see how to make persistent groups.

  1. It starts by editing /etc/cgconfig.conf
    Debian comes with no default /etc/cgconfig.conf, but a template is available as /usr/share/doc/cgroup-tools/examples/cgconfig.conf (source) :
    cp /usr/share/doc/cgroup-tools/examples/cgconfig.conf /etc/cgconfig.conf
  2. Add :
    group cpuLimited {
    	cpu {
    		cpu.shares = 100;
    		}
    	}
    
    group cpuNotLimited {
    	cpu {
    		cpu.shares = 900;
    		}
    	}
  3. Enable + start the service :
    systemctl enable cgconfig && systemctl start cgconfig
    If this fails saying :
    Failed to enable unit: File cgconfig.service: No such file or directory
    Use this command instead to load the configuration (source) :
    cgconfigparser -l /etc/cgconfig.conf
  4. start htop in a dedicated terminal
  5. then start processes :
    cgexec -g cpu:cpuLimited yes >/dev/null &
    cgexec -g cpu:cpuNotLimited cat /dev/urandom > /dev/null &
    You should observe the yes command limited to 10% of CPU usage, and cat using the remaining 90%.
You can do the same with the cpu.cfs_period_us and cpu.cfs_quota_us directives :
  1. group myGroup {
    	cpu {
    		cpu.cfs_period_us = 1000000;
    		cpu.cfs_quota_us  =  100000;
    		}
    	}
  2. load the configuration (this actually creates the /sys/fs/cgroup/cpu/myGroup directory and files)
  3. cgexec -g cpu:/myGroup cat /dev/urandom > /dev/null &
Clean before leaving :
cgdelete cpu:/{cpuNotLimited,cpuLimited,myGroup}
This {,,} construct is a shell trick (brace expansion) that has nothing to do with cgroups.
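For instance, the line above is expanded by the shell into :
cgdelete cpu:/cpuNotLimited cpu:/cpuLimited cpu:/myGroup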


http://www.linuxembedded.fr/2011/03/bien-utiliser-les-cgroups/

https://www.cloudsigma.com/howto-cgroups/

https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-i-linux-control-groups-and-process

https://oakbytes.wordpress.com/2012/09/02/cgroup-cpu-allocation-cpu-shares-examples/



cgclear cpu (looks like this unmounts A LOT (?))
systemctl restart cgroupfs-mount.service


grep cgroup /proc/filesystems
	==> lists whether cgroup is supported