================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2.  It describes all userland-visible aspects
of cgroup including core and specific controller behaviors.  All
future changes must be reflected in this document.  Documentation for
v1 is available under Documentation/cgroup-v1/.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
     5-4. PID
       5-4-1. PID Interface Files
     5-5. RDMA
       5-5-1. RDMA Interface Files
     5-6. Misc
       5-6-1. perf_event
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized.  The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers".  When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes.  A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup.  All threads of a process belong to the
same cgroup.  On creation, all processes are put in the cgroup that
the parent process belongs to at the time.  A process can be migrated
to another cgroup.  Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup.  All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
sub-hierarchy of the cgroup.  When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further.  The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy.  The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

cgroup2 filesystem has the magic number 0x63677270 ("cgrp").  All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies.  This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy.  Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use.  It is recommended to decide
the hierarchies and controller associations before starting to use the
controllers after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible.  To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.
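
For example, booting with the following kernel command line parameter
makes all controllers unavailable in v1 and thus available in v2 (an
illustrative setting; individual controller names may also be listed
instead of "all")::

  cgroup_no_v1=all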

cgroup v2 currently supports the following mount options.

  nsdelegate

	Consider cgroup namespaces as delegation boundaries.  This
	option is system wide and can only be set on mount or modified
	through remount from the init namespace.  The mount option is
	ignored on non-init namespace mounts.  Please refer to the
	Delegation section for details.


Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure.  Each cgroup has a read-writable interface file
"cgroup.procs".  When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line.  The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file.  Only one process can be migrated
on a single write(2) call.  If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.
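
For example, assuming the hierarchy is mounted at /sys/fs/cgroup and a
process with PID 1234 exists (both illustrative), the process can be
migrated into a child cgroup with::

  # mkdir /sys/fs/cgroup/test
  # echo 1234 > /sys/fs/cgroup/test/cgroup.procs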

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation.  After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory.  Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership.  If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy.  The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes.  By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread.  The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup.  The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy.  The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded.  Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file.  The
operation is single direction::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again.  To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children.  The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state.  Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains.  C can't be used until it is turned into a
threaded cgroup.  The "cgroup.type" file will report "domain invalid"
in these cases.  Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.
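
As an illustrative sketch, the following sequence turns a freshly
created cgroup A into a threaded domain by making its child threaded
(names are hypothetical)::

  # mkdir A A/B
  # echo threaded > A/B/cgroup.type
  # cat A/B/cgroup.type
  threaded
  # cat A/cgroup.type
  domain threaded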

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup.  Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs".  While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree.  When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants.  All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups.  Each
threaded controller defines how such competitions are handled.


[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it.  Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1.  poll and [id]notify
events are triggered when the value changes.  This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited.  The populated state updates and
notifications are recursive.  Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's 0.  After the one
process in C exits, B and C's "populated" fields would flip to "0" and
file modified events will be generated on the "cgroup.events" files of
both cgroups.
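
For example, a clean-up agent could wait for a sub-hierarchy to empty
by watching for modification events on the file, here sketched with
the inotifywait utility from inotify-tools (assuming the tool is
installed; paths illustrative)::

  # inotifywait -e modify /sys/fs/cgroup/A/B/cgroup.events
  # grep populated /sys/fs/cgroup/A/B/cgroup.events
  populated 0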


Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default.  Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled.  When multiple operations are specified as above, either they
all succeed or all fail.  If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy.  The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B.  As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.
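
As a sketch, the above configuration could be built as follows from
the hierarchy root, assuming both controllers appear in the root's
"cgroup.controllers" (paths illustrative)::

  # mkdir A A/B A/B/C A/B/D
  # echo "+cpu +memory" > cgroup.subtree_control
  # echo "+cpu +memory" > A/cgroup.subtree_control
  # echo "+memory" > A/B/cgroup.subtree_control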

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups.  In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D.  Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D.  This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent.  This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file.  A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own.  In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves.  This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction.  Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers.  How resource consumption in the root cgroup is governed
is up to each controller.

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control".  This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup.  To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
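
As an illustrative sketch, a populated cgroup can be converted by
pushing its processes into a leaf child before enabling a controller
(paths hypothetical; the loop ignores races with processes forked
while it runs, and the controller is assumed to be available in
"cgroup.controllers")::

  # cd /sys/fs/cgroup/workload
  # mkdir leaf
  # for pid in $(cat cgroup.procs); do echo $pid > leaf/cgroup.procs; done
  # echo "+memory" > cgroup.subtree_control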


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways.  First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them.  For the first method, this is
achieved by not granting access to these files.  For the second, the
kernel rejects writes to all files other than "cgroup.procs" and
"cgroup.subtree_control" on a namespace root from inside the
namespace.

The end results are equivalent for both delegation types.  Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent.  The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.
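
For the first method, the delegation could be set up as in the
following sketch, which hands the cgroup "delegated" to the
hypothetical user U0 (both names illustrative)::

  # mkdir /sys/fs/cgroup/delegated
  # chown U0 /sys/fs/cgroup/delegated
  # chown U0 /sys/fs/cgroup/delegated/cgroup.procs
  # chown U0 /sys/fs/cgroup/delegated/cgroup.threads
  # chown U0 /sys/fs/cgroup/delegated/cgroup.subtree_control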

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy it can't pull
in from or push out to outside the sub-hierarchy.

As an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs".  U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" files and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration.  If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process.  This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged.  A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up.  Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its child cgroups occupy the same
directory and it is possible to create child cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot.  A controller's name is composed of lowercase letters and
'_'s but never begins with an '_' so it can be used as the prefix
character for collision avoidance.  Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases.  This section
describes major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum.  As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving.  Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100.  This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.
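
For example, if two children are active with "cpu.weight" values of
100 and 200 (illustrative numbers and paths), they receive 1/3 and 2/3
of the parent's CPU cycles respectively; if the second child goes
idle, the first receives everything::

  # echo 100 > A/cpu.weight
  # echo 200 > B/cpu.weight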


Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
no-op.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.


Protections
-----------

A cgroup is protected to be allocated up to the configured amount of
the resource if the usages of all its ancestors are under their
protected levels.  Protections can be hard guarantees or best effort
soft boundaries.  Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
no-op.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource.  Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected.  Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

	VAL0\n
	VAL1\n
	...

  Space separated values
  (when read-only or multiple values can be written at once)

	VAL0 VAL1 ...\n

  Flat keyed

	KEY0 VAL0\n
	KEY1 VAL1\n
	...

  Nested keyed

	KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
	KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
	...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time.  For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.  Also,
  informational files on the root cgroup which end up showing global
  information available elsewhere shouldn't exist.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default.  The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively.  If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override.  Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should be
  generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
	A read-write single value file which exists on non-root
	cgroups.

	When read, it indicates the current type of the cgroup, which
	can be one of the following values.

	- "domain" : A normal valid domain cgroup.

	- "domain threaded" : A threaded domain cgroup which is
	  serving as the root of a threaded subtree.

	- "domain invalid" : A cgroup which is in an invalid state.
	  It can't be populated or have controllers enabled.  It may
	  be allowed to become a threaded cgroup.

	- "threaded" : A threaded cgroup which is a member of a
	  threaded subtree.

	A cgroup can be turned into a threaded cgroup by writing
	"threaded" to this file.

  cgroup.procs
	A read-write new-line separated values file which exists on
	all cgroups.

	When read, it lists the PIDs of all processes which belong to
	the cgroup one-per-line.  The PIDs are not ordered and the
	same PID may show up more than once if the process got moved
	to another cgroup and then back or the PID got recycled while
	reading.

	A PID can be written to migrate the process associated with
	the PID to the cgroup.  The writer should match all of the
	following conditions.

	- It must have write access to the "cgroup.procs" file.

	- It must have write access to the "cgroup.procs" file of the
	  common ancestor of the source and destination cgroups.

	When delegating a sub-hierarchy, write access to this file
	should be granted along with the containing directory.

	In a threaded cgroup, reading this file fails with EOPNOTSUPP
	as all the processes belong to the thread root.  Writing is
	supported and moves every thread of the process to the cgroup.

  cgroup.threads
	A read-write new-line separated values file which exists on
	all cgroups.

	When read, it lists the TIDs of all threads which belong to
	the cgroup one-per-line.  The TIDs are not ordered and the
	same TID may show up more than once if the thread got moved to
	another cgroup and then back or the TID got recycled while
	reading.

	A TID can be written to migrate the thread associated with the
	TID to the cgroup.  The writer should match all of the
	following conditions.

	- It must have write access to the "cgroup.threads" file.

	- The cgroup that the thread is currently in must be in the
	  same resource domain as the destination cgroup.

	- It must have write access to the "cgroup.procs" file of the
	  common ancestor of the source and destination cgroups.

	When delegating a sub-hierarchy, write access to this file
	should be granted along with the containing directory.

  cgroup.controllers
	A read-only space separated values file which exists on all
	cgroups.

	It shows a space separated list of all controllers available
	to the cgroup.  The controllers are not ordered.

  cgroup.subtree_control
	A read-write space separated values file which exists on all
	cgroups.  Starts out empty.

	When read, it shows a space separated list of the controllers
	which are enabled to control resource distribution from the
	cgroup to its children.

	A space separated list of controllers prefixed with '+' or '-'
	can be written to enable or disable controllers.  A controller
	name prefixed with '+' enables the controller and '-'
	disables.  If a controller appears more than once on the list,
	the last one is effective.  When multiple enable and disable
	operations are specified, either all succeed or all fail.

  cgroup.events
	A read-only flat-keyed file which exists on non-root cgroups.
	The following entries are defined.  Unless specified
	otherwise, a value change in this file generates a file
	modified event.

	  populated
		1 if the cgroup or its descendants contains any live
		processes; otherwise, 0.

  cgroup.max.descendants
	A read-write single value file.  The default is "max".

	Maximum allowed number of descendant cgroups.  If the actual
	number of descendants is equal to or larger, an attempt to
	create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
	A read-write single value file.  The default is "max".

	Maximum allowed descent depth below the current cgroup.  If
	the actual descent depth is equal to or larger, an attempt to
	create a new child cgroup will fail.

  cgroup.stat
	A read-only flat-keyed file with the following entries:

	  nr_descendants
		Total number of visible descendant cgroups.

	  nr_dying_descendants
		Total number of dying descendant cgroups.  A cgroup
		becomes dying after being deleted by a user.  The
		cgroup will remain in the dying state for some
		undefined time (which can depend on system load)
		before being completely destroyed.

		A process can't enter a dying cgroup under any
		circumstances and a dying cgroup can't revive.

		A dying cgroup can consume system resources not
		exceeding limits, which were active at the moment of
		cgroup deletion.


Controllers
===========

CPU
---

.. note::

	The interface for the cpu controller hasn't been merged yet

The "cpu" controller regulates distribution of CPU cycles.  This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and an absolute bandwidth allocation model
for realtime scheduling policy.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
	A read-only flat-keyed file which exists on non-root cgroups.

	It reports the following six stats:

	- usage_usec
	- user_usec
	- system_usec
	- nr_periods
	- nr_throttled
	- throttled_usec

  cpu.weight
	A read-write single value file which exists on non-root
	cgroups.  The default is "100".

	The weight in the range [1, 10000].

  cpu.max
	A read-write two value file which exists on non-root cgroups.
	The default is "max 100000".

	The maximum bandwidth limit.  It's in the following format::

	  $MAX $PERIOD

	which indicates that the group may consume up to $MAX in each
	$PERIOD duration.  "max" for $MAX indicates no limit.  If only
	one number is written, $MAX is updated.
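
	For example, the following caps the group at half a CPU's
	worth of bandwidth, 50ms out of every 100ms period (values
	illustrative)::

	  # echo "50000 100000" > cpu.max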

  cpu.rt.max
	.. note::

	   The semantics of this file is still under discussion and the
	   interface hasn't been merged yet

	A read-write two value file which exists on all cgroups.
	The default is "0 100000".

	The maximum realtime runtime allocation.  Over-committing
	configurations are disallowed and process migrations are
	rejected if not enough bandwidth is available.  It's in the
	following format::

	  $MAX $PERIOD

	which indicates that the group may consume up to $MAX in each
	$PERIOD duration.  If only one number is written, $MAX is
	updated.

  cpu.pressure
	A read-only nested-key file which exists on non-root cgroups.

	Shows pressure stall information for CPU.  See
	Documentation/accounting/psi.txt for details.


Memory
------

The "memory" controller regulates distribution of memory.  Memory is
stateful and implements both limit and protection models.  Due to the
intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent.  Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes.  If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.
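
For example, on a system with 4096-byte pages (an assumption), a
non-aligned value may read back rounded up to the next multiple of the
page size::

  # echo 100000 > memory.high
  # cat memory.high
  102400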

  memory.current
	A read-only single value file which exists on non-root
	cgroups.

	The total amount of memory currently being used by the cgroup
	and its descendants.

  memory.low
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	Best-effort memory protection.  If the memory usages of a
	cgroup and all its ancestors are below their low boundaries,
	the cgroup's memory won't be reclaimed unless memory can be
	reclaimed from unprotected cgroups.

	Putting more memory than generally available under this
	protection is discouraged.

  memory.high
	A read-write single value file which exists on non-root
	cgroups.  The default is "max".

	Memory usage throttle limit.  This is the main mechanism to
	control memory usage of a cgroup.  If a cgroup's usage goes
	over the high boundary, the processes of the cgroup are
	throttled and put under heavy reclaim pressure.

	Going over the high limit never invokes the OOM killer and
	under extreme conditions the limit may be breached.

  memory.max
	A read-write single value file which exists on non-root
	cgroups.  The default is "max".

	Memory usage hard limit.  This is the final protection
	mechanism.  If a cgroup's memory usage reaches this limit and
	can't be reduced, the OOM killer is invoked in the cgroup.
	Under certain circumstances, the usage may go over the limit
	temporarily.

	This is the ultimate protection mechanism.  As long as the
	high limit is used and monitored properly, this limit's
	utility is limited to providing the final safety net.

  memory.events
	A read-only flat-keyed file which exists on non-root cgroups.
	The following entries are defined.  Unless specified
	otherwise, a value change in this file generates a file
	modified event.

	  low
		The number of times the cgroup is reclaimed due to
		high memory pressure even though its usage is under
		the low boundary.  This usually indicates that the low
		boundary is over-committed.

	  high
		The number of times processes of the cgroup are
		throttled and routed to perform direct memory reclaim
		because the high memory boundary was exceeded.  For a
		cgroup whose memory usage is capped by the high limit
		rather than global memory pressure, this event's
		occurrences are expected.

	  max
		The number of times the cgroup's memory usage was
		about to go over the max boundary.  If direct reclaim
		fails to bring it down, the cgroup goes to OOM state.

	  oom
		The number of times the cgroup's memory usage reached
		the limit and allocation was about to fail.

		Depending on context, the result could be an
		invocation of the OOM killer and a retried allocation,
		or a failed allocation.

		A failed allocation, in its turn, could be returned to
		userspace as -ENOMEM or silently ignored in cases like
		disk readahead.  For now, OOM in a memory cgroup kills
		tasks iff the shortage has happened inside a page
		fault.

	  oom_kill
		The number of processes belonging to this cgroup
		killed by any kind of OOM killer.

  memory.stat
	A read-only flat-keyed file which exists on non-root cgroups.

	This breaks down the cgroup's memory footprint into different
	types of memory, type-specific details, and other information
	on the state and past events of the memory management system.

	All memory amounts are in bytes.

	The entries are ordered to be human readable, and new entries
	can show up in the middle.  Don't rely on items remaining in a
	fixed position; use the keys to look up specific values!

	  anon
		Amount of memory used in anonymous mappings such as
		brk(), sbrk(), and mmap(MAP_ANONYMOUS)

	  file
		Amount of memory used to cache filesystem data,
		including tmpfs and shared memory.

	  kernel_stack
		Amount of memory allocated to kernel stacks.

	  slab
		Amount of memory used for storing in-kernel data
		structures.

	  sock
		Amount of memory used in network transmission buffers

	  shmem
		Amount of cached filesystem data that is swap-backed,
		such as tmpfs, shm segments, shared anonymous mmap()s

	  file_mapped
		Amount of cached filesystem data mapped with mmap()

	  file_dirty
		Amount of cached filesystem data that was modified but
		not yet written back to disk

	  file_writeback
		Amount of cached filesystem data that was modified and
		is currently being written back to disk

	  inactive_anon, active_anon, inactive_file, active_file, unevictable
		Amount of memory, swap-backed and filesystem-backed,
		on the internal memory management lists used by the
		page reclaim algorithm

	  slab_reclaimable
		Part of "slab" that might be reclaimed, such as
		dentries and inodes.

	  slab_unreclaimable
		Part of "slab" that cannot be reclaimed on memory
		pressure.

	  pgfault
		Total number of page faults incurred

	  pgmajfault
		Number of major page faults incurred

	  workingset_refault
		Number of refaults of previously evicted pages

	  workingset_activate
		Number of refaulted pages that were immediately activated

	  workingset_nodereclaim
		Number of times a shadow node has been reclaimed

	  pgrefill
		Amount of scanned pages (in an active LRU list)

	  pgscan
		Amount of scanned pages (in an inactive LRU list)

	  pgsteal
		Amount of reclaimed pages

	  pgactivate
		Amount of pages moved to the active LRU list

	  pgdeactivate
		Amount of pages moved to the inactive LRU list

	  pglazyfree
		Amount of pages postponed to be freed under memory pressure

	  pglazyfreed
		Amount of reclaimed lazyfree pages

  memory.swap.current
	A read-only single value file which exists on non-root
	cgroups.

	The total amount of swap currently being used by the cgroup
	and its descendants.

  memory.swap.max
	A read-write single value file which exists on non-root
	cgroups.  The default is "max".

	Swap usage hard limit.  If a cgroup's swap usage reaches this
	limit, anonymous memory of the cgroup will not be swapped out.

  memory.pressure
	A read-only nested-key file which exists on non-root cgroups.

	Shows pressure stall information for memory.  See
	Documentation/accounting/psi.txt for details.


Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
usage is a viable strategy.

Because breach of the high limit doesn't trigger the OOM killer but
throttles the offending cgroup, a management agent has ample
opportunities to monitor and take appropriate actions such as granting
more memory or terminating the workload.
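
As a sketch, a management agent might cap a workload at roughly one
gigabyte and watch the "high" counter in "memory.events" to decide
when to intervene (value and paths illustrative)::

  # echo 1073741824 > /sys/fs/cgroup/workload/memory.high
  # grep high /sys/fs/cgroup/workload/memory.events
  high 0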

Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory.  For example, a workload which writes data received from
network to a file can use all available memory but can also operate
just as performantly with a small amount of memory.  A measure of
memory pressure - how much the workload is being impacted due to lack
of memory - is necessary to determine whether a workload needs more
memory; unfortunately, a memory pressure monitoring mechanism isn't
implemented yet.


Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and stays
charged to the cgroup until the area is released.  Migrating a process
to a different cgroup doesn't move the memory usages that it
instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different cgroups.
To which cgroup the area will be charged is indeterminate; however,
over time, the memory area is likely to end up in a cgroup which has
enough memory allowance to avoid high reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is expected
to be accessed repeatedly by other cgroups, it may make sense to use
POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
belonging to the affected files to ensure correct memory ownership.


IO
--

The "io" controller regulates the distribution of IO resources.  This
controller implements both weight based and absolute bandwidth or IOPS
limit distribution; however, weight based distribution is available
only if cfq-iosched is in use and neither scheme is available for
blk-mq devices.


IO Interface Files
~~~~~~~~~~~~~~~~~~

  io.stat
	A read-only nested-keyed file which exists on non-root
	cgroups.

	Lines are keyed by $MAJ:$MIN device numbers and not ordered.
	The following nested keys are defined.

	  ======	===================
	  rbytes	Bytes read
	  wbytes	Bytes written
	  rios		Number of read IOs
	  wios		Number of write IOs
	  ======	===================

	An example read output follows::

	  8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353
	  8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252

  io.weight
	A read-write flat-keyed file which exists on non-root cgroups.
	The default is "default 100".

	The first line is the default weight applied to devices
	without specific override.  The rest are overrides keyed by
	$MAJ:$MIN device numbers and not ordered.  The weights are in
	the range [1, 10000] and specify the relative amount of IO
	time the cgroup can use in relation to its siblings.

	The default weight can be updated by writing either "default
	$WEIGHT" or simply "$WEIGHT".  Overrides can be set by writing
	"$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".

	An example read output follows::

	  default 100
	  8:16 200
	  8:0 50

  io.max
	A read-write nested-keyed file which exists on non-root
	cgroups.

	BPS and IOPS based IO limit.  Lines are keyed by $MAJ:$MIN
	device numbers and not ordered.  The following nested keys are
	defined.

	  =====		==================================
	  rbps		Max read bytes per second
	  wbps		Max write bytes per second
	  riops		Max read IO operations per second
	  wiops		Max write IO operations per second
	  =====		==================================

	When writing, any number of nested key-value pairs can be
	specified in any order.  "max" can be specified as the value
	to remove a specific limit.  If the same key is specified
	multiple times, the outcome is undefined.

	BPS and IOPS are measured in each IO direction and IOs are
	delayed if the limit is reached.  Temporary bursts are allowed.

	Setting read limit at 2M BPS and write at 120 IOPS for 8:16::

	  echo "8:16 rbps=2097152 wiops=120" > io.max

	Reading returns the following::

	  8:16 rbps=2097152 wbps=max riops=max wiops=120

	Write IOPS limit can be removed by writing the following::

	  echo "8:16 wiops=max" > io.max

	Reading now returns the following::

	  8:16 rbps=2097152 wbps=max riops=max wiops=max

  io.pressure
	A read-only nested-key file which exists on non-root cgroups.

	Shows pressure stall information for IO.  See
	Documentation/accounting/psi.txt for details.


Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism.  Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.

The io controller, in conjunction with the memory controller,
implements control of page cache writeback IOs.  The memory controller
defines the memory domain that dirty memory ratio is calculated and
maintained for and the io controller defines the io domain which
writes out dirty pages for the memory domain.  Both system-wide and
per-cgroup dirty memory states are examined and the more restrictive
of the two is enforced.

cgroup writeback requires explicit support from the underlying
filesystem.  Currently, cgroup writeback is implemented on ext2, ext4
and btrfs.  On other filesystems, all writeback IOs are attributed to
the root cgroup.

There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked.  Memory is tracked per
page while writeback is tracked per inode.  For the purpose of
writeback, an inode is assigned to a cgroup and all IO requests to
write dirty pages from the inode are attributed to that cgroup.

As cgroup ownership for memory is tracked per page, there can be pages
which are associated with different cgroups than the one the inode is
associated with.  These are called foreign pages.  The writeback
constantly keeps track of foreign pages and, if a particular foreign
cgroup becomes the majority over a certain period of time, switches
the ownership of the inode to that cgroup.

While this model is enough for most use cases where a given inode is
mostly dirtied by a single cgroup even when the main writing cgroup
changes over time, use cases where multiple cgroups write to a single
inode simultaneously are not supported well.  In such circumstances, a
significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
doesn't update it until the page is released, even if writeback
strictly follows page ownership, multiple cgroups dirtying overlapping
areas wouldn't work as expected.  It's recommended to avoid such usage
patterns.

The sysctl knobs which affect writeback behavior are applied to cgroup
writeback as follows.

  vm.dirty_background_ratio, vm.dirty_ratio
	These ratios apply the same to cgroup writeback with the
	amount of available memory capped by limits imposed by the
	memory controller and system-wide clean memory.

  vm.dirty_background_bytes, vm.dirty_bytes
	For cgroup writeback, this is calculated into a ratio against
	total available memory and applied the same way as
	vm.dirty[_background]_ratio.


PID
---

The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.

The number of tasks in a cgroup can be exhausted in ways which other
controllers cannot prevent, thus warranting its own controller.  For
example, a fork bomb is likely to exhaust the number of tasks before
hitting memory restrictions.

Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.


PID Interface Files
~~~~~~~~~~~~~~~~~~~

  pids.max
	A read-write single value file which exists on non-root
	cgroups.  The default is "max".

	Hard limit of number of processes.

  pids.current
	A read-only single value file which exists on all cgroups.

	The number of processes currently in the cgroup and its
	descendants.

Organisational operations are not blocked by cgroup policies, so it is
possible to have pids.current > pids.max.  This can be done by either
setting the limit to be smaller than pids.current, or attaching enough
processes to the cgroup such that pids.current is larger than
pids.max.  However, it is not possible to violate a cgroup PID policy
through fork() or clone().  These will return -EAGAIN if the creation
of a new process would cause a cgroup policy to be violated.
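
For example, a cgroup already containing ten processes (an
illustrative count) can have its limit lowered below its current
usage; only further fork()s and clone()s then fail::

  # cat pids.current
  10
  # echo 5 > pids.max
  # cat pids.current
  10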


RDMA
----

The "rdma" controller regulates the distribution and accounting of
RDMA resources.

RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~

  rdma.max
	A read-write nested-keyed file that exists for all the cgroups
	except root that describes the current configured resource
	limit for an RDMA/IB device.

	Lines are keyed by device name and are not ordered.  Each line
	contains a space separated resource name and its configured
	limit that can be distributed.

	The following nested keys are defined.

	  ==========	=============================
	  hca_handle	Maximum number of HCA Handles
	  hca_object	Maximum number of HCA Objects
	  ==========	=============================

	An example for mlx4 and ocrdma device follows::

	  mlx4_0 hca_handle=2 hca_object=2000
	  ocrdma1 hca_handle=3 hca_object=max
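
	A limit can be configured by writing the device name followed
	by space separated nested key-value pairs, e.g. (device name
	and values illustrative)::

	  # echo "mlx4_0 hca_handle=2 hca_object=2000" > rdma.max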

  rdma.current
	A read-only file that describes current resource usage.
	It exists for all the cgroups except root.

	An example for mlx4 and ocrdma device follows::

	  mlx4_0 hca_handle=1 hca_object=20
	  ocrdma1 hca_handle=1 hca_object=23


Misc
----

perf_event
~~~~~~~~~~

perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path.  The controller can still be
moved to a legacy hierarchy after v2 hierarchy is populated.


Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP clone
flag can be used with clone(2) and unshare(2) to create a new cgroup
namespace.  The process running inside the cgroup namespace will have
its "/proc/$PID/cgroup" output restricted to cgroupns root.  The
cgroupns root is the cgroup of the process at the time of creation of
the cgroup namespace.
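
For example, a hedged sketch of detaching into a new cgroup namespace
from C (assumes a libc exposing CLONE_NEWCGROUP and a caller with the
required privileges)::

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          /* the caller's current cgroup becomes the cgroupns root */
          if (unshare(CLONE_NEWCGROUP) < 0) {
                  perror("unshare");
                  return 1;
          }
          /* paths in /proc/self/cgroup are now relative to that root */
          execlp("cat", "cat", "/proc/self/cgroup", (char *)NULL);
          perror("execlp");
          return 1;
  }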

Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process.  In a container setup where
a set of cgroups and namespaces are intended to isolate processes, the
"/proc/$PID/cgroup" file may leak potential system-level information
to the isolated processes.  For example::

  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

The path '/batchjobs/container_id1' can be considered system data
which is undesirable to expose to the isolated processes.  cgroup
namespace can be used to restrict visibility of this path.  For
example, before creating a cgroup namespace, one would see::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

After unsharing a new namespace, the view changes::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
  # cat /proc/self/cgroup
  0::/

When some thread from a multi-threaded process unshares its cgroup
namespace, the new cgroupns gets applied to the entire process (all
the threads).  This is natural for the v2 hierarchy; however, for the
legacy hierarchies, this may be unexpected.

A cgroup namespace is alive as long as there are processes inside or
mounts pinning it.  When the last usage goes away, the cgroup
namespace is destroyed.  The cgroupns root and the actual cgroups
remain.


The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
process calling unshare(2) is running.  For example, if a process in
/batchjobs/container_id1 cgroup calls unshare, cgroup
/batchjobs/container_id1 becomes the cgroupns root.  For the
init_cgroup_ns, this is the real root ('/') cgroup.

The cgroupns root cgroup does not change even if the namespace creator
process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of "/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown.  For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with '/' to indicate that it
is relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups.  For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged.  A task inside cgroup
namespace should only be exposed to its own cgroupns hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen with attaching to another cgroup
namespace.  It is expected that someone moves the attaching process
under the target cgroup namespace root.
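
In C, attaching to another cgroup namespace looks roughly like the
following hedged sketch (the target PID reuses 7353 from the examples
above and is hypothetical; conditions (a) and (b) above apply)::

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <sched.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          int fd = open("/proc/7353/ns/cgroup", O_RDONLY);

          if (fd < 0 || setns(fd, CLONE_NEWCGROUP) < 0) {
                  perror("setns to cgroup namespace");
                  return 1;
          }
          close(fd);
          /* the caller's own cgroup is unchanged; it still has to be
           * migrated under the target cgroupns root explicitly */
          return 0;
  }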


Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with cgroupns root as the
filesystem root.  The process needs CAP_SYS_ADMIN against its user and
mount namespaces.
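
The same mount can be issued programmatically with mount(2); a minimal
sketch with a hypothetical mount point::

  #include <stdio.h>
  #include <sys/mount.h>

  int main(void)
  {
          /* equivalent of: mount -t cgroup2 none /mnt/cgroup2 */
          if (mount("none", "/mnt/cgroup2", "cgroup2", 0, NULL) < 0) {
                  perror("mount cgroup2");
                  return 1;
          }
          return 0;
  }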

The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by a namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary.  cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bios using the
following two functions.

  wbc_init_bio(@wbc, @bio)
        Should be called for each bio carrying writeback data and
        associates the bio with the inode's owner cgroup.  Can be
        called anytime between bio allocation and submission.

  wbc_account_io(@wbc, @page, @bytes)
        Should be called for each data segment being written out.
        While this function doesn't care exactly when it's called
        during the writeback session, it's the easiest and most
        natural to call it as data segments are added to a bio.
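
As a hedged illustration only, a hypothetical filesystem's
->writepage() might annotate a single-page bio roughly as follows
(setup details and error handling elided; helper names as in the
block layer of this documentation's vintage)::

  static int foo_writepage(struct page *page, struct writeback_control *wbc)
  {
          struct bio *bio = bio_alloc(GFP_NOFS, 1);

          /* associate the bio with the inode's owner cgroup; legal
           * anytime between bio allocation and submission */
          wbc_init_bio(wbc, bio);

          bio_add_page(bio, page, PAGE_SIZE, 0);

          /* account the data segment as it is added to the bio */
          wbc_account_io(wbc, page, PAGE_SIZE);

          /* fill in bi_bdev, bi_iter, bi_end_io etc. and submit */
          submit_bio(WRITE, bio);
          return 0;
  }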

With writeback bios annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows for
selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

wbc_init_bio() binds the specified bio to its cgroup.  Depending on
the configuration, the bio may be executed at a lower priority and,
if the writeback session is holding shared resources, e.g. a journal
entry, this may lead to priority inversion.  There is no one easy
solution for the problem.  Filesystems can try to work around specific
problem cases by skipping wbc_init_bio() or using
bio_associate_blkcg() directly.


Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options are supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2.  Use the "cgroup.controllers"
  file at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers.  While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller, utility
type controllers such as freezer which can be useful in all
hierarchies could only be used in one.  The issue was exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated.  Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy.  It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy and most configurations resorted to putting
each controller on its own hierarchy.  Only closely related ones, such
as the cpu and cpuacct controllers, made sense to be put on the same
hierarchy.  This often meant that userland ended up managing multiple
similar hierarchies, repeating the same steps on each hierarchy
whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated the cgroup core implementation but, more
importantly, the support for multiple hierarchies restricted how
cgroup could be used in general and what controllers were able to do.

There was no limit on how many hierarchies there might be, which meant
that a thread's cgroup membership couldn't be described in finite
length.  The key might contain any number of entries and was unlimited
in length, which made it highly awkward to manipulate and led to the
addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of a proliferating
number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies.  This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary.  What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller.  In other words, the hierarchy
may be collapsed from leaf towards root when viewed from specific
controllers.  For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.


Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers and those controllers
ended up implementing different ways to ignore such situations but
much more importantly it blurred the line between API exposed to
individual applications and system management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity.  cgroups were delegated to
individual applications so that they could create and manage their own
sub-hierarchies and control resource distributions along them.  This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way.  For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path, open
and then read and/or write to it.  This is not only extremely clunky
and unusual but also inherently racy.  There is no conventional way to
define a transaction across the required steps and nothing can
guarantee that the process would actually be operating on its own
sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem.  cgroup ended up with interface
knobs which were not properly abstracted or refined and directly
revealed kernel internal details.  These knobs got exposed to
individual applications through the ill-defined delegation mechanism,
effectively abusing cgroup as a shortcut to implementing public APIs
without going through the required scrutiny.

This was painful for both userland and kernel.  Userland ended up with
misbehaving and poorly abstracted interfaces and the kernel
inadvertently exposed and became locked into constructs.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroup, which created an
interesting problem where threads belonging to a parent cgroup and its
children cgroups competed for resources.  This was nasty as two
different types of entities competed and there was no obvious way to
settle it.  Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights.  This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues.  The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads.  The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed.  While this allowed equivalent
control over internal threads, it came with serious drawbacks.  It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined.  There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with it; unfortunately, all the approaches were
severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from cgroup core
in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies.  One issue on the cgroup core side
was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event.  The event delivery wasn't
recursive or delegatable.  The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.

Controller interfaces were problematic too.  An extreme example is
controllers completely ignoring hierarchical organization and treating
all cgroups as if they were all located directly under the root
cgroup.  Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers.  When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured.  Configuration knobs for the same type of
control used widely differing naming schemes and formats.  Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and updates
controllers so that they expose minimal and consistent interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is by default unset.  As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out.  The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior.  First off, the soft limit has no
hierarchical meaning.  All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located in the hierarchy.  This makes subtree delegation impossible.
Second, the soft limit reclaim pass is so aggressive that it not just
introduces high allocation latencies into the system, but also impacts
system performance due to overreclaim, to the point where the feature
becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve.  A cgroup enjoys reclaim protection when it and all its
ancestors are below their low boundaries, which makes delegation of
subtrees possible.  Secondly, new cgroups have no reserve by default
and in the common case most cgroups are eligible for the preferred
reclaim pass.  This allows the new low boundary to be efficiently
implemented with just a minor addition to the generic reclaim code,
without the need for out-of-band data structures and reclaim passes.
Because the generic reclaim code considers all cgroups except for the
ones running low in the preferred first reclaim pass, overreclaim of
individual groups is eliminated as well, resulting in much better
overall workload performance.

The original high boundary, the hard limit, is defined as a strict
limit that cannot budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of the
available memory.  The memory consumption of workloads varies during
runtime, and that requires users to overcommit.  But doing that with a
strict upper limit requires either a fairly accurate prediction of the
working set size or adding slack to the limit.  Since working set size
estimation is hard and error prone, and getting it wrong results in
OOM kills, most users tend to err on the side of a looser limit and
end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively.  When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer.  As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation.  The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded.  But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than to kill the group.  Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail.  memory.max on the other hand will first set
the limit to prevent new charges, and then reclaim and OOM kill until
the new limit is met - or the task writing to memory.max is killed.

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration.  However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin cannot assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources.  Swap space is a resource like all others in the system,
and that's why unified hierarchy allows distributing it separately.