==========================
Memory Resource Controller
==========================

The Memory Resource Controller has generically been referred to as the
memory controller in this document. Do not confuse the memory controller
used here with the memory controller that is used in hardware.
When we mention a cgroup (cgroupfs's directory) with memory controller,
we call it "memory cgroup". When you see git-log and source code, you'll
notice that it is often abbreviated as "memcg".
Benefits and Purpose of the memory controller
=============================================

The memory controller isolates the memory behaviour of a group of tasks
from the rest of the system. The article on LWN [12]_ mentions some probable
uses of the memory controller. The memory controller can be used to:

a. Isolate an application or a group of applications.
   Memory-hungry applications can be isolated and limited to a smaller
   amount of memory.
d. A CD/DVD burner could control the amount of memory used by the
   rest of the system to ensure that burning does not fail due to lack
   of available memory.
e. There are several other use cases; find one or use the controller just
   for fun (to learn and hack on the VM subsystem).

Current Status: linux-2.6.34-mmotm (development version of 2010/April)
Features:

- accounting of anonymous pages, file caches and swap caches, and limiting
  their usage.
- pages are linked to per-memcg LRUs exclusively; there is no global LRU.
- optionally, memory+swap usage can be accounted and limited.
- hierarchical accounting.
- soft limits.
- moving (recharging) the account when a task moves between cgroups is
  selectable.
- usage threshold notifier.
- memory pressure notifier.
- oom-killer disable knob and oom-notifier.
- the root cgroup has no limit controls.

Kernel memory support is a work in progress (see :ref:`Section 2.7
<cgroup-v1-memory-kernel-extension>`).
The memory controller has a long history. A request for comments for the memory
controller was posted by Balbir Singh [1]_. At the time the RFC was posted,
there were several implementations for memory control, and the goal of the RFC
was to build consensus on the minimal features required
for memory control. The first RSS controller was posted by Balbir Singh [2]_;
Pavel Emelianov later posted several revisions of it [4]_ [5]_.
At OLS, at the resource management BoF, everyone
suggested that we handle both page cache and RSS together. Another request was
raised to allow user space handling of OOM. The current memory controller
combines both mapped (RSS) and unmapped Page
Cache Control [11]_.
The memory controller implementation has been divided into phases. These
are:

1. Memory controller
2. mlock(2) controller
3. Kernel user memory accounting and slab control
4. user mappings length controller

The memory controller is the first controller developed.
2.1. Design
-----------

The core of the design is a counter called the page_counter. The page_counter
tracks the current memory usage and limit of the group of
processes associated with the controller. Each cgroup has a memory controller
specific data structure (mem_cgroup) associated with it.
2.2. Accounting
---------------
.. code-block::

  Figure 1: Hierarchy of Accounting

  (diagram: the mem_cgroup with its page_counter sits at the top; each
  mm_struct is associated with a mem_cgroup, and each page points to a
  page_cgroup that records which cgroup the page is charged to)
Figure 1 shows the important aspects of the controller.
If everything goes well, a page meta-data-structure called page_cgroup is
updated. page_cgroup has its own LRU on cgroup.
(*) The page_cgroup structure is allocated at boot/memory-hotplug time.
2.2.1 Accounting details
------------------------

All mapped anon pages (RSS) and cache pages (Page Cache) are accounted.

RSS pages are accounted at page fault time unless they've already been
accounted for earlier. A file page will be accounted for as Page Cache when
it's inserted into the inode's page cache (xarray).

Even if RSS pages are fully
unmapped (by kswapd), they may exist as SwapCache in the system until they
are really freed; such SwapCache pages are also accounted.
A swapped-in page is accounted after it is added to the swap cache.

Note: The kernel does swapin readahead and may read multiple swap entries
at once.

Note: we just account pages on the LRU because our purpose is to control the
amount of used pages; pages that are not on the LRU tend to be out of the
VM's control.
2.3 Shared Page Accounting
--------------------------

Shared pages are accounted on the basis of the first-touch approach: the
cgroup that first touches a page is charged for it. A cgroup that
aggressively uses a shared page will eventually get charged for it (once
the page is uncharged from
the cgroup that brought it in -- this will happen on memory pressure).

But see :ref:`section 8.2 <cgroup-v1-memory-movable-charges>`: when moving a
task to another cgroup, its pages may be recharged to the new cgroup, if
move_charge_at_immigrate has been chosen.
2.4 Swap Extension
--------------------------------------

Swap usage is always recorded for each cgroup. The Swap Extension allows you
to read and limit it. When CONFIG_SWAP is enabled, the following files are
added:

- memory.memsw.usage_in_bytes.
- memory.memsw.limit_in_bytes.
memsw means memory+swap. Usage of memory+swap is limited by
memsw.limit_in_bytes.

Example: Assume a system with 4G of swap. A task which allocates 6G of memory
(by mistake) under a 2G memory limit will use up all of the swap.
In this case, setting memsw.limit_in_bytes=3G will prevent this bad use of
swap. By using the memsw limit, you can avoid a system OOM which can be
caused by swap shortage.
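A minimal sketch of that setup (the mount point /sys/fs/cgroup/memory and the
cgroup name "0" are assumptions for the example)::

  # echo 2G > /sys/fs/cgroup/memory/0/memory.limit_in_bytes
  # echo 3G > /sys/fs/cgroup/memory/0/memory.memsw.limit_in_bytes

With these two limits the group may use at most 2G of memory and at most 3G
of memory+swap, so no more than 1G of swap can be consumed.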
The global LRU (kswapd) can swap out arbitrary pages. Swapping a page out
only moves its charge from memory to swap; there is no change in the usage
of memory+swap.

When a cgroup hits memory.memsw.limit_in_bytes, it is useless to do swap-out
from this cgroup. Swap-out will then not be done by the cgroup's own reclaim;
file caches are dropped instead. The global LRU, however, can still swap out
memory from it for the sanity of the system's memory management state; you
cannot forbid it by cgroup.
2.5 Reclaim
-----------

When a cgroup goes over its limit, we first try to reclaim memory from the
cgroup so as to make space for the new pages the cgroup has touched. If the
reclaim is unsuccessful, an OOM routine is invoked to select and kill the
bulkiest task in the
cgroup. (See :ref:`10. OOM Control <cgroup-v1-memory-oom-control>` below.)

The reclaim algorithm has not been modified for cgroups, except that the
pages that are selected for reclaiming come from the per-cgroup LRU
list.

When panic_on_oom is set to "2", the whole system will panic.

When an OOM event notifier is registered, the event will be delivered.
(See the :ref:`oom_control <cgroup-v1-memory-oom-control>` section.)
2.6 Locking
-----------

Lock order is as follows::

  folio_lock
    mm->page_table_lock or split pte_lock
      folio_memcg_lock (memcg->move_lock)
        mapping->i_pages lock
          lruvec->lru_lock.

Per-node-per-memcgroup LRU (cgroup's private LRU) is guarded by
lruvec->lru_lock; the folio LRU flag is cleared before
isolating a page from its LRU under lruvec->lru_lock.
.. _cgroup-v1-memory-kernel-extension:

2.7 Kernel Memory Extension
-----------------------------------------------

With the Kernel memory extension, the Memory Controller is able to limit
the amount of kernel memory used by the system. Kernel memory is fundamentally
different from user memory, since it can't be swapped out, which makes it
possible to DoS the system by consuming too much of this precious resource.

Kernel memory accounting is enabled for all memory cgroups by default, but
it can be disabled system-wide by passing cgroup.memory=nokmem to the kernel
at boot time.
2.7.1 Current Kernel Memory resources accounted
-----------------------------------------------

slab pages:
  pages allocated by the SLAB or SLUB allocator are tracked. A copy of each
  kmem_cache is created every time the cache is touched for the first time
  from inside the memcg. The creation is done lazily, so some objects can
  still be skipped while the cache is being created. All objects in a slab
  page should belong to the same memcg; this only fails to hold when a task
  is migrated to a different memcg during a page allocation by the cache.

sockets memory pressure:
  some socket protocols have memory pressure thresholds. The Memory
  Controller allows them to be controlled individually per cgroup, instead
  of globally.
2.7.2 Common use cases
----------------------

Setting the kernel memory limit below the user memory limit (so that kernel
memory is a subset of user memory) is useful in
deployments where the total amount of memory per-cgroup is overcommitted.
Overcommitting kernel memory limits is definitely not recommended, since the
box can still run out of non-reclaimable memory.
3.1. Prepare the cgroups (see :ref:`Why are cgroups needed?
<cgroups-why-needed>` for the background information)::

  # mount -t tmpfs none /sys/fs/cgroup
  # mkdir /sys/fs/cgroup/memory
  # mount -t cgroup none /sys/fs/cgroup/memory -o memory
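A new group can then be created under that hierarchy and given a limit; a
minimal sketch (the group name "0" and the 4M value are arbitrary)::

  # mkdir /sys/fs/cgroup/memory/0
  # echo $$ > /sys/fs/cgroup/memory/0/tasks
  # echo 4M > /sys/fs/cgroup/memory/0/memory.limit_in_bytes
  # cat /sys/fs/cgroup/memory/0/memory.limit_in_bytes
  4194304

Writing the shell's PID to the group's tasks file moves the shell (and its
future children) into the group, so subsequent allocations are charged there.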
We can write "-1" to reset ``*.limit_in_bytes`` (unlimited).

A successful write to memory.limit_in_bytes does not guarantee that the limit
was set to exactly the value written. This can be due to a number of factors,
such as rounding up to page boundaries or the total
availability of memory on the system. The user is required to re-read
the file after a write to see the value actually committed by the kernel.
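For example, on a system with 4 KiB pages (illustrative values)::

  # echo 1 > memory.limit_in_bytes
  # cat memory.limit_in_bytes
  4096
  # echo -1 > memory.limit_in_bytes
  # cat memory.limit_in_bytes
  9223372036854771712

The 1-byte request is rounded up to one page, and "-1" restores the
"unlimited" default (PAGE_COUNTER_MAX scaled to bytes).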
Performance testing is also important; to see the pure overhead of the memory
controller, testing on tmpfs is useful.

Page-fault scalability is also important. When measuring parallel
page-fault performance, a multi-process test may be better than a
multi-threaded one, because the latter adds noise from shared objects and
state.

Running your usual tests with the memory controller enabled is always helpful.
.. _cgroup-v1-memory-test-troubleshoot:

4.1 Troubleshooting
-------------------

A sync followed by ``echo 1 > /proc/sys/vm/drop_caches`` will help get rid of
some of the pages cached in the cgroup (page cache pages).

To know what happens, disabling the OOM killer as described in :ref:`10. OOM
Control <cgroup-v1-memory-oom-control>` (below) and seeing what happens will
be helpful.
.. _cgroup-v1-memory-test-task-migration:

4.2 Task migration
------------------

See :ref:`8. "Move charges at task migration" <cgroup-v1-memory-move-charges>`.
4.3 Removing a cgroup
---------------------

A cgroup can be removed by rmdir, but as discussed in :ref:`sections 4.1
<cgroup-v1-memory-test-troubleshoot>` and :ref:`4.2
<cgroup-v1-memory-test-task-migration>`, a cgroup might have some charge
associated with it even though all tasks have migrated away from it (because
we charge against pages, not against tasks).
5.1 force_empty
---------------

Though rmdir() offlines the memcg, it may still stay there due to
charged file caches. Some out-of-use page caches may stay charged until
memory pressure happens; if you want to avoid that, force_empty is useful.
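The typical use is to drop those charges just before removing the group;
roughly (paths as in the earlier examples)::

  # echo 0 > /sys/fs/cgroup/memory/0/memory.force_empty
  # rmdir /sys/fs/cgroup/memory/0

Writing any value to memory.force_empty makes the kernel reclaim as many
pages from the group as possible.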
5.2 stat file
-------------

The memory.stat file includes the following statistics:

* per-memory cgroup local status

  =============== ==============================================================
  cache           # of bytes of page cache memory.
  rss             # of bytes of anonymous and swap cache memory (includes
                  transparent hugepages).
  pgpgin          # of charging events to the memory cgroup. The charging
                  event happens each time a page is accounted as either a
                  mapped anon page (RSS) or a cache page (Page Cache) to the
                  cgroup.
  writeback       # of bytes of file/anon cache that are queued for syncing
                  to disk.
  inactive_anon   # of bytes of anonymous and swap cache memory on the
                  inactive LRU list.
  active_anon     # of bytes of anonymous and swap cache memory on the
                  active LRU list.
  inactive_file   # of bytes of file-backed memory and MADV_FREE anonymous
                  memory (LazyFree pages) on the inactive LRU list.
  active_file     # of bytes of file-backed memory on the active LRU list.
  =============== ==============================================================

Only anonymous and swap cache memory is listed as part of the 'rss' stat.
This should not be confused with the true 'resident set size' or the amount
of physical memory used by the cgroup.

(Note: file and shmem pages may be shared among other cgroups. In that case,
mapped_file is accounted only when the memory cgroup is the owner of the page
cache.)
5.3 swappiness
--------------

5.4 failcnt
-----------

5.5 usage_in_bytes
------------------

If you want to know the exact memory usage, you should use the
RSS+CACHE(+SWAP) value in memory.stat (see 5.2).
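A rough way to pull that figure out of the stat file (a sketch; the field
names are those listed in section 5.2, and "swap" only appears when swap
accounting is enabled)::

  # awk '$1=="rss" || $1=="cache" || $1=="swap" {sum += $2} END {print sum}' memory.stat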
5.6 numa_stat
-------------

This is similar to numa_maps but operates on a per-memcg basis. It is
useful for providing visibility into the NUMA locality of the memory used by
a memcg. Each memcg's numa_stat file includes
per-node page counts, including "hierarchical_<counter>" entries which sum up
all of the hierarchical children's values in addition to the memcg's own value.
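The output format is roughly the following (counts are in pages; the exact
set of counters depends on the kernel version)::

  total=<total pages> N0=<node 0 pages> N1=<node 1 pages> ...
  file=<total file pages> N0=<node 0 pages> N1=<node 1 pages> ...
  anon=<total anon pages> N0=<node 0 pages> N1=<node 1 pages> ...
  unevictable=<total unevictable pages> N0=<node 0 pages> N1=<node 1 pages> ...
  hierarchical_<counter>=<counter pages> N0=<node 0 pages> N1=<node 1 pages> ...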
The memory controller supports a deep hierarchy and hierarchical accounting.

6.1 Hierarchical accounting and reclaim
---------------------------------------

7. Soft limits
==============

When the system detects memory contention or low memory, control groups
are pushed back to their soft limits.

Please note that soft limits are a best-effort feature: they come with
no guarantees, but the kernel does its best to make sure that, when memory
is heavily contended for, memory is allocated based on the soft limit hints.

7.1 Interface
-------------
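A soft limit is set by writing the desired value to memory.soft_limit_in_bytes
(the usual k/K/m/M/g/G suffixes are accepted); for example, to set a soft
limit of 256 MiB::

  # echo 256M > memory.soft_limit_in_bytes
  # cat memory.soft_limit_in_bytes
  268435456

As with the hard limit, the value can be changed at any time by writing a new
value to the file.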
.. _cgroup-v1-memory-move-charges:

8. Move charges at task migration
=================================

This feature is deprecated. It is better practice to launch workload tasks
directly inside their target cgroup, and to use dedicated workload
cgroups to allow fine-grained policy adjustments without having to
move physical pages between control domains.
8.1 Interface
-------------

This feature is disabled by default. It can be enabled (and disabled again)
by writing to memory.move_charge_at_immigrate of the destination cgroup.
Each bit of move_charge_at_immigrate selects a type of charge to be moved;
see :ref:`section 8.2
<cgroup-v1-memory-movable-charges>` for details.

Charges are moved only when you move mm->owner, in other words,
a leader of a thread group.
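A minimal sketch of the interface (run in the destination cgroup's directory;
the bit values are explained in the next section)::

  # echo 1 > memory.move_charge_at_immigrate

Writing 0 disables charge moving again.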
.. _cgroup-v1-memory-movable-charges:

8.2 Type of charges which can be moved
--------------------------------------

Each bit in move_charge_at_immigrate has its own meaning about what type of
charges should be moved:

- bit 0: charges of anonymous pages (and swap of them) used by the target
  task.
- bit 1: charges of file pages (and swap of tmpfs files) mmapped by the
  target task.

In any case, a charge can be moved only if it is currently charged to the
task's old (source) memory cgroup.
8.3 TODO
--------

- All of the charge-moving operations are done under cgroup_mutex. It's not
  good behavior to hold the mutex too long, so we may need some trick.
9. Memory thresholds
====================

Memory thresholds are implemented using the cgroups notification API. To
register a threshold, an application must:

- create an eventfd using eventfd(2);
- open memory.usage_in_bytes or memory.memsw.usage_in_bytes;
- write a string like "<event_fd> <fd of memory.usage_in_bytes> <threshold>"
  to cgroup.event_control.

The application will be notified through the eventfd when memory usage
crosses the threshold in either direction. This is applicable to both root
and non-root cgroups.
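In C, those three steps map onto a handful of system calls. A minimal sketch,
assuming the v1 memory hierarchy is mounted at /sys/fs/cgroup/memory and a
cgroup named "0" already exists (both paths are assumptions for the example)::

  /* threshold.c -- sketch of the registration steps above. */
  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/eventfd.h>

  int main(void)
  {
          const char *cg = "/sys/fs/cgroup/memory/0";   /* assumed cgroup path */
          char path[256], cmd[64];
          uint64_t hits;
          int efd, ufd, cfd;

          efd = eventfd(0, 0);                           /* 1. create an eventfd */

          snprintf(path, sizeof(path), "%s/memory.usage_in_bytes", cg);
          ufd = open(path, O_RDONLY);                    /* 2. open the usage file */

          snprintf(path, sizeof(path), "%s/cgroup.event_control", cg);
          cfd = open(path, O_WRONLY);

          if (efd < 0 || ufd < 0 || cfd < 0) {
                  perror("setup");
                  return 1;
          }

          /* 3. "<event_fd> <fd of memory.usage_in_bytes> <threshold>" */
          snprintf(cmd, sizeof(cmd), "%d %d %llu", efd, ufd,
                   (unsigned long long)(64 << 20));      /* 64M threshold */
          if (write(cfd, cmd, strlen(cmd)) < 0) {
                  perror("cgroup.event_control");
                  return 1;
          }

          /* Block until the kernel reports that usage crossed the threshold. */
          if (read(efd, &hits, sizeof(hits)) == sizeof(hits))
                  printf("threshold crossed (%llu event(s))\n",
                         (unsigned long long)hits);
          return 0;
  }

Because the eventfd read blocks until the kernel signals it, a monitoring
daemon can also poll()/select() on the eventfd alongside its other file
descriptors.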
.. _cgroup-v1-memory-oom-control:

10. OOM Control
===============

memory.oom_control is used for OOM notification and other controls. To
register a notifier, an application must:

- create an eventfd using eventfd(2);
- open the memory.oom_control file;
- write a string like "<event_fd> <fd of memory.oom_control>" to
  cgroup.event_control.

The application will be notified through the eventfd when an OOM happens.
You can disable the OOM-killer by writing "1" to the memory.oom_control file.
If the OOM-killer is disabled, tasks under the cgroup will hang/sleep
on the memory cgroup's OOM waitqueue when they request accountable memory.
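For example (a sketch, run from inside the cgroup's directory; the counter
values are illustrative)::

  # echo 1 > memory.oom_control
  # cat memory.oom_control
  oom_kill_disable 1
  under_oom 0
  oom_kill 0

To let the blocked tasks run again, enlarge the limit or reduce the group's
usage (or kill some tasks) so that memory can be charged again.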
When read, the file shows the current status of the cgroup:

- oom_kill_disable 0 or 1
  (if 1, the oom-killer is disabled)
- under_oom 0 or 1
  (if 1, the memory cgroup is under OOM and tasks may be stopped)
- oom_kill integer counter
  (the number of processes belonging to this cgroup killed by any kind of
  OOM killer)
913 The "low" level means that the system is reclaiming memory for new
915 maintaining cache level. Upon notification, the program (typically
919 The "medium" level means that the system is experiencing medium memory
920 pressure, the system might be making swap, paging out active file caches,
923 resources that can be easily reconstructed or re-read from a disk.
925 The "critical" level means that the system is actively thrashing, it is
926 about to out of memory (OOM) or even the in-kernel OOM killer is on its
928 system. It might be too late to consult with vmstat or any other
By default, events are propagated upward until the event is handled, i.e. the
events are not pass-through. For example, say you have three cgroups: A->B->C.
Now you set up an event listener on cgroups A, B and C, and suppose group C
experiences some pressure. In this situation, only group C will receive the
notification, i.e. groups A and B will not receive it. This is done to avoid
excessive "broadcasting" of messages, which disturbs the system and which is
especially bad if we are low on memory or thrashing.
942 - "default": this is the default behavior specified above. This mode is the
946 - "hierarchy": events always propagate up to the root, similar to the default
951 - "local": events are pass-through, i.e. they only receive notifications when
The level and event notification mode ("hierarchy" or "local", if necessary)
are specified by a comma-delimited string, e.g. "low,hierarchy" specifies
hierarchical, pass-through notification for all ancestor memcgs. A
notification that uses the default, non-pass-through behavior does not
specify a mode. "medium,local" specifies pass-through notification for the
medium level.
To register a notification, an application must:

- create an eventfd using eventfd(2);
- open memory.pressure_level;
- write a string like "<event_fd> <fd of memory.pressure_level> <level[,mode]>"
  to cgroup.event_control.

The application will be notified through the eventfd when memory pressure is
at the specified level (or higher).
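As a quick test (a sketch; it assumes the cgroup_event_listener helper from
the kernel's tools/cgroup directory is built and in PATH), one can create a
cgroup, register a "low" listener, set a small limit and then generate
pressure::

  # cd /sys/fs/cgroup/memory/
  # mkdir foo
  # cd foo
  # cgroup_event_listener memory.pressure_level low &
  # echo 8000000 > memory.limit_in_bytes
  # echo 8000000 > memory.memsw.limit_in_bytes
  # echo $$ > tasks
  # dd if=/dev/zero | read x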
(Expect a bunch of notifications, and eventually, the oom-killer will
trigger.)
1. Make the per-cgroup scanner reclaim not-shared pages first
2. Teach the controller to account for shared pages

Overall, the memory controller has been a stable controller and has been
commented on and discussed quite extensively in the community.
References
==========

.. [1] Singh, Balbir. RFC: Memory Controller, http://lwn.net/Articles/206697/
.. [2] Singh, Balbir. Memory Controller (RSS Control),
.. [4] Emelianov, Pavel. RSS controller based on process cgroups (v2)
.. [5] Emelianov, Pavel. RSS controller based on process cgroups (v3)

8. Singh, Balbir. RSS controller v2 test results (lmbench),
9. Singh, Balbir. RSS controller v2 AIM9 results
10. Singh, Balbir. Memory controller v6 test results,
    https://lore.kernel.org/r/20070819094658.654.84837.sendpatchset@balbir-laptop

.. [11] Singh, Balbir. Memory controller introduction (v6),
        https://lore.kernel.org/r/20070817084228.26003.12568.sendpatchset@balbir-laptop