Lines matching "restricted-dma-pool"
------------------------------------------------------------------------------
- admin_reserve_kbytes
- compact_memory
- compaction_proactiveness
- compact_unevictable_allowed
- dirty_background_bytes
- dirty_background_ratio
- dirty_bytes
- dirty_expire_centisecs
- dirty_ratio
- dirtytime_expire_seconds
- dirty_writeback_centisecs
- drop_caches
- extfrag_threshold
- highmem_is_dirtyable
- hugetlb_shm_group
- laptop_mode
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
- min_slab_ratio
- min_unmapped_ratio
- mmap_min_addr
- mmap_rnd_bits
- mmap_rnd_compat_bits
- nr_hugepages
- nr_hugepages_mempolicy
- nr_overcommit_hugepages
- nr_trim_pages (only if CONFIG_MMU=n)
- numa_zonelist_order
- oom_dump_tasks
- oom_kill_allocating_task
- overcommit_kbytes
- overcommit_memory
- overcommit_ratio
- page-cluster
- panic_on_oom
- percpu_pagelist_fraction
- stat_interval
- stat_refresh
- numa_stat
- swappiness
- unprivileged_userfaultfd
- user_reserve_kbytes
- vfs_cache_pressure
- watermark_boost_factor
- watermark_scale_factor
- zone_reclaim_mode
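Every tunable in the list above is exposed as a file under /proc/sys/vm/ and, equivalently, under the ``vm.`` prefix to sysctl(8). A minimal sketch of the name-to-path mapping; the helper function is illustrative, not part of any real tool:

```shell
# Map a dotted sysctl name (e.g. vm.dirty_ratio) to its /proc/sys path.
# Illustrative helper, not part of procps or the kernel.
sysctl_path() {
    echo "/proc/sys/$(echo "$1" | tr . /)"
}

sysctl_path vm.dirty_ratio    # -> /proc/sys/vm/dirty_ratio
# Read:          cat "$(sysctl_path vm.dirty_ratio)"  or  sysctl -n vm.dirty_ratio
# Write (root):  sysctl -w vm.dirty_ratio=20
```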
Note that compaction has a non-trivial system-wide impact as pages
of a second. Data which has been dirty in-memory for longer than this
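The interval is expressed in hundredths of a second; a quick conversion sketch, assuming the conventional default of 3000 for dirty_expire_centisecs:

```shell
centisecs=3000                    # assumed default dirty_expire_centisecs
echo "$(( centisecs / 100 )) s"   # -> 30 s before dirty data is eligible for writeback
```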
This is a non-destructive operation and will not free any dirty objects.
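Because dirty objects are not freed, it is common to run sync first so that more of the cache is clean and therefore droppable. A hedged sketch; the helper name is made up, and only the echo-into-/proc line is the real interface (writing it requires root):

```shell
# Build the command for a given drop_caches value:
#   1 = page cache, 2 = reclaimable slab objects (dentries, inodes), 3 = both.
# Illustrative helper, not a real utility.
drop_caches_cmd() {
    case "$1" in
        1|2|3) echo "sync && echo $1 > /proc/sys/vm/drop_caches" ;;
        *)     echo "invalid value: $1" >&2; return 1 ;;
    esac
}

drop_caches_cmd 3    # -> sync && echo 3 > /proc/sys/vm/drop_caches
```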
reclaim to satisfy a high-order allocation. The extfrag/extfrag_index file in
of memory, values towards 1000 imply failures are due to fragmentation and -1
storage more effectively. Note this also comes with a risk of premature
controlled by this knob are discussed in Documentation/admin-guide/laptops/laptop-mode.rst.
If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
(The same argument applies to the old 16 megabyte ISA DMA region. This
If you have a machine which uses highmem or ISA DMA and your
in /proc/zoneinfo as follows. (This is an example from an x86-64 box.)
Node 0, zone DMA
In this example, if normal pages (index=2) are requested from this DMA zone and
the normal page requirement. If the request is for the DMA zone (index=0), protection[0]
zone[i]->protection[j]
256 (if zone[i] means DMA or DMA32 zone)
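The arithmetic behind a protection[] entry can be sketched as follows; the page count is made up, and 256 is the conventional lowmem_reserve_ratio entry for the DMA/DMA32 zones:

```shell
# zone[i]->protection[j] = (sum of managed_pages of zones i+1 .. j) / ratio[i]
ratio=256                 # lowmem_reserve_ratio entry for a DMA/DMA32 zone
normal_pages=1000000      # hypothetical managed pages in the higher zones
echo $(( normal_pages / ratio ))   # -> 3906 pages of DMA held in reserve
```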
The minimum value is 1 (1/1 -> 100%). A value less than 1 completely
may have. Memory map areas are used as a side-effect of calling
against all file-backed unmapped pages including swapcache pages and tmpfs
be restricted from mmapping. Since kernel null dereference bugs could
Change the minimum size of the hugepage pool.
See Documentation/admin-guide/mm/hugetlbpage.rst
Change the size of the hugepage pool at run-time on a specific
See Documentation/admin-guide/mm/hugetlbpage.rst
Change the maximum size of the hugepage pool. The maximum is
See Documentation/admin-guide/mm/hugetlbpage.rst
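A quick sizing sketch for the persistent hugepage pool; the pool size and the 2 MiB page size are assumptions here, and the real size is architecture-dependent (reported as Hugepagesize in /proc/meminfo):

```shell
nr_hugepages=512          # hypothetical pool size
hugepage_kb=2048          # assumed 2 MiB hugepages (x86-64 default)
echo "$(( nr_hugepages * hugepage_kb / 1024 )) MiB"   # -> 1024 MiB reserved
# Resize the pool (root):  echo 512 > /proc/sys/vm/nr_hugepages
```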
This value adjusts the excess page trimming behaviour of power-of-2 aligned
See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
In the non-NUMA case, the zonelist for GFP_KERNEL is ordered as follows.
ZONE_NORMAL -> ZONE_DMA
(A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
(B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA.
out-of-memory (OOM) in ZONE_DMA because ZONE_DMA tends to be small.
the DMA zone.
On 32-bit, the Normal zone needs to be preserved for allocations accessible
On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
Enables a system-wide task dump (excluding kernel threads) to be produced
when the kernel invokes the OOM killer, and includes such information as
If this is set to non-zero, this information is shown whenever the
OOM killer actually kills a memory-hogging task.
This enables or disables killing the OOM-triggering task in
out-of-memory situations.
selects a rogue memory-hogging task that frees up a large amount of
If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition. This avoids the expensive
programs that malloc() huge amounts of memory "just-in-case"
See Documentation/vm/overcommit-accounting.rst and
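Under strict accounting (overcommit_memory=2) the commit limit follows the formula in overcommit-accounting.rst; a sketch with made-up sizes:

```shell
#   CommitLimit = MemTotal * overcommit_ratio / 100 + SwapTotal
# (a nonzero overcommit_kbytes replaces the ratio-based term)
mem_kb=8388608            # hypothetical 8 GiB of RAM
swap_kb=2097152           # hypothetical 2 GiB of swap
ratio=50                  # default overcommit_ratio
echo "$(( mem_kb * ratio / 100 + swap_kb )) kB"   # -> 6291456 kB
```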
page-cluster
page-cluster controls the number of pages up to which consecutive pages
but consecutive on swap space - that means they were swapped out together.
It is a logarithmic value - setting it to zero means "1 page", setting
swap-intensive.
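Since the value is a logarithm, the number of pages read together is 2 raised to it; for the common default of 3:

```shell
page_cluster=3                  # assumed default page-cluster value
echo $(( 1 << page_cluster ))   # -> 8 consecutive pages per swap read
# Disable swap readahead entirely (root):  sysctl -w vm.page-cluster=0
```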
This enables or disables the panic-on-out-of-memory feature.
If this is set to 1, the kernel panics when out-of-memory happens.
may be killed by the oom-killer. No panic occurs in this case.
above-mentioned. Even if OOM happens under a memory cgroup, the whole
This is the fraction of pages at most (high mark pcp->high) in each zone that
set to pcp->high/4. The upper limit of batch is (PAGE_SHIFT * 8)
Any read or write (by root only) flushes all the per-cpu vm statistics
As a side-effect, it also checks for negative totals (elsewhere reported
cache and swap-backed pages equally; lower values signify more
experimentation and will also be workload-dependent.
For in-memory swap, like zram or zswap, as well as hybrid setups that
file-backed pages is less than the high watermark in a zone.
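The accepted range is 0 to 200 (values above 100 only make sense when a swap-in is cheaper than re-reading a file page, as with zram or zswap). A small validation sketch, with an illustrative helper name:

```shell
# Check that a proposed swappiness value is within the kernel's accepted range.
# Illustrative helper, not a real utility.
valid_swappiness() {
    [ "$1" -ge 0 ] && [ "$1" -le 200 ] && echo ok || echo out-of-range
}

valid_swappiness 150    # -> ok  (reasonable for in-memory swap)
valid_swappiness 250    # -> out-of-range
```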
lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
increase the success rate of future high-order allocations such as SLUB
(e.g. 2MB on 64-bit x86). A boost factor of 0 will disable the feature.
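The factor is expressed in units of 10,000, so the default of 15,000 boosts a zone's high watermark by 150%; a sketch with a made-up watermark:

```shell
factor=15000              # default watermark_boost_factor (150%)
high_wmark=4096           # hypothetical high watermark, in pages
echo $(( high_wmark * factor / 10000 ))   # -> 6144 extra pages reclaimed
```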