- admin_reserve_kbytes
- compact_memory
- compaction_proactiveness
- compact_unevictable_allowed
- dirty_background_bytes
- dirty_background_ratio
- dirty_bytes
- dirty_expire_centisecs
- dirty_ratio
- dirtytime_expire_seconds
- dirty_writeback_centisecs
- drop_caches
- extfrag_threshold
- highmem_is_dirtyable
- hugetlb_shm_group
- laptop_mode
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
- min_slab_ratio
- min_unmapped_ratio
- mmap_min_addr
- mmap_rnd_bits
- mmap_rnd_compat_bits
- nr_hugepages
- nr_hugepages_mempolicy
- nr_overcommit_hugepages
- nr_trim_pages (only if CONFIG_MMU=n)
- numa_zonelist_order
- oom_dump_tasks
- oom_kill_allocating_task
- overcommit_kbytes
- overcommit_memory
- overcommit_ratio
- page-cluster
- page_lock_unfairness
- panic_on_oom
- percpu_pagelist_high_fraction
- stat_interval
- stat_refresh
- numa_stat
- swappiness
- unprivileged_userfaultfd
- user_reserve_kbytes
- vfs_cache_pressure
- watermark_boost_factor
- watermark_scale_factor
- zone_reclaim_mode
Note that compaction has a non-trivial system-wide impact as pages
of a second. Data which has been dirty in-memory for longer than this
This is a non-destructive operation and will not free any dirty objects.
This file is not a means to control the growth of the various kernel caches
reclaim to satisfy a high-order allocation. The extfrag/extfrag_index file in
of memory, values towards 1000 imply failures are due to fragmentation and -1
storage more effectively. Note this also comes with a risk of premature
controlled by this knob are discussed in Documentation/admin-guide/laptops/laptop-mode.rst.
If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
(The same argument applies to the old 16 megabyte ISA DMA region. This
If you have a machine which uses highmem or ISA DMA and your
But these values are not used directly. The kernel calculates # of protection
pages for each zone from them. These are shown as an array of protection pages
in /proc/zoneinfo like the following. (This is an example from an x86-64 box.)
Each zone has an array of protection pages like this::
  Node 0, zone DMA
    protection: (0, 2004, 2004, 2004)
In this example, if normal pages (index=2) are required from this DMA zone and
watermark[WMARK_HIGH] is used for the watermark, the kernel judges this zone
should not be used because pages_free (1355) is smaller than
watermark + protection[2] (4 + 2004 = 2008). If this protection value is 0,
this zone would be used for a normal page request. If the request is for the
DMA zone (index=0), protection[0] (=0) is used.
zone[i]'s protection[j] is calculated by the following expression::

  (i < j):
    zone[i]->protection[j]
      = (total sums of managed_pages from zone[i+1] to zone[j] on the node)
        / lowmem_reserve_ratio[i]
  (i = j):
    (should not be protected. = 0)
  (i > j):
    (not necessary, but looks 0)

The default values of lowmem_reserve_ratio[i] are::

  256 (if zone[i] means DMA or DMA32 zone)
  32  (others)

As seen in the expression, these values are reciprocals of the ratio: 256
means 1/256, so # of protection pages becomes about "0.39%" of the total
managed pages of higher zones on the node.
The minimum value is 1 (1/1 -> 100%). A value less than 1 completely
disables protection of the pages.
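The calculation described above can be sketched as follows. This is an
illustrative model, not kernel code; the managed-page counts, watermark, and
ratios are made-up example values, chosen so the result matches the
protection array shown in the /proc/zoneinfo excerpt above.

```python
# Sketch (not kernel code) of how protection pages are derived from
# lowmem_reserve_ratio. All numbers below are illustrative.

# managed pages per zone on one node, lowest zone first (made-up values)
managed = {"DMA": 3977, "DMA32": 0, "Normal": 513057, "Movable": 0}
zones = list(managed)
ratio = [256, 256, 32, 0]  # typical lowmem_reserve_ratio defaults

def protection(i: int, j: int) -> int:
    """Pages zone i reserves against an allocation targeted at zone j."""
    if j <= i:
        return 0  # a zone does not protect itself or lower zones
    higher = sum(managed[zones[k]] for k in range(i + 1, j + 1))
    return higher // ratio[i]

# DMA zone (index 0) protecting itself from Normal-zone (index 2) requests:
print(protection(0, 2))  # -> 2004, matching the example array above

# The allocator then refuses to use zone i for a zone-j request when
# free pages < watermark + protection[j]:
free_pages, watermark = 1355, 4
print(free_pages < watermark + protection(0, 2))  # -> True, zone skipped
```

The `1355 < 4 + 2004 = 2008` comparison reproduces the worked example from
the text above.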
may have. Memory map areas are used as a side-effect of calling
Control how to kill processes when an uncorrected memory error (typically
no other up-to-date copy of the data it will kill to prevent any data
This ensures that the slab growth stays under control even in NUMA
against all file-backed unmapped pages including swapcache pages and tmpfs
See Documentation/admin-guide/mm/hugetlbpage.rst
Change the size of the hugepage pool at run-time on a specific
See Documentation/admin-guide/mm/hugetlbpage.rst
See Documentation/admin-guide/mm/hugetlbpage.rst
This value adjusts the excess page trimming behaviour of power-of-2 aligned
See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
In the non-NUMA case, a zonelist for GFP_KERNEL is ordered as follows::

  ZONE_NORMAL -> ZONE_DMA

Assume a 2-node NUMA system; Node(0)'s GFP_KERNEL zonelist can be ordered
either way::

  (A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
  (B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA
out-of-memory (OOM) in ZONE_DMA because ZONE_DMA tends to be small.
the DMA zone.
On 32-bit, the Normal zone needs to be preserved for allocations accessible
On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
Enables a system-wide task dump (excluding kernel threads) to be produced
when the kernel performs an OOM-killing and includes such information as
If this is set to non-zero, this information is shown whenever the
OOM killer actually kills a memory-hogging task.
This enables or disables killing the OOM-triggering task in
out-of-memory situations.
selects a rogue memory-hogging task that frees up a large amount of
If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition. This avoids the expensive
programs that malloc() huge amounts of memory "just-in-case"
See Documentation/mm/overcommit-accounting.rst and
page-cluster
============

page-cluster controls the number of pages up to which consecutive pages
are read in from swap in a single attempt. The mentioned consecutivity is
not in terms of virtual/physical addresses,
but consecutive on swap space - that means they were swapped out together.

It is a logarithmic value - setting it to zero means "1 page", setting
it to 1 means "2 pages", setting it to 2 means "4 pages", etc.
swap-intensive.
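The logarithmic encoding can be sketched as follows. The 4 KiB page size is
an assumption here; the actual page size is architecture-dependent.

```python
PAGE_SIZE = 4096  # bytes; assumed, typical on x86

def swap_readahead_pages(page_cluster: int) -> int:
    """Pages read per swap readahead attempt: 2**page_cluster."""
    return 2 ** page_cluster

# page-cluster 0 reads 1 page, 1 reads 2 pages, 3 reads 8 pages (32 KiB)
for pc in (0, 1, 2, 3):
    print(pc, swap_readahead_pages(pc),
          swap_readahead_pages(pc) * PAGE_SIZE)
```

So raising page-cluster by one doubles the amount of data pulled in per swap
readahead attempt.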
This enables or disables the panic on out-of-memory feature.
If this is set to 1, the kernel panics when out-of-memory happens.
may be killed by the oom-killer. No panic occurs in this case.
above-mentioned. Even if OOM happens under a memory cgroup, the whole
per-cpu page lists. It is an upper boundary that is divided depending
on per-cpu page lists. This entry only changes the value of hot per-cpu
each zone between per-cpu lists.
The batch value of each per-cpu page list remains the same regardless of
The initial value is zero. Kernel uses this value to set the pcp->high
Any read or write (by root only) flushes all the per-cpu vm statistics
As a side-effect, it also checks for negative totals (elsewhere reported
This control is used to define the rough relative IO cost of swapping
cache and swap-backed pages equally; lower values signify more
experimentation and will also be workload-dependent.
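As a rough illustration only: in recent kernels swappiness ranges from 0 to
200, and one simplified reading of the value is as a pair of reclaim weights
for anonymous (swap-backed) versus file-backed pages. This sketch is not the
kernel's actual scan-balancing code, which also accounts for the recently
observed reclaim cost of each page type.

```python
def reclaim_weights(swappiness: int) -> tuple:
    """Simplified model: (anon_weight, file_weight) for a swappiness value.

    Not the kernel's real heuristic; just shows the ratio idea, where 100
    means anon and file pages are assumed to cost the same IO to reclaim.
    """
    if not 0 <= swappiness <= 200:
        raise ValueError("swappiness must be within 0..200")
    return swappiness, 200 - swappiness

print(reclaim_weights(100))  # (100, 100): equal assumed IO cost
print(reclaim_weights(60))   # (60, 140): file pages reclaimed more eagerly
```

Under this model the traditional default of 60 biases reclaim toward the
page cache, while 200 would bias it entirely toward swapping anonymous pages.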
For in-memory swap, like zram or zswap, as well as hybrid setups that
file-backed pages is less than the high watermark in a zone.
Another way to control permissions for userfaultfd is to use
Documentation/admin-guide/mm/userfaultfd.rst.
lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
increase the success rate of future high-order allocations such as SLUB
worth of pages will be reclaimed (e.g. 2MB on 64-bit x86). A boost factor