===============================
Documentation for /proc/sys/vm/
===============================

For general info and legal blurb, please look in index.rst.

------------------------------------------------------------------------------

This file contains the documentation for the sysctl files in
/proc/sys/vm and is valid for Linux kernel version 2.6.29.

The files in this directory can be used to tune the operation
of the virtual memory (VM) subsystem of the Linux kernel and
the writeout of dirty data to disk.

Default values and initialization routines for most of these
files can be found in mm/swap.c.

Currently, these files are in /proc/sys/vm:
- admin_reserve_kbytes
- compact_memory
- compaction_proactiveness
- compact_unevictable_allowed
- dirty_background_bytes
- dirty_background_ratio
- dirty_bytes
- dirty_expire_centisecs
- dirty_ratio
- dirtytime_expire_seconds
- dirty_writeback_centisecs
- drop_caches
- enable_soft_offline
- extfrag_threshold
- highmem_is_dirtyable
- hugetlb_shm_group
- laptop_mode
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
- mem_profiling (only if CONFIG_MEM_ALLOC_PROFILING=y)
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
- min_slab_ratio
- min_unmapped_ratio
- mmap_min_addr
- mmap_rnd_bits
- mmap_rnd_compat_bits
- nr_hugepages
- nr_hugepages_mempolicy
- nr_overcommit_hugepages
- nr_trim_pages (only if CONFIG_MMU=n)
- numa_zonelist_order
- oom_dump_tasks
- oom_kill_allocating_task
- overcommit_kbytes
- overcommit_memory
- overcommit_ratio
- page-cluster
- page_lock_unfairness
- panic_on_oom
- percpu_pagelist_high_fraction
- stat_interval
- stat_refresh
- numa_stat
- swappiness
- unprivileged_userfaultfd
- user_reserve_kbytes
- vfs_cache_pressure
- watermark_boost_factor
- watermark_scale_factor
- zone_reclaim_mode
admin_reserve_kbytes
====================

The amount of free memory in the system that should be reserved for users
with the capability cap_sys_admin.

admin_reserve_kbytes defaults to min(3% of free pages, 8MB).

That should provide enough for the admin to log in and kill a process,
if necessary, under the default overcommit 'guess' mode.

Systems running under overcommit 'never' should increase this to account
for the full Virtual Memory Size of programs used to recover. Otherwise,
root may not be able to log in to recover the system.

Changing this takes effect whenever an application requests memory.
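As an illustration (the numeric value below is an arbitrary example, not a
recommendation), the reserve can be inspected and changed at runtime::

    # read the current reserve, in kilobytes
    cat /proc/sys/vm/admin_reserve_kbytes
    # raise the reserve, e.g. when running in overcommit 'never' mode
    echo 131072 > /proc/sys/vm/admin_reserve_kbytes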
compact_memory
==============

Available only when CONFIG_COMPACTION is set. When 1 is written to the file,
all zones are compacted such that free memory is available in contiguous
blocks where possible. This can be important for example in the allocation of
huge pages although processes will also directly compact memory as required.
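For example, compaction of all zones can be triggered manually; on a large,
busy system this may take a noticeable amount of time::

    echo 1 > /proc/sys/vm/compact_memory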
compaction_proactiveness
========================

This tunable takes a value in the range [0, 100] with a default value of
20. This tunable determines how aggressively compaction is done in the
background. Writing a non-zero value to this tunable will immediately
trigger proactive compaction; setting it to 0 disables proactive compaction.

Note that compaction has a non-trivial system-wide impact as pages
belonging to different processes are moved around, which could also lead
to latency spikes in unsuspecting applications. The kernel employs
various heuristics to avoid wasting CPU cycles if it detects that
proactive compaction is not being effective.
compact_unevictable_allowed
===========================

Available only when CONFIG_COMPACTION is set. When set to 1, compaction is
allowed to examine the unevictable lru (mlocked pages) for pages to compact.
This should be used on systems where stalls for minor page faults are an
acceptable trade for large contiguous free memory. Set to 0 to prevent
compaction from moving pages that are unevictable. Default value is 1.

On CONFIG_PREEMPT_RT the default value is 0 in order to avoid a page fault, due
to compaction, which would block the task from becoming active until the fault
is resolved.
dirty_background_bytes
======================

Contains the amount of dirty memory at which the background kernel
flusher threads will start writeback.

Note:
  dirty_background_bytes is the counterpart of dirty_background_ratio. Only
  one of them may be specified at a time. When one sysctl is written it is
  immediately taken into account to evaluate the dirty memory limits and the
  other appears as 0 when read.
dirty_background_ratio
======================

Contains, as a percentage of total available memory that contains free pages
and reclaimable pages, the number of pages at which the background kernel
flusher threads will start writing out dirty data.

The total available memory is not equal to total system memory.
dirty_bytes
===========

Contains the amount of dirty memory at which a process generating disk writes
will itself start writeback.

Note: dirty_bytes is the counterpart of dirty_ratio. Only one of them may be
specified at a time. When one sysctl is written it is immediately taken into
account to evaluate the dirty memory limits and the other appears as 0 when
read.

Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
value lower than this limit will be ignored and the old configuration will be
retained.
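A short illustration of the bytes/ratio pairing described above (the value is
an arbitrary example): writing one member of the pair causes its counterpart
to read back as 0::

    # start background writeback at roughly 256 MB of dirty memory
    echo 268435456 > /proc/sys/vm/dirty_background_bytes
    # the ratio counterpart now reads as 0
    cat /proc/sys/vm/dirty_background_ratio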
dirty_expire_centisecs
======================

This tunable is used to define when dirty data is old enough to be eligible
for writeout by the kernel flusher threads. It is expressed in 100'ths
of a second. Data which has been dirty in-memory for longer than this
interval will be written out next time a flusher thread wakes up.
dirty_ratio
===========

Contains, as a percentage of total available memory that contains free pages
and reclaimable pages, the number of pages at which a process which is
generating disk writes will itself start writing out dirty data.

The total available memory is not equal to total system memory.
dirty_writeback_centisecs
=========================

The kernel flusher threads will periodically wake up and write old data
out to disk. This tunable expresses the interval between those wakeups, in
100'ths of a second.

Setting this to zero disables periodic writeback altogether.
drop_caches
===========

Writing to this will cause the kernel to drop clean caches, as well as
reclaimable slab objects like dentries and inodes. Once dropped, their
memory becomes free.

This is a non-destructive operation and will not free any dirty objects.
To increase the number of objects freed by this operation, the user may run
`sync` prior to writing to /proc/sys/vm/drop_caches. This will minimize the
number of dirty objects on the system and create more candidates to be
dropped.

This file is not a means to control the growth of the various kernel caches
(inodes, dentries, pagecache, etc.). These objects are automatically
reclaimed by the kernel when memory is needed elsewhere on the system.

You may see informational messages in your kernel log when this file is
used; they are informational only and do not mean that anything is wrong
with your system.
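The value written selects what is dropped. For example::

    # free the page cache
    echo 1 > /proc/sys/vm/drop_caches
    # free reclaimable slab objects (dentries and inodes)
    echo 2 > /proc/sys/vm/drop_caches
    # free slab objects and the page cache
    echo 3 > /proc/sys/vm/drop_caches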
enable_soft_offline
===================

Correctable memory errors are very common on servers. Soft-offline is the
kernel's solution for memory pages having (excessive) corrected memory errors.

For different types of page, soft-offline has different behaviors / costs.

- For a raw error page, soft-offline migrates the in-use page's content to
  a new raw page.

- For a page that is part of a transparent hugepage, soft-offline splits the
  transparent hugepage into raw pages, then migrates only the raw error page.
  As a result, the user is transparently backed by one less hugepage, impacting
  memory access performance.

- For a page that is part of a HugeTLB hugepage, soft-offline first migrates
  the entire HugeTLB hugepage, during which a free hugepage will be consumed
  as the migration target. Then the original hugepage is dissolved into raw
  pages without compensation, reducing the capacity of the HugeTLB pool by 1.

It is the user's call to choose between reliability (staying away from fragile
physical memory) vs performance / capacity implications in transparent and
HugeTLB cases.

For all architectures, enable_soft_offline controls whether to soft offline
memory pages. When set to 1, kernel attempts to soft offline the pages
whenever it thinks needed. When set to 0, kernel returns EOPNOTSUPP to
the request to soft offline the pages. Its default value is 1.

After setting enable_soft_offline to 0, the following requests to soft
offline pages will not be performed:

- Request to soft offline pages from RAS Correctable Errors Collector.

- On ARM, the request to soft offline pages from GHES driver.

- On PARISC, the request to soft offline pages from Page Deallocation Table.
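For example, a site that prefers to keep HugeTLB pool capacity intact can turn
the mechanism off; this simply writes the documented 0/1 values::

    # refuse soft-offline requests (EOPNOTSUPP is returned to requesters)
    echo 0 > /proc/sys/vm/enable_soft_offline
    # restore the default behaviour
    echo 1 > /proc/sys/vm/enable_soft_offline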
extfrag_threshold
=================

This parameter affects whether the kernel will compact memory or direct
reclaim to satisfy a high-order allocation. The extfrag/extfrag_index file in
debugfs shows what the fragmentation index for each order is in each zone in
the system. Values tending towards 0 imply allocations would fail due to lack
of memory, values towards 1000 imply failures are due to fragmentation and -1
implies that the allocation will succeed as long as watermarks are met.

The kernel will not compact memory in a zone if the
fragmentation index is <= extfrag_threshold. The default value is 500.
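Assuming debugfs is mounted at /sys/kernel/debug (the usual location), the
per-zone fragmentation index mentioned above can be inspected alongside the
threshold::

    cat /sys/kernel/debug/extfrag/extfrag_index
    cat /proc/sys/vm/extfrag_threshold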
highmem_is_dirtyable
====================

Available only for systems with CONFIG_HIGHMEM enabled (32-bit systems).

This parameter controls whether the high memory is considered for dirty
writers throttling. This is not the case by default, which means that
only the amount of memory directly visible/usable by the kernel can
be dirtied. As a result, on systems with a large amount of memory and
lowmem basically depleted, writers might be throttled too early and
streaming writes can get very slow.

Changing the value to non-zero would allow more memory to be dirtied
and thus allow writers to write more data which can be flushed to the
storage more effectively. Note this also comes with a risk of premature
OOM killer invocation because some writers (e.g. direct block device writes)
can only use the low memory and they can fill it up with dirty data without
any throttling.
hugetlb_shm_group
=================

hugetlb_shm_group contains the group id that is allowed to create a SysV
shared memory segment using hugetlb pages.
laptop_mode
===========

laptop_mode is a knob that controls "laptop mode". All the things that are
controlled by this knob are discussed in
Documentation/admin-guide/laptops/laptop-mode.rst.
legacy_va_layout
================

If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
will use the legacy (2.4) layout for all processes.
lowmem_reserve_ratio
====================

For some specialised workloads on highmem machines it is dangerous for
the kernel to allow process memory to be allocated from the "lowmem"
zone. This is because that memory could then be pinned via the mlock()
system call, or by unavailability of swapspace.

And on large highmem machines this lack of reclaimable lowmem memory
can be fatal.

So the Linux page allocator has a mechanism which prevents allocations
which *could* use highmem from using too much lowmem. This means that
a certain amount of lowmem is defended from the possibility of being
captured into pinned user memory.

The lowmem_reserve_ratio tunable determines how aggressive the kernel is
in defending these lower zones.

The lowmem_reserve_ratio is an array. The values are not used directly;
the kernel calculates the number of protection pages for each zone from
them, and these are shown as the "protection:" array for each zone
in /proc/zoneinfo (for example, on an x86-64 box).

For example, if normal pages (index=2) are required from the DMA zone, the
kernel judges that the zone should not be used when its free pages are below
the chosen watermark plus protection[2].

zone[i]'s protection[j] is calculated by the following expression::

  (i < j):
    zone[i]->protection[j]
    = (total sums of managed_pages from zone[i+1] to zone[j] on the node)
      / lowmem_reserve_ratio[i];
  (i = j):
    (should not be protected. = 0)
  (i > j):
    (not necessary, but looks 0)

If you would like to protect more pages, smaller values are effective.
The minimum value is 1 (1/1 -> 100%). A value less than 1 completely
disables protection of the pages.
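The array itself can be read directly; the numbers shown here are only an
example layout, not universal defaults::

    # one ratio per lower zone (e.g. DMA, DMA32, Normal)
    cat /proc/sys/vm/lowmem_reserve_ratio
    256     256     32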
max_map_count
=============

This file contains the maximum number of memory map areas a process
may have. Memory map areas are used as a side-effect of calling
malloc, directly by mmap, mprotect, and madvise, and also when loading
shared libraries.

While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.

The default value is 65530.
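Applications that create very many mappings sometimes need this limit raised;
the value below is only an illustrative choice::

    # inspect and raise the per-process mapping limit
    sysctl vm.max_map_count
    sysctl -w vm.max_map_count=262144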
mem_profiling
=============

Enable memory profiling (when CONFIG_MEM_ALLOC_PROFILING=y).

1: Enable memory profiling.

0: Disable memory profiling.

Enabling memory profiling introduces a small performance overhead for all
memory allocations.
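A minimal usage sketch, assuming a kernel built with
CONFIG_MEM_ALLOC_PROFILING=y; such kernels export the per-callsite allocation
counters through /proc/allocinfo::

    # turn profiling on and sample the allocation counters
    echo 1 > /proc/sys/vm/mem_profiling
    head /proc/allocinfo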
memory_failure_early_kill
=========================

Control how to kill processes when an uncorrected memory error (typically
a 2-bit error in a memory module) is detected in the background by hardware
that cannot be handled by the kernel. In some cases (like the page
still having a valid copy on disk) the kernel will handle the failure
transparently without affecting any applications. But if there is
no other up-to-date copy of the data it will kill to prevent any data
corruptions from propagating.

1: Kill all processes that have the corrupted and not reloadable page mapped
as soon as the corruption is detected.

0: Only unmap the corrupted page from all processes and only kill a process
that tries to access it.
memory_failure_recovery
=======================

Enable memory failure recovery (when supported by the platform).

1: Attempt recovery.

0: Always panic on a memory failure.
min_free_kbytes
===============

This is used to force the Linux VM to keep a minimum number
of kilobytes free. The VM uses this number to compute a
watermark[WMARK_MIN] value for each lowmem zone in the system.
Each lowmem zone gets a number of reserved free pages based
proportionally on its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC
allocations; if you set this to lower than 1024KB, your system will
become subtly broken, and prone to deadlock under high loads.

Setting this too high will OOM your machine instantly.
min_slab_ratio
==============

This is available only on NUMA kernels.

A percentage of the total pages in each zone. On zone reclaim
(fallback from the local zone occurs) slab reclaim will occur only if more
than this percentage of pages in a zone are reclaimable slab pages.
This ensures that the slab growth stays under control even in NUMA
systems that rarely perform global reclaim.

The default is 5 percent.

Note that slab reclaim is triggered in a per zone / node fashion.
The process of reclaiming slab memory is currently not node specific
and may not be fast.
min_unmapped_ratio
==================

This is available only on NUMA kernels.

This is a percentage of the total pages in each zone. Zone reclaim will
only occur if more than this percentage of pages are in a state that
zone_reclaim_mode allows to be reclaimed.

If zone_reclaim_mode has the value 4 OR'd, then the percentage is compared
against all file-backed unmapped pages including swapcache pages and tmpfs
files. Otherwise, only unmapped pages backed by normal files but not tmpfs
files and similar are considered.

The default is 1 percent.
mmap_min_addr
=============

This file indicates the amount of address space which a user process will
be restricted from mmapping. Since kernel null dereference bugs could
accidentally operate based on the information in the first couple of pages
of memory, userspace processes should not be allowed to write to them. By
default this value is set to 0 and no protections will be enforced by the
security module. Setting this value to something like 64k will allow the
vast majority of applications to work correctly and provide defense in depth
against future potential kernel bugs.
mmap_rnd_compat_bits
====================

This value can be used to select the number of bits to use to
determine the random offset to the base address of vma regions
resulting from mmap allocations for applications run in
compatibility mode on architectures which support tuning address
space randomization. This value will be bounded by the architecture's
minimum and maximum supported values.
nr_hugepages
============

Change the minimum size of the hugepage pool.

See Documentation/admin-guide/mm/hugetlbpage.rst
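For instance, the default-sized hugepage pool can be resized and checked
through procfs; the pool size of 128 pages is only an example::

    echo 128 > /proc/sys/vm/nr_hugepages
    grep HugePages_ /proc/meminfo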
hugetlb_optimize_vmemmap
========================

This knob is not available when the size of "struct page" (a structure defined
in include/linux/mm_types.h) is not power of two (an unusual system config could
result in this).

Enable (set to 1) or disable (set to 0) HugeTLB Vmemmap Optimization (HVO).

Once enabled, the vmemmap pages of subsequent allocation of HugeTLB pages from
the buddy allocator will be optimized, whereas already allocated HugeTLB pages
will not be optimized. If your use case is that HugeTLB pages are allocated
"on the fly" instead of being pulled from the HugeTLB pool, you should weigh the
benefits of memory savings against the more overhead (~2x slower than before)
of allocating or freeing HugeTLB pages between the HugeTLB pool and the buddy
allocator. Another behavior to note is that if the system is under heavy memory
pressure, it could prevent the user from freeing HugeTLB pages from the HugeTLB
pool to the buddy allocator since the allocation of vmemmap pages could fail;
you have to retry later if your system encounters this situation.

Once disabled, the vmemmap pages of subsequent allocation of HugeTLB pages from
the buddy allocator will not be optimized, whereas already optimized HugeTLB
pages will not be affected. If you want to make sure there are no optimized
HugeTLB pages, you can set "nr_hugepages" to 0 first and then disable this.
Note that writing 0 to nr_hugepages will make any "in use" HugeTLB pages become
surplus pages. So, those surplus pages are still optimized until they are no
longer in use. You would need to wait for those surplus pages to be released
before there are no optimized pages in the system.
nr_hugepages_mempolicy
======================

Change the size of the hugepage pool at run-time on a specific
set of NUMA nodes.

See Documentation/admin-guide/mm/hugetlbpage.rst
nr_overcommit_hugepages
=======================

Change the maximum size of the hugepage pool. The maximum is
nr_hugepages + nr_overcommit_hugepages.

See Documentation/admin-guide/mm/hugetlbpage.rst
nr_trim_pages
=============

This is available only on NOMMU kernels.

This value adjusts the excess page trimming behaviour of power-of-2 aligned
NOMMU mmap allocations.

A value of 0 disables trimming of allocations entirely, while a value of 1
trims excess pages aggressively. Any value >= 1 acts as the watermark where
trimming of allocations is initiated.

The default value is 1.

See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
numa_zonelist_order
===================

This sysctl is only for NUMA and it is deprecated. Anything but
Node order will fail!

'Where the memory is allocated from' is controlled by zonelists.

In the non-NUMA case, a zonelist for GFP_KERNEL is ordered as follows::

  ZONE_NORMAL -> ZONE_DMA

This means that a memory allocation request for GFP_KERNEL will
get memory from ZONE_DMA only when ZONE_NORMAL is not available.

In the NUMA case, you can think of the following two types of order.
Assume a 2-node NUMA system; below is the zonelist of Node(0)'s GFP_KERNEL::

  (A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
  (B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA.

Type (A) offers the best locality for processes on Node(0), but ZONE_DMA
will be used before ZONE_NORMAL exhaustion. This increases the possibility of
out-of-memory (OOM) of ZONE_DMA because ZONE_DMA tends to be small.
Type (A) is called "node" order; type (B) is called "zone" order.

On 32-bit, the Normal zone needs to be preserved for allocations accessible
by the kernel, so "zone" order will be selected.

On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
order will be selected.
oom_dump_tasks
==============

Enables a system-wide task dump (excluding kernel threads) to be produced
when the kernel performs an OOM-killing and includes such information as
pid, uid, tgid, vm size, rss, pgtables_bytes, swapents, oom_score_adj
score, and name. This is helpful to determine why the OOM killer was
invoked, to identify the rogue task that caused it, and to determine why
the OOM killer chose the task it did to kill.

If this is set to zero, this information is suppressed. On very
large systems with thousands of tasks it may not be feasible to dump
the memory state information for each one. Such systems should not
be forced to incur a performance penalty in OOM conditions when the
information may not be desired.

If this is set to non-zero, this information is shown whenever the
OOM killer actually kills a memory-hogging task.

The default value is 1 (enabled).
oom_kill_allocating_task
========================

This enables or disables killing the OOM-triggering task in
out-of-memory situations.

If this is set to zero, the OOM killer will scan through the entire
tasklist and select a task based on heuristics to kill. This normally
selects a rogue memory-hogging task that frees up a large amount of
memory when killed.

If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition. This avoids the expensive
tasklist scan.

If panic_on_oom is selected, it takes precedence over whatever value
is used in oom_kill_allocating_task.

The default value is 0.
overcommit_memory
=================

This value contains a flag that enables memory overcommitment.

When this flag is 0, the kernel compares the userspace memory request
size against total memory plus swap and rejects obvious overcommits.

When this flag is 1, the kernel pretends there is always enough
memory until it actually runs out.

When this flag is 2, the kernel uses a "never overcommit"
policy that attempts to prevent any overcommit of memory.
Note that user_reserve_kbytes affects this policy.

This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
and don't use much of it.

The default value is 0.

See Documentation/mm/overcommit-accounting.rst and
mm/util.c::__vm_enough_memory() for more information.
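For example, switching to the "never overcommit" policy is typically a
two-knob change; the 80% commit limit below is an arbitrary illustration
(overcommit_ratio is described in Documentation/mm/overcommit-accounting.rst)::

    sysctl -w vm.overcommit_memory=2
    sysctl -w vm.overcommit_ratio=80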
page-cluster
============

page-cluster controls the number of pages up to which consecutive pages
are read in from swap in a single attempt. This is the swap counterpart
to page cache readahead. The mentioned consecutivity is not in terms of
virtual/physical addresses, but consecutive on swap space - that means
they were swapped out together.

It is a logarithmic value - setting it to zero means "1 page", setting
it to 1 means "2 pages", setting it to 2 means "4 pages", etc.
Zero disables swap readahead completely.

The default value is three (eight pages at a time). There may be some
small benefits in tuning this to a different value if your workload is
swap-intensive.

Lower values mean lower latencies for initial faults, but at the same time
extra faults and I/O delays for subsequent faults on pages
that consecutive pages readahead would have brought in.
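Since the value is a logarithm, a setting of 0 reads a single page per
swap-in; this is sometimes used with fast, in-memory swap such as zram
(shown here as an illustration rather than a recommendation)::

    # swap in one page at a time (2^0), disabling swap readahead
    echo 0 > /proc/sys/vm/page-cluster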
page_lock_unfairness
====================

This value determines the number of times that the page lock can be
stolen from under a waiter. After the lock is stolen the number of times
specified in this file (default is 5), the "fair lock handoff" semantics
will apply, and the waiter will only be awakened if the lock can be taken.
panic_on_oom
============

This enables or disables the panic on out-of-memory feature.

If this is set to 0, the kernel will kill some rogue process via the
oom-killer. Usually, the oom-killer can kill a rogue process and the
system will survive.

If this is set to 1, the kernel panics when out-of-memory happens.
However, if a process limits its allocations to certain nodes using
mempolicy or cpusets, and those nodes run out of memory, one process
may be killed by the oom-killer. No panic occurs in this case,
because other nodes' memory may be free and the system as a whole
may not yet be in a fatal state.

If this is set to 2, the kernel panics compulsorily even in the case
mentioned above. Even if the OOM happens under a memory cgroup, the whole
system panics.

The default value is 0.
percpu_pagelist_high_fraction
=============================

This is the fraction of pages in each zone that can be stored on
per-cpu page lists. It is an upper boundary that is divided depending
on the number of online CPUs. The minimum value for this is 8, which means
that we do not allow more than 1/8th of pages in each zone to be stored
on per-cpu page lists. This entry only changes the value of hot per-cpu
page lists. A user can specify a number like 100 to allocate 1/100th of
each zone between per-cpu lists.

The batch value of each per-cpu page list remains the same regardless of
the value of the high fraction, so allocation latencies are unaffected.

The initial value is zero. The kernel uses this value to set the pcp->high
mark based on the low watermark for the zone and the number of local
online CPUs. If the user writes '0' to this sysctl, it will revert to
this default behavior.
stat_refresh
============

Any read or write (by root only) flushes all the per-cpu vm statistics
into their global totals, for more accurate reports when testing,
e.g. cat /proc/sys/vm/stat_refresh /proc/meminfo

As a side-effect, it also checks for negative totals (elsewhere reported
as 0) and "fails" with EINVAL if any are found, with a warning in dmesg.
swappiness
==========

This control is used to define the rough relative IO cost of swapping
and filesystem paging, as a value between 0 and 200. At 100, the VM
assumes equal IO cost and will thus apply memory pressure to the page
cache and swap-backed pages equally; lower values signify more
expensive swap IO, higher values indicate cheaper.

Keep in mind that filesystem IO patterns under memory pressure tend to
be more efficient than swap's random IO. An optimal value will require
experimentation and will also be workload-dependent.

The default value is 60.

For in-memory swap, like zram or zswap, as well as hybrid setups that
have swap on faster devices than the filesystem, values beyond 100 can
be considered.

At 0, the kernel will not initiate swap until the amount of free and
file-backed pages is less than the high watermark in a zone.
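For example, a host with swap on zram might raise the value above 100 to
reflect the cheaper swap IO described above; the exact number depends on the
measured IO cost ratio of the two paths::

    # treat swap IO as roughly 2x cheaper than filesystem IO
    sysctl -w vm.swappiness=133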
unprivileged_userfaultfd
========================

This flag controls the mode in which unprivileged users can use the
userfaultfd system calls. Set this to 0 to restrict unprivileged users
to handle page faults in user mode only. In this case, users without
SYS_CAP_PTRACE must pass UFFD_USER_MODE_ONLY in order for userfaultfd to
succeed.

Set this to 1 to allow unprivileged users to use the userfaultfd system
calls without any restrictions.

The default value is 0.

Another way to control permissions for userfaultfd is to use
/dev/userfaultfd instead of userfaultfd(2). See
Documentation/admin-guide/mm/userfaultfd.rst.
user_reserve_kbytes
===================

When overcommit_memory is set to 2, "never overcommit" mode, reserve
min(3% of current process size, user_reserve_kbytes) of free memory.
This is intended to prevent a user from starting a single memory hogging
process, such that they cannot recover (kill the hog).

user_reserve_kbytes defaults to min(3% of the current process size, 128MB).

If this is reduced to zero, then the user will be allowed to allocate
all free memory with a single process, minus admin_reserve_kbytes.
Any subsequent attempts to execute a command will result in
"fork: Cannot allocate memory".

Changing this takes effect whenever an application requests memory.
vfs_cache_pressure
==================

This percentage value controls the tendency of the kernel to reclaim
the memory which is used for caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a "fair" rate with respect to pagecache and
swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will
never reclaim dentries and inodes due to memory pressure and this can easily
lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.

Increasing vfs_cache_pressure significantly beyond 100 may have negative
performance impact. Reclaim code needs to take various locks to find freeable
directory and inode objects.
watermark_boost_factor
======================

This factor controls the level of reclaim when memory is being fragmented.
It defines the percentage of the high watermark of a zone that will be
reclaimed if pages of different mobility are being mixed within pageblocks.
The intent is that compaction has less work to do in the future and to
increase the success rate of future high-order allocations such as SLUB
allocations, THP and hugetlbfs pages.

To make it sensible with respect to the watermark_scale_factor
parameter, the unit is in fractions of 10,000. The default value of
15,000 means that up to 150% of the high watermark will be reclaimed in the
event of a pageblock being mixed due to fragmentation. The level of reclaim
is determined by the number of fragmentation events that occurred in the
recent past. If this value is smaller than a pageblock then a pageblock's
worth of pages will be reclaimed (e.g. 2MB on 64-bit x86). A boost factor
of 0 will disable the feature.
watermark_scale_factor
======================

This factor controls the aggressiveness of kswapd. It defines the
amount of memory left in a node/system before kswapd is woken up and
how much memory needs to be free before kswapd goes back to sleep.

The unit is in fractions of 10,000. The default value of 10 means the
distances between watermarks are 0.1% of the available memory in the
node/system. The maximum value is 3000, or 30% of memory.

A high rate of threads entering direct reclaim (allocstall) or kswapd
going to sleep prematurely (kswapd_low_wmark_hit_quickly) can indicate
that the number of free pages kswapd maintains for latency reasons is
too small for the allocation bursts occurring in the system. This knob
can then be used to tune kswapd aggressiveness accordingly.
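As a concrete reading of the unit: a value of 100 places the watermarks 1%
of the node's memory apart (100/10,000), so on a hypothetical 64 GB node
kswapd is woken roughly 650 MB before free memory reaches the minimum
watermark::

    # widen the gap between watermarks to 1% of memory
    sysctl -w vm.watermark_scale_factor=100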
zone_reclaim_mode
=================

zone_reclaim_mode allows someone to set more or less aggressive approaches to
reclaim memory when a zone runs out of memory. If it is set to zero then no
zone reclaim occurs. Allocations will be satisfied from other zones / nodes
in the system.

The value is a bitmask OR'ed together from:

=	===================================
1	Zone reclaim on
2	Zone reclaim writes dirty pages out
4	Zone reclaim swaps pages
=	===================================

zone_reclaim_mode is disabled by default. Consider enabling one or more of
the bits if it's known that the workload is partitioned such that each
partition fits within a NUMA node
and that accessing remote memory would cause a measurable performance
reduction. The page allocator will then take additional actions before
allocating off-node pages.

Allowing zone reclaim to write out pages stops processes that are writing
large amounts of data from dirtying pages on other nodes. This may decrease
the performance of a single process,
since it cannot use all of system memory to buffer the outgoing writes
anymore, but it preserves the memory on other nodes so that the performance
of other processes on other nodes will not be affected.

Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.
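Because the value is a bitmask, the bits are combined by OR-ing them; for
instance, enabling zone reclaim together with dirty-page writeout uses
1 | 2 = 3 (shown only as an illustration)::

    # enable zone reclaim (1) plus writeback of dirty pages (2)
    echo 3 > /proc/sys/vm/zone_reclaim_mode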