Lines Matching +full:use +full:- +full:minimum +full:- +full:ecc
1 # SPDX-License-Identifier: GPL-2.0-only
25 This option is best suited for non-NUMA systems with
56 memory hot-plug systems. This is normal.
60 hot-plug and hot-remove.
72 Now, the kswapd wakeup monitor uses it.
80 File-LRU is a mechanism that puts file pages on a global lru list,
97 background. When the use of swap pages reaches the watermark
111 Memory reclaim delay accounting. It cannot be used as a kernel module.
130 # Both the NUMA code and DISCONTIGMEM use arrays of pg_data_t's
188 # Keep arch NUMA mapping infrastructure post-init.
204 bool "Allow for memory hot-add"
223 See Documentation/admin-guide/mm/memory-hotplug.rst for more information.
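
The hot-plugged blocks referred to above are onlined and offlined through sysfs, as that document describes. A minimal C sketch under the assumption of the standard /sys/devices/system/memory layout; block number 32 is arbitrary and must exist on the target system, and the operation needs root:

    /* Sketch: online/offline one memory block through sysfs (root only).
     * Real block numbers are listed under /sys/devices/system/memory/. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int set_block_state(int block, const char *state)
    {
        char path[96];
        int fd;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/memory/memory%d/state", block);
        fd = open(path, O_WRONLY);
        if (fd < 0)
            return -1;
        if (write(fd, state, strlen(state)) < 0) {
            close(fd);
            return -1;
        }
        return close(fd);
    }

    int main(void)
    {
        set_block_state(32, "offline");  /* evacuate and take the block offline */
        set_block_state(32, "online");   /* bring it back */
        return 0;
    }
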
225 Say Y here if you want all hot-plugged memory blocks to appear in
227 Say N here if you want the default policy to keep all hot-plugged
236 # Heavily threaded applications may benefit from splitting the mm-wide
240 # ARM's adjust_pte (unused if VIPT) depends on mm-wide page_table_lock.
241 # PA-RISC 7xxx's spinlock_t would enlarge struct page from 32 to 44 bytes.
243 # a per-page lock leads to problems when multiple tables need to be locked
293 linux-mm@kvack.org.
366 Recommended for use with KVM, or with other duplicative applications.
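
A short userspace sketch of how an application opts into the merging described above, using the madvise(2) MADV_MERGEABLE hint from Documentation/admin-guide/mm/ksm.rst. The mapping size and fill pattern are arbitrary, and ksmd must be running (/sys/kernel/mm/ksm/run set to 1) before anything is merged:

    /* Sketch: opt an anonymous region into KSM scanning. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 64UL * 1024 * 1024;
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED)
            return 1;
        memset(buf, 0x5a, len);                 /* identical pages: merge candidates */
        if (madvise(buf, len, MADV_MERGEABLE))  /* hand the range to ksmd */
            perror("madvise(MADV_MERGEABLE)");
        pause();                                /* keep the mapping alive; merged
                                                   pages are un-shared (COW) on write */
        return 0;
    }
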
383 Programs which use vm86 functionality or have some need to map
403 special hardware support and typically ECC memory.
417 allocator for chunks in 2^N*PAGE_SIZE amounts - which is frequently
426 long-term mappings means that the space is wasted.
429 (/proc/sys/vm/nr_trim_pages) which specifies the minimum number of
436 See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
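
A tiny sketch of adjusting the trim behaviour named above at runtime. The sysctl path comes from the help text itself; the value written here is only an example (0 disables trimming, values >= 1 act as the trim watermark, per Documentation/admin-guide/sysctl/vm.rst), and writing it needs root:

    /* Sketch: set the NOMMU excess-page trim watermark. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/vm/nr_trim_pages", "w");

        if (!f)
            return 1;
        fputs("1", f);          /* 1 = trim excess pages aggressively */
        return fclose(f) ? 1 : 0;
    }
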
444 Transparent Hugepages allows the kernel to use huge pages and
491 # UP and nommu archs use km based percpu allocator
501 Cleancache can be thought of as a page-granularity victim cache
504 memory. So when the PFRA "evicts" a page, it first attempts to use
508 time-varying size. And when a cleancache-enabled
515 are reduced to a single pointer-compare-against-NULL resulting
528 time-varying size. When space in transcendent memory is available,
530 available, all frontswap calls are reduced to a single pointer-
531 compare-against-NULL resulting in a negligible performance hit
543 subsystems to allocate big physically-contiguous blocks of memory.
545 be allocated from it. This way, the kernel can use the memory for
583 allocations with the __GFP_CMA flag will use CMA areas prior to
594 soft-dirty bit on pte-s. This bit is set when someone writes
598 See Documentation/admin-guide/mm/soft-dirty.rst for more details.
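
A userspace sketch of the soft-dirty ABI from that document, assuming the documented interface: writing "4" to /proc/<pid>/clear_refs clears the bits, and bit 55 of each 64-bit /proc/<pid>/pagemap entry reports the soft-dirty state. Reading pagemap may require privileges on some configurations:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static int page_soft_dirty(void *addr)
    {
        long psz = sysconf(_SC_PAGESIZE);
        uint64_t entry = 0;
        int fd = open("/proc/self/pagemap", O_RDONLY);

        if (fd < 0)
            return -1;
        pread(fd, &entry, sizeof(entry),
              ((uintptr_t)addr / psz) * sizeof(entry));
        close(fd);
        return (entry >> 55) & 1;               /* bit 55: soft-dirty */
    }

    int main(void)
    {
        long psz = sysconf(_SC_PAGESIZE);
        char *page = mmap(NULL, psz, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        int fd = open("/proc/self/clear_refs", O_WRONLY);

        page[0] = 1;                            /* fault the page in */
        write(fd, "4", 1);                      /* clear all soft-dirty bits */
        close(fd);
        printf("after clear: %d\n", page_soft_dirty(page));
        page[0] = 2;                            /* dirty it again */
        printf("after write: %d\n", page_soft_dirty(page));
        return 0;
    }
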
607 compress them into a dynamically allocated RAM-based memory pool.
640 Use the Deflate algorithm as the default compression algorithm.
646 Use the LZO algorithm as the default compression algorithm.
652 Use the 842 algorithm as the default compression algorithm.
658 Use the LZ4 algorithm as the default compression algorithm.
664 Use the LZ4HC algorithm as the default compression algorithm.
670 Use the zstd algorithm as the default compression algorithm.
702 Use the zbud allocator as the default allocator.
708 Use the z3fold allocator as the default allocator.
714 Use the zsmalloc allocator as the default allocator.
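
The entries above only choose build-time defaults; zswap also exposes runtime module parameters under /sys/module/zswap/parameters/ (see Documentation/admin-guide/mm/zswap.rst). A small sketch that flips them from userspace; the parameter names are the standard ones, but the chosen compressor and allocator must be available in the running kernel, and writing needs root:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int zswap_set(const char *param, const char *val)
    {
        char path[128];
        int fd;

        snprintf(path, sizeof(path),
                 "/sys/module/zswap/parameters/%s", param);
        fd = open(path, O_WRONLY);
        if (fd < 0)
            return -1;
        if (write(fd, val, strlen(val)) < 0) {
            close(fd);
            return -1;
        }
        return close(fd);
    }

    int main(void)
    {
        zswap_set("enabled", "1");
        zswap_set("compressor", "zstd");        /* overrides the Kconfig default */
        zswap_set("zpool", "zsmalloc");
        return 0;
    }
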
763 zsmalloc is a slab-based memory allocator designed to store
766 non-standard allocator interface where a handle, not a pointer, is
784 int "Maximum user stack size for 32-bit processes (MB)"
789 This is the maximum stack size in Megabytes in the VM layout of 32-bit
822 See Documentation/admin-guide/mm/idle_page_tracking.rst for
840 "device-physical" addresses which is needed for using a DAX
896 bool "Read-only THP for filesystems (EXPERIMENTAL)"
900 Allow khugepaged to put read-only file-backed pages in THP.
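
For the anonymous-memory side of Transparent Hugepages mentioned earlier, a userspace sketch of requesting huge pages for a region with madvise(2) MADV_HUGEPAGE. The 2 MiB alignment assumes an x86-64-style PMD size, and the system-wide policy in /sys/kernel/mm/transparent_hugepage/enabled must be "always" or "madvise" for the hint to matter:

    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 16UL << 20;                /* 16 MiB */
        void *buf;

        if (posix_memalign(&buf, 2UL << 20, len))   /* align to the PMD size */
            return 1;
        madvise(buf, len, MADV_HUGEPAGE);       /* hint: back this range with huge pages */
        memset(buf, 0, len);                    /* touching the range populates it */
        free(buf);
        return 0;
    }
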
936 # For lmkd to trigger in-kernel lowmem info
960 # Use rss_threshold to monitor RSS