Lines Matching +full:page +full:- +full:based
1 # SPDX-License-Identifier: GPL-2.0-only
33 compress them into a dynamically allocated RAM-based memory pool.
65 If exclusive loads are enabled, when a page is loaded from zswap,
69 This avoids having two copies of the same page in memory
70 (compressed and uncompressed) after faulting in a page from zswap.
71 The cost is that if the page was never dirtied and needs to be
72 swapped out again, it will be re-compressed.
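
A minimal sketch of how the build-time default above relates to the runtime knobs: zswap exposes its tunables as module parameters under /sys/module/zswap/parameters, and on kernels that carry this option the list is assumed to include an "exclusive_loads" entry that overrides the Kconfig default (the exact parameter name should be verified on the target kernel). The program below just dumps whatever parameters are present.

    /* Sketch: dump zswap's runtime parameters. On kernels with this
     * option the list is assumed to include "exclusive_loads", which
     * overrides the build-time default chosen here. */
    #include <dirent.h>
    #include <stdio.h>

    int main(void)
    {
        const char *dir = "/sys/module/zswap/parameters";
        DIR *d = opendir(dir);
        struct dirent *e;
        char path[512], val[128];

        if (!d) {
            perror(dir);
            return 1;
        }
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')
                continue;
            snprintf(path, sizeof(path), "%s/%s", dir, e->d_name);
            FILE *f = fopen(path, "r");
            if (!f)
                continue;
            if (fgets(val, sizeof(val), f))
                printf("%-28s %s", e->d_name, val);
            fclose(f);
        }
        closedir(d);
        return 0;
    }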
84 available at the following LWN page:
187 page. While this design limits storage density, it has simple and
197 page. It is a ZBUD derivative so the simplicity and determinism are
205 zsmalloc is a slab-based memory allocator designed to store
220 int "Maximum number of physical pages per-zspage"
226 that a zsmalloc page (zspage) can consist of. The optimal zspage
253 If you cannot migrate to SLUB, please contact linux-mm@kvack.org
322 sanity-checking than others. This option is most effective with
336 Try running: slabinfo -DA
355 normal kmalloc allocation and makes kmalloc randomly pick one based
369 bool "Page allocator randomization"
372 Randomization of the page allocator improves the average
373 utilization of a direct-mapped memory-side-cache. See section
376 the presence of a memory-side-cache. There are also incidental
377 security benefits as it reduces the predictability of page
380 order of pages is selected based on cache utilization benefits
386 after runtime detection of a direct-mapped memory-side-cache.
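
The full help text for this option also notes that shuffling can be forced on with the page_alloc.shuffle kernel command-line parameter when no memory-side cache is detected. As a sketch only, the snippet below assumes that parameter is also visible read-only at the usual module-parameter path (/sys/module/page_alloc/parameters/shuffle, typically root-readable); treat the path as an assumption to verify.

    /* Sketch: report whether page allocator shuffling is active.
     * The sysfs path mirrors the "page_alloc.shuffle" boot parameter and
     * is assumed from the usual module-parameter layout; usually 0400. */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/module/page_alloc/parameters/shuffle";
        char buf[8] = "";
        FILE *f = fopen(path, "r");

        if (!f) {
            perror(path);
            return 1;
        }
        if (fgets(buf, sizeof(buf), f))
            printf("page allocator shuffling: %s", buf);
        fclose(f);
        return 0;
    }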
397 also breaks ancient binaries (including anything libc5 based).
402 On non-ancient distros (post-2000 ones) N is usually a safe choice.
417 ELF-FDPIC binfmt's brk and stack allocator.
421 userspace. Since that isn't generally a problem on no-MMU systems,
424 See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
445 This option is best suited for non-NUMA systems with
461 memory hot-plug systems. This is normal.
465 hot-plug and hot-remove.
535 # Keep arch NUMA mapping infrastructure post-init.
581 See Documentation/admin-guide/mm/memory-hotplug.rst for more information.
583 Say Y here if you want all hot-plugged memory blocks to appear in
585 Say N here if you want the default policy to keep all hot-plugged
604 # Heavily threaded applications may benefit from splitting the mm-wide
608 # ARM's adjust_pte (unused if VIPT) depends on mm-wide page_table_lock.
609 # PA-RISC 7xxx's spinlock_t would enlarge struct page from 32 to 44 bytes.
610 # SPARC32 allocates multiple pte tables within a single page, and therefore
611 # a per-page lock leads to problems when multiple tables need to be locked
613 # DEBUG_SPINLOCK and DEBUG_LOCK_ALLOC spinlock_t also enlarge struct page.
656 reliably. The page allocator relies on compaction heavily and
661 linux-mm@kvack.org.
670 # support for free page reporting
672 bool "Free page reporting"
675 Free page reporting allows for the incremental acquisition of
681 # support for page migration
684 bool "Page migration"
692 pages as migration can relocate pages to satisfy a huge page
708 HUGETLB_PAGE_ORDER when there are multiple HugeTLB page sizes available
734 bool "Enable KSM for page merging"
741 the many instances by a single page with that content, so
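
KSM only scans regions an application has explicitly opted into with madvise(MADV_MERGEABLE), and the scanner itself must be running (for example by writing 1 to /sys/kernel/mm/ksm/run). A minimal sketch of the opt-in side:

    /* Sketch: opt an anonymous mapping into KSM scanning with
     * MADV_MERGEABLE. KSM must also be started separately, e.g. by
     * writing 1 to /sys/kernel/mm/ksm/run (not shown here). */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 32 * 4096;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        memset(p, 0x5a, len);                   /* identical content, mergeable */
        if (madvise(p, len, MADV_MERGEABLE)) {  /* mark region for KSM */
            perror("madvise(MADV_MERGEABLE)");
            return 1;
        }
        return 0;
    }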
794 allocator for chunks in 2^N*PAGE_SIZE amounts - which is frequently
803 long-term mappings means that the space is wasted.
813 See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
830 applications by speeding up page faults during memory
867 XXX: For now, swap cluster backing transparent huge page
873 bool "Read-only THP for filesystems (EXPERIMENTAL)"
877 Allow khugepaged to put read-only file-backed pages in THP.
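
As a sketch under stated assumptions: MADV_HUGEPAGE is the generic THP hint from userspace, and with this option enabled a read-only, file-backed mapping hinted this way is the kind of region khugepaged may collapse into huge pages; whether that actually happens also depends on the transparent_hugepage sysfs policy.

    /* Sketch: hint that a read-only, file-backed mapping may be collapsed
     * into huge pages. Whether khugepaged acts on it depends on the THP
     * sysfs policy and on this config option being enabled. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }

        int fd = open(argv[1], O_RDONLY);
        struct stat st;

        if (fd < 0 || fstat(fd, &st) || st.st_size == 0) {
            perror(argv[1]);
            return 1;
        }

        void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        if (madvise(p, st.st_size, MADV_HUGEPAGE))  /* THP hint */
            perror("madvise(MADV_HUGEPAGE)");
        return 0;
    }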
886 # UP and nommu archs use km based percpu allocator
912 subsystems to allocate big physically-contiguous blocks of memory.
960 soft-dirty bit on PTEs. This bit is set when someone writes
961 into a page, just like the regular dirty bit, but unlike the latter
964 See Documentation/admin-guide/mm/soft-dirty.rst for more details.
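
Per the referenced soft-dirty.rst, the feature is driven through procfs: writing "4" to /proc/<pid>/clear_refs clears the soft-dirty bits, and bit 55 of each /proc/<pid>/pagemap entry reports them. A minimal sketch for the calling process itself:

    /* Sketch: clear this process's soft-dirty bits, dirty one page, then
     * read the page's pagemap entry and test bit 55 (soft-dirty). */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        long psize = sysconf(_SC_PAGESIZE);
        volatile char *p = mmap(NULL, psize, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        p[0] = 1;                               /* populate the page */

        int cr = open("/proc/self/clear_refs", O_WRONLY);
        if (cr < 0 || write(cr, "4", 1) != 1) { /* "4" clears soft-dirty bits */
            perror("clear_refs");
            return 1;
        }
        close(cr);

        p[0] = 2;                               /* write again: re-dirties it */

        uint64_t entry = 0;
        int pm = open("/proc/self/pagemap", O_RDONLY);
        pread(pm, &entry, sizeof(entry),
              ((uintptr_t)p / psize) * sizeof(entry));
        printf("soft-dirty bit (55): %s\n",
               (entry >> 55) & 1 ? "set" : "clear");
        close(pm);
        return 0;
    }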
970 int "Default maximum user stack size for 32-bit processes (MB)"
975 This is the maximum stack size in Megabytes in the VM layout of 32-bit
1000 This adds PG_idle and PG_young flags to 'struct page'. PTE Accessed
1005 bool "Enable idle page tracking"
1014 See Documentation/admin-guide/mm/idle_page_tracking.rst for
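
Per the referenced idle_page_tracking.rst, the interface is /sys/kernel/mm/page_idle/bitmap, a PFN-indexed bitmap (64 pages per 8-byte word): userspace sets bits to mark pages idle and re-reads them later, resolving its own virtual addresses to PFNs via /proc/<pid>/pagemap. A sketch, assuming CAP_SYS_ADMIN (pagemap reports PFN 0 without it):

    /* Sketch: mark one of this process's pages idle via the page_idle
     * bitmap, access it, then check whether it is still reported idle.
     * Requires CAP_SYS_ADMIN. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        long psize = sysconf(_SC_PAGESIZE);
        volatile char *p = mmap(NULL, psize, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        p[0] = 1;                               /* populate the page */

        /* Resolve the page's PFN from /proc/self/pagemap (bits 0-54). */
        uint64_t entry = 0;
        int pm = open("/proc/self/pagemap", O_RDONLY);
        pread(pm, &entry, sizeof(entry),
              ((uintptr_t)p / psize) * sizeof(entry));
        uint64_t pfn = entry & ((1ULL << 55) - 1);

        /* Each 64-bit word of the bitmap covers 64 consecutive PFNs. */
        int bm = open("/sys/kernel/mm/page_idle/bitmap", O_RDWR);
        uint64_t bits = 1ULL << (pfn % 64);
        off_t off = (pfn / 64) * sizeof(bits);

        pwrite(bm, &bits, sizeof(bits), off);   /* mark the page idle */
        p[0]++;                                 /* access the page again */
        pread(bm, &bits, sizeof(bits), off);    /* re-reading rescans Accessed */
        printf("still idle: %s\n",
               bits & (1ULL << (pfn % 64)) ? "yes" : "no");
        return 0;
    }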
1024 checking, an architecture-agnostic way to find the stack pointer
1056 "device-physical" addresses which is needed for using a DAX
1062 # Helpers to mirror range of the CPU page tables of a process into device page
1094 Enable the definition of PG_arch_x page flags with x > 1. Only
1095 suitable for 64-bit architectures with CONFIG_FLATMEM or
1097 enough room for additional bits in page->flags.
1105 on EXPERT systems. /proc/vmstat will only show page counts
1116 bool "Enable infrastructure for get_user_pages()-related unit tests"
1120 to make ioctl calls that can launch kernel-based unit tests for
1125 the non-_fast variants.
1127 There is also a sub-test that allows running dump_page() on any
1129 range of user-space addresses. These pages are either pinned via
1172 # struct io_mapping based helper. Selected by drivers that need it
1186 not mapped to other processes and other kernel page tables.
1207 handle page faults in userland.
1227 file-backed memory types like shmem and hugetlbfs.
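
A minimal sketch of the registration half of the userfaultfd API: create the descriptor with the userfaultfd() syscall, perform the UFFDIO_API handshake, and register an anonymous range for missing-page events. Servicing the faults (reading struct uffd_msg from the descriptor and resolving them with UFFDIO_COPY, usually in a second thread) is omitted here.

    /* Sketch: create a userfaultfd and register an anonymous range for
     * missing-page events. Fault servicing is left out for brevity. */
    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        long psize = sysconf(_SC_PAGESIZE);
        int uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);

        if (uffd < 0) {
            perror("userfaultfd");
            return 1;
        }

        struct uffdio_api api = { .api = UFFD_API };
        if (ioctl(uffd, UFFDIO_API, &api)) {
            perror("UFFDIO_API");
            return 1;
        }

        void *area = mmap(NULL, 4 * psize, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (area == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        struct uffdio_register reg = {
            .range = { .start = (unsigned long)area, .len = 4 * psize },
            .mode  = UFFDIO_REGISTER_MODE_MISSING,
        };
        if (ioctl(uffd, UFFDIO_REGISTER, &reg)) {
            perror("UFFDIO_REGISTER");
            return 1;
        }
        printf("registered %lu bytes with userfaultfd\n",
               4 * (unsigned long)psize);
        return 0;
    }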
1229 # multi-gen LRU {
1231 bool "Multi-Gen LRU"
1233 # make sure folio->flags has enough spare bits
1237 Documentation/admin-guide/mm/multigen_lru.rst for details.
1243 This option enables the multi-gen LRU by default.
1252 This option has a per-memcg and per-node memory overhead.
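
Whatever the build-time default, the referenced multigen_lru.rst describes a runtime switch at /sys/kernel/mm/lru_gen/enabled, which accepts y/n or a hex mask of components. A minimal sketch that enables everything (root required):

    /* Sketch: enable all multi-gen LRU components at runtime, regardless
     * of the Kconfig default, via the lru_gen sysfs interface. */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/kernel/mm/lru_gen/enabled";
        FILE *f = fopen(path, "w");

        if (!f) {
            perror(path);
            return 1;
        }
        fputs("y\n", f);    /* "n" disables; a hex mask selects components */
        fclose(f);
        return 0;
    }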
1262 Allow per-vma locking during page fault handling.
1265 handling page faults instead of taking mmap_lock.