Searched full:pages (Results 1 – 25 of 338) sorted by relevance
| /Documentation/admin-guide/mm/ |
| D | hugetlbpage.rst |
  2: HugeTLB Pages
  28: persistent hugetlb pages in the kernel's huge page pool. It also displays
  30: and surplus huge pages in the pool of huge pages of default size.
  46: is the size of the pool of huge pages.
  48: is the number of huge pages in the pool that are not yet
  51: is short for "reserved," and is the number of huge pages for
  53: but no allocation has yet been made. Reserved huge pages
  55: huge page from the pool of huge pages at fault time.
  57: is short for "surplus," and is the number of huge pages in
  59: maximum number of surplus huge pages is controlled by
  [all …]
|
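The hugetlbpage.rst hits above describe the per-size huge page pool counters exported through /proc/meminfo. A minimal sketch, assuming only that /proc is mounted, that reads the default-size pool state the excerpt refers to:

    /* Read the default hugetlb pool counters described in
     * hugetlbpage.rst (HugePages_Total/Free/Rsvd/Surp in /proc/meminfo). */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[256];
        FILE *f = fopen("/proc/meminfo", "r");

        if (!f) {
            perror("fopen");
            return 1;
        }
        while (fgets(line, sizeof(line), f))
            if (strncmp(line, "HugePages_", 10) == 0)
                fputs(line, stdout);   /* Total, Free, Rsvd, Surp */
        fclose(f);
        return 0;
    }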
| D | zswap.rst |
  8: Zswap is a lightweight compressed cache for swap pages. It takes pages that are
  26: Zswap evicts pages from compressed cache on an LRU basis to the backing swap
  40: When zswap is disabled at runtime it will stop storing pages that are
  42: back into memory all of the pages stored in the compressed pool. The
  43: pages stored in zswap will remain in the compressed pool until they are
  45: pages out of the compressed pool, a swapoff on the swap device(s) will
  46: fault back into memory all swapped out pages, including those in the
  52: Zswap receives pages for compression from the swap subsystem and is able to
  53: evict pages from its own compressed pool on an LRU basis and write them back to
  60: pages are freed. The pool is not preallocated. By default, a zpool
  [all …]
|
| D | idle_page_tracking.rst |
  8: The idle page tracking feature allows tracking which memory pages are being
  37: Only accesses to user memory pages are tracked. These are pages mapped to a
  38: process address space, page cache and buffer pages, swap cache pages. For other
  39: page types (e.g. SLAB pages) an attempt to mark a page idle is silently ignored,
  40: and hence such pages are never reported idle.
  42: For huge pages the idle flag is set only on the head page, so one has to read
  43: ``/proc/kpageflags`` in order to correctly count idle huge pages.
  50: That said, in order to estimate the amount of pages that are not used by a
  53: 1. Mark all the workload's pages as idle by setting corresponding bits in
  54: ``/sys/kernel/mm/page_idle/bitmap``. The pages can be found by reading
  [all …]
|
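Step 1 of the workflow quoted from idle_page_tracking.rst can be sketched in C. This is a hedged example: the PFN range is hypothetical, the bitmap path is the one named in the excerpt, and it assumes root plus CONFIG_IDLE_PAGE_TRACKING:

    /* Mark a range of page frames idle by writing set bits to the
     * page_idle bitmap.  Each 64-bit word covers 64 consecutive PFNs;
     * the word for PFN p lives at byte offset (p / 64) * 8. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        uint64_t all_idle = ~UINT64_C(0);
        uint64_t start_pfn = 0x100000;      /* hypothetical PFN range */
        int fd = open("/sys/kernel/mm/page_idle/bitmap", O_WRONLY);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* Mark 64 pages (one bitmap word) starting at start_pfn idle. */
        if (pwrite(fd, &all_idle, sizeof(all_idle),
                   (start_pfn / 64) * 8) != (ssize_t)sizeof(all_idle))
            perror("pwrite");
        close(fd);
        return 0;
    }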
| D | ksm.rst |
  18: which have been registered with it, looking for pages of identical
  21: content). The amount of pages that KSM daemon scans in a single pass
  25: KSM only merges anonymous (private) pages, never pagecache (file) pages.
  26: KSM's merged pages were originally locked into kernel memory, but can now
  27: be swapped out just like other user pages (but sharing is broken when they
  45: to cancel that advice and restore unshared pages: whereupon KSM
  55: cannot contain any pages which KSM could actually merge; even if
  80: how many pages to scan before ksmd goes to sleep
  95: specifies if pages from different NUMA nodes can be merged.
  96: When set to 0, ksm merges only pages which physically reside
  [all …]
|
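The ksm.rst excerpt mentions registering memory with KSM and cancelling that advice; from userspace both are madvise(2) calls. A minimal sketch, assuming a kernel built with CONFIG_KSM and ksmd enabled via /sys/kernel/mm/ksm/run:

    /* Opt an anonymous mapping into KSM merging, then back out again. */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 2 * 1024 * 1024;
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED || madvise(buf, len, MADV_MERGEABLE)) {
            perror("mmap/madvise");
            return 1;
        }
        /* ... fill buf with duplicate-heavy data, let ksmd scan ... */
        madvise(buf, len, MADV_UNMERGEABLE);  /* restore unshared pages */
        munmap(buf, len);
        return 0;
    }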
| D | concepts.rst |
  41: The physical system memory is divided into page frames, or pages. The
  48: pages. These mappings are described by page tables that allow
  53: addresses of actual pages used by the software. The tables at higher
  54: levels contain physical addresses of the pages belonging to the lower
  64: Huge Pages
  75: Many modern CPU architectures allow mapping of the memory pages
  77: it is possible to map 2M and even 1G pages using entries in the second
  78: and the third level page tables. In Linux such pages are called
  79: `huge`. Usage of huge pages significantly reduces pressure on TLB,
  83: memory with the huge pages. The first one is `HugeTLB filesystem`, or
  [all …]
|
| D | pagemap.rst |
  37: swap. Unmapped pages return a null PFN. This allows determining
  38: precisely which pages are mapped (or in swap) and comparing mapped
  39: pages between processes.
  101: An order N block has 2^N physically contiguous pages, with the BUDDY flag
  104: A compound page with order N consists of 2^N physically contiguous pages.
  107: pages are hugeTLB pages (Documentation/admin-guide/mm/hugetlbpage.rst),
  109: However in this interface, only huge/giga pages are made visible
  120: Identical memory pages dynamically shared between one or more processes.
  122: Contiguous pages which construct THP of any size and mapped by any granularity.
  159: not a candidate for LRU page reclaims, e.g. ramfs pages,
  [all …]
|
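The pagemap.rst hits describe the per-page 64-bit entries in /proc/pid/pagemap (bit 63 = present, bits 0-54 = PFN). A small self-query sketch; it assumes a 64-bit build so the file offset fits in off_t, and note the PFN field reads back as zero without CAP_SYS_ADMIN on recent kernels:

    /* Look up the pagemap entry for one of our own virtual addresses. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int dummy = 42;                     /* some mapped page */
        long psize = sysconf(_SC_PAGESIZE);
        uint64_t entry;
        off_t off = ((uintptr_t)&dummy / psize) * sizeof(entry);
        int fd = open("/proc/self/pagemap", O_RDONLY);

        if (fd < 0 || pread(fd, &entry, sizeof(entry), off) != sizeof(entry)) {
            perror("pagemap");
            return 1;
        }
        printf("present=%llu pfn=0x%llx\n",
               (unsigned long long)(entry >> 63),
               (unsigned long long)(entry & ((UINT64_C(1) << 55) - 1)));
        close(fd);
        return 0;
    }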
| D | transhuge.rst |
  11: using huge pages for the backing of virtual memory with huge pages
  51: increments of a power-of-2 number of pages. mTHP can back anonymous
  66: collapses sequences of basic pages into PMD-sized huge pages.
  150: pages unless hugepages are immediately available. Clearly if we spend CPU
  152: use hugepages later instead of regular pages. This isn't always
  166: allocation failure and directly reclaim pages and compact
  173: to reclaim pages and wake kcompactd to compact memory so that
  175: of khugepaged to then install the THP pages later.
  181: pages and wake kcompactd to compact memory so that THP is
  207: "underused". A THP is underused if the number of zero-filled pages in
  [all …]
|
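The admin-guide transhuge.rst hits discuss when the kernel backs a region with huge pages. From userspace, the per-region request is MADV_HUGEPAGE, which takes effect when the global policy in /sys/kernel/mm/transparent_hugepage/enabled is "always" or "madvise". A minimal sketch:

    /* Request THP backing for an anonymous region.  Whether PMD-sized
     * pages are actually installed depends on the global policy and on
     * alignment/availability; check AnonHugePages in /proc/self/smaps. */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 4 * 1024 * 1024;
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED || madvise(buf, len, MADV_HUGEPAGE)) {
            perror("mmap/madvise");
            return 1;
        }
        /* ... touch the memory; the fault path or khugepaged does the rest ... */
        munmap(buf, len);
        return 0;
    }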
| /Documentation/mm/ |
| D | unevictable-lru.rst |
  34: main memory will have over 32 million 4k pages in a single node. When a large
  35: fraction of these pages are not evictable for any reason [see below], vmscan
  37: of pages that are evictable. This can result in a situation where all CPUs are
  41: The unevictable list addresses the following classes of unevictable pages:
  51: The infrastructure may also be able to handle other conditions that make pages
  104: lru_list enum element). The memory controller tracks the movement of pages to
  108: not attempt to reclaim pages on the unevictable list. This has a couple of
  111: (1) Because the pages are "hidden" from reclaim on the unevictable list, the
  112: reclaim process can be more efficient, dealing only with pages that have a
  115: (2) On the other hand, if too many of the pages charged to the control group
  [all …]
|
| D | free_page_reporting.rst |
  6: lists of pages that are currently unused by the system. This is useful in
  8: notify the hypervisor that it is no longer using certain pages in memory.
  20: pages to the driver. The API will start reporting pages 2 seconds after
  24: Pages reported will be stored in the scatterlist passed to the reporting
  26: While pages are being processed by the report function they will not be
  28: the pages will be returned to the free area from which they were obtained.
  36: left off in terms of reporting free pages.
|
| D | page_migration.rst |
  5: Page migration allows moving the physical location of pages between
  8: system rearranges the physical location of those pages.
  10: Also see Documentation/mm/hmm.rst for migrating pages to or from device
  14: by moving pages near to the processor where the process accessing that memory
  18: pages are located through the MF_MOVE and MF_MOVE_ALL options while setting
  19: a new memory policy via mbind(). The pages of a process can also be relocated
  21: migrate_pages() function call takes two sets of nodes and moves pages of a
  28: pages of a process are located. See also the numa_maps documentation in the
  33: administrator may detect the situation and move the pages of the process
  36: through user space processes that move pages. A special function call
  [all …]
|
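The page_migration.rst excerpt names mbind() with the move flags as one way to trigger migration. A hedged sketch, assuming the libnuma headers (numaif.h) are installed and at least one NUMA node 0 exists; on some distributions the mbind wrapper also needs linking with -lnuma:

    /* Bind a region to node 0 and migrate its existing pages there. */
    #include <numaif.h>
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 1024 * 1024;
        unsigned long nodemask = 1UL << 0;      /* node 0 */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED)
            return 1;
        ((char *)buf)[0] = 1;                   /* fault a page in first */
        if (mbind(buf, len, MPOL_BIND, &nodemask, 8 * sizeof(nodemask),
                  MPOL_MF_MOVE))
            perror("mbind");
        munmap(buf, len);
        return 0;
    }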
| D | balance.rst |
  25: mapped pages from the direct mapped pool, instead of falling back on
  27: or not). A similar argument applies to highmem and direct mapped pages.
  28: OTOH, if there is a lot of free dma pages, it is preferable to satisfy
  33: _total_ number of free pages fell below 1/64 th of total memory. With the
  42: at init time how many free pages we should aim for while balancing any
  55: fancy, we could assign different weights to free pages in different
  59: it becomes less significant to consider the free dma pages while
  68: fall back into regular zone. This also makes sure that HIGHMEM pages
  77: highmem pages. kswapd looks at the zone_wake_kswapd field in the zone
  86: the number of pages falls below watermark[WMARK_MIN], the hysteric field
  [all …]
|
| D | zsmalloc.rst |
  9: (0-order) pages, it would suffer from very high fragmentation --
  13: To overcome these issues, zsmalloc allocates a bunch of 0-order pages
  15: pages act as a single higher-order page i.e. an object can span 0-order
  16: page boundaries. The code refers to these linked pages as a single entity
  83: the number of pages allocated for the class
  85: the number of 0-order pages to make a zspage
  87: the approximate number of pages class compaction can free
  99: Each zspage can contain up to ZSMALLOC_CHAIN_SIZE physical (0-order) pages.
  104: characteristics in terms of the number of pages per zspage and the number
  121: Size class #100 consists of zspages with 2 physical pages each, which can
  [all …]
|
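The last zsmalloc.rst hit is truncated, but the relationship it starts to describe is simple: because objects may straddle 0-order page boundaries, a zspage built from N physical pages holds (N * PAGE_SIZE) / class_size objects. A worked example; the class size used here is illustrative, not taken from the source:

    /* Worked version of the zsmalloc size-class arithmetic. */
    #include <stdio.h>

    #define PAGE_SIZE 4096

    int main(void)
    {
        int pages_per_zspage = 2;   /* matches the 2-page zspage in the hit */
        int class_size = 1632;      /* illustrative class size in bytes */

        printf("objects per zspage: %d\n",
               pages_per_zspage * PAGE_SIZE / class_size);   /* -> 5 */
        return 0;
    }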
| D | z3fold.rst |
  5: z3fold is a special purpose allocator for storing compressed pages.
  6: It is designed to store up to three compressed pages per physical page.
  13: * z3fold can hold up to 3 compressed pages in its page
  18: stores an integral number of compressed pages per page, but it can store
  19: up to 3 pages unlike zbud which can store at most 2. Therefore the
|
| D | transhuge.rst |
  15: can continue working on the regular pages or regular pte mappings.
  18: regular pages should be gracefully allocated instead and mixed in
  24: backed by regular pages should be relocated on hugepages
  29: to avoid unmovable pages to fragment all the memory but such a tweak
  38: head or tail pages as usual (exactly as they would do on
  107: Refcounts and transparent huge pages
  111: pages:
  115: - ->_refcount in tail pages is always zero: get_page_unless_zero() never
  116: succeeds on tail pages.
  123: - map/unmap of individual pages with PTE entry increment/decrement
  [all …]
|
| D | vmemmap_dedup.rst |
  17: HugeTLB pages consist of multiple base page size pages and are supported by many
  19: details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB are
  21: consists of 512 base pages and a 1GB HugeTLB page consists of 262144 base pages.
  27: is the compound_head field, and this field is the same for all tail pages.
  29: By removing redundant ``struct page`` for HugeTLB pages, memory can be returned
  32: Different architectures support different HugeTLB pages. For example, the
  34: architectures. Because arm64 supports 4k, 16k, and 64k base pages and
  51: structs whose size is (unit: pages)::
  74: = 8 (pages)
  89: = PAGE_SIZE / 8 * 8 (pages)
  [all …]
|
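The arithmetic behind the truncated "= 8 (pages)" hit can be reconstructed for x86-64 with 4K base pages and a 64-byte struct page: the 512 struct pages of a 2MB HugeTLB page fill 8 vmemmap pages. Since the tail struct pages are identical (the excerpt's compound_head point), the duplicate vmemmap pages can be remapped to one shared copy; the exact number freed is an assumption here and should be taken from the full document. A small check:

    /* Vmemmap page counts for 2MB and 1GB HugeTLB pages, assuming
     * 4K base pages and sizeof(struct page) == 64. */
    #include <stdio.h>

    int main(void)
    {
        long page_size = 4096, struct_page = 64;
        long base_2m = (2L << 20) / page_size;      /* 512 base pages */
        long base_1g = (1L << 30) / page_size;      /* 262144         */

        printf("2MB: %ld vmemmap pages\n", base_2m * struct_page / page_size);
        printf("1GB: %ld vmemmap pages\n", base_1g * struct_page / page_size);
        return 0;   /* -> 8 and 4096 */
    }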
| D | hwpoison.rst |
  16: High level machine check handler. Handles pages reported by the
  20: This focusses on pages detected as corrupted in the background.
  27: Handles page cache pages in various states. The tricky part
  43: pages.
  72: Note some pages are always handled as late kill.
  116: some early filtering to avoid corrupted unintended pages in test suites.
  128: Only handle memory failures to pages associated with the file
  134: Limit injection to pages owned by memgroup. Specified by inode
  148: page-types -p `pidof usemem` --hwpoison # poison its pages
  151: When specified, only poison pages if ((page_flags & mask) ==
  [all …]
|
| D | multigen_lru.rst |
  24: group of pages with similar access recency. Generations establish a
  36: Fast paths reduce code complexity and runtime overhead. Unmapped pages
  37: do not require TLB flushes; clean pages do not require writeback.
  45: attainable. Specifically, pages in the same generation can be
  52: The protection of hot pages and the selection of cold pages are based
  82: Evictable pages are divided into multiple generations for each
  87: pages can be evicted regardless of swap constraints. These three
  116: ``MIN_NR_GENS``. The aging promotes hot pages to the youngest
  118: demotion of cold pages happens consequently when it increments
  136: unmapped clean pages, which are the best bet. The eviction sorts a
  [all …]
|
| /Documentation/admin-guide/sysctl/ |
| D | vm.rst |
  88: admin_reserve_kbytes defaults to min(3% of free pages, 8MB)
  117: huge pages although processes will also directly compact memory as required.
  127: Note that compaction has a non-trivial system-wide impact as pages
  140: allowed to examine the unevictable lru (mlocked pages) for pages to compact.
  143: compaction from moving pages that are unevictable. Default value is 1.
  165: Contains, as a percentage of total available memory that contains free pages
  166: and reclaimable pages, the number of pages at which the background kernel
  183: Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
  200: Contains, as a percentage of total available memory that contains free pages
  201: and reclaimable pages, the number of pages at which a process which is
  [all …]
|
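All of the vm.rst knobs above live under /proc/sys/vm and can be driven from a script or from C. A minimal root-only sketch using the compact_memory trigger that the compaction hits refer to (writing 1 compacts all zones):

    /* Trigger global memory compaction via /proc/sys/vm/compact_memory. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/proc/sys/vm/compact_memory", O_WRONLY);

        if (fd < 0 || write(fd, "1", 1) != 1) {
            perror("compact_memory");
            return 1;
        }
        close(fd);
        return 0;
    }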
| /Documentation/arch/x86/ |
| D | sgx.rst |
  37: SGX utilizes an *Enclave Page Cache (EPC)* to store pages that are associated
  39: Unlike pages used for regular memory, pages can only be accessed from outside of
  56: Regular EPC pages contain the code and data of an enclave.
  59: Thread Control Structure pages define the entry points to an enclave and
  63: Version Array pages contain 512 slots, each of which can contain a version
  69: The processor tracks EPC pages in a hardware metadata structure called the
  95: pages and establish enclave page permissions.
  108: adding and removing of enclave pages. When an enclave accesses an address
  151: use since the reset, enclave pages may be in an inconsistent state. This might
  153: reinitializes all enclave pages so that they can be allocated and re-used.
  [all …]
|
| /Documentation/ABI/testing/ |
| D | sysfs-kernel-mm-ksm |
  22: pages_shared: how many shared pages are being used.
  27: pages_to_scan: how many present pages to scan before ksmd goes
  30: pages_unshared: how many pages unique but repeatedly checked
  33: pages_volatile: how many pages changing too fast to be placed
  39: - write 2 to disable ksm and unmerge all its pages.
  50: Description: Control merging pages across different NUMA nodes.
  52: When it is set to 0 only pages from the same node are merged,
  53: otherwise pages from all nodes can be merged together (default).
|
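Pairing with the madvise() side shown earlier, the global knobs from this ABI file can be exercised directly. A minimal sketch (root required for the write) that starts ksmd and reads back pages_shared:

    /* Start ksmd via /sys/kernel/mm/ksm/run and read pages_shared. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[32];
        ssize_t n;
        int fd = open("/sys/kernel/mm/ksm/run", O_WRONLY);

        if (fd >= 0) {
            write(fd, "1", 1);          /* 1 = run ksmd, 2 = unmerge all */
            close(fd);
        }
        fd = open("/sys/kernel/mm/ksm/pages_shared", O_RDONLY);
        if (fd < 0)
            return 1;
        n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("pages_shared: %s", buf);
        }
        close(fd);
        return 0;
    }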
| /Documentation/virt/kvm/x86/ |
| D | mmu.rst |
  66: pages, pae, pse, pse36, cr0.wp, and 1GB pages. Emulated hardware also
  118: Shadow pages
  125: A nonleaf spte allows the hardware mmu to reach the leaf pages and
  126: is not related to a translation directly. It points to other shadow pages.
  131: Leaf ptes point at guest pages.
  150: Shadow pages contain the following information:
  156: Examples include real mode translation, large guest pages backed by small
  157: host pages, and gpa->hpa translations when NPT or EPT is active.
  166: so multiple shadow pages are needed to shadow one guest page.
  167: For first-level shadow pages, role.quadrant can be 0 or 1 and denotes the
  [all …]
|
| /Documentation/devicetree/bindings/mtd/ |
| D | amlogic,meson-nand.yaml |
  67: amlogic,boot-pages:
  70: Number of pages starting from offset 0, where a special ECC
  73: Also scrambling mode is enabled for such pages.
  78: Interval between pages, accessed by the ROM code. For example
  79: we have 8 pages [0, 7]. Pages 0,2,4,6 are accessed by the
  81: of pages - 1,3,5,7 are read/written without this mode.
  88: amlogic,boot-pages: [nand-is-boot-medium, "amlogic,boot-page-step"]
  89: amlogic,boot-page-step: [nand-is-boot-medium, "amlogic,boot-pages"]
|
| /Documentation/core-api/ |
| D | pin_user_pages.rst |
  35: In other words, use pin_user_pages*() for DMA-pinned pages, and
  40: multiple threads and call sites are free to pin the same struct pages, via both
  55: pages* array, and the function then pins pages by incrementing each by a special
  64: severely by huge pages, because each tail page adds a refcount to the
  68: This also means that huge pages and large folios do not suffer
  79: but the caller passed in a non-null struct pages* array, then the function
  80: sets FOLL_GET for you, and proceeds to pin pages by incrementing the refcount
  89: Tracking dma-pinned pages
  93: pages:
  115: * Because of that limitation, special handling is applied to the zero pages
  [all …]
|
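The pin_user_pages.rst hits describe the kernel-internal pinning API. The sketch below is pseudocode against a recent tree, not a drop-in driver: the function's signature has changed across kernel versions (older ones take an extra vmas argument), and locking around the call is elided:

    /* Pin a user buffer for DMA with FOLL_LONGTERM, release with
     * unpin_user_pages().  mmap_lock handling elided; see the document. */
    #include <linux/mm.h>

    static int pin_user_buffer(unsigned long uaddr, unsigned long nr_pages,
                               struct page **pages)
    {
        long pinned = pin_user_pages(uaddr, nr_pages,
                                     FOLL_WRITE | FOLL_LONGTERM, pages);

        if (pinned < 0)
            return pinned;
        if (pinned != nr_pages) {       /* partial pin: undo and bail */
            unpin_user_pages(pages, pinned);
            return -EFAULT;
        }
        /* ... set up DMA against pages[] ... */
        return 0;
    }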
| /Documentation/bpf/ |
| D | syscall_api.rst |
  5: The primary info for the bpf syscall is available in the `man-pages`_
  10: .. _man-pages: https://www.kernel.org/doc/man-pages/
  11: .. _bpf(2): https://man7.org/linux/man-pages/man2/bpf.2.html
|
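Since this entry only points at the man pages, a minimal raw bpf(2) call may help orient: creating a small array map directly via the syscall, no libbpf, exactly as documented in bpf(2):

    /* Create a BPF array map via the raw bpf(2) syscall. */
    #include <linux/bpf.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        union bpf_attr attr;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.map_type = BPF_MAP_TYPE_ARRAY;
        attr.key_size = 4;                  /* u32 index */
        attr.value_size = 8;
        attr.max_entries = 16;

        fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
        if (fd < 0) {
            perror("bpf");
            return 1;
        }
        printf("map fd: %d\n", fd);
        close(fd);
        return 0;
    }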
| /Documentation/trace/ |
| D | events-kmem.rst |
  26: justified, particularly if kmalloc slab pages are getting significantly
  55: a simple indicator of page allocator activity. Pages may be allocated from
  58: If pages are allocated directly from the buddy allocator, the
  68: When pages are freed in batch, the mm_page_free_batched event is also triggered.
  69: Broadly speaking, pages are taken off the LRU in bulk and
  82: for order-0 pages, reduces contention on the zone->lock and reduces the
  85: When a per-CPU list is empty or pages of the wrong type are allocated,
  90: When the per-CPU list is too full, a number of pages are freed, each one
  93: The individual nature of the events is so that pages can be tracked
  94: between allocation and freeing. A number of drain or refill pages that occur
  [all …]
|
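The events discussed above are kmem tracepoints; they can be enabled through tracefs before reading trace_pipe. A sketch assuming root and the tracefs mount at /sys/kernel/tracing (older systems use /sys/kernel/debug/tracing):

    /* Enable the kmem page-allocator tracepoints, then read
     * /sys/kernel/tracing/trace_pipe to watch the events. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static int echo1(const char *path)
    {
        int fd = open(path, O_WRONLY);
        int ok = fd >= 0 && write(fd, "1", 1) == 1;

        if (fd >= 0)
            close(fd);
        if (!ok)
            perror(path);
        return ok ? 0 : -1;
    }

    int main(void)
    {
        echo1("/sys/kernel/tracing/events/kmem/mm_page_alloc/enable");
        echo1("/sys/kernel/tracing/events/kmem/mm_page_free_batched/enable");
        return 0;
    }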