Searched full:pages (Results 1 – 25 of 232) sorted by relevance
/Documentation/admin-guide/mm/hugetlbpage.rst
    4: HugeTLB Pages
   30: persistent hugetlb pages in the kernel's huge page pool. It also displays
   32: and surplus huge pages in the pool of huge pages of default size.
   48: is the size of the pool of huge pages.
   50: is the number of huge pages in the pool that are not yet
   53: is short for "reserved," and is the number of huge pages for
   55: but no allocation has yet been made. Reserved huge pages
   57: huge page from the pool of huge pages at fault time.
   59: is short for "surplus," and is the number of huge pages in
   61: maximum number of surplus huge pages is controlled by
  [all …]

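These pool counters (total, free, reserved, surplus) are exported through
/proc/meminfo. A minimal C sketch that dumps them, assuming only the standard
HugePages_* field names; error handling trimmed::

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[256];
        FILE *f = fopen("/proc/meminfo", "r");

        if (!f)
            return 1;
        while (fgets(line, sizeof(line), f))
            /* HugePages_Total/Free/Rsvd (reserved)/Surp (surplus) */
            if (!strncmp(line, "HugePages_", 10))
                fputs(line, stdout);
        fclose(f);
        return 0;
    }
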
/Documentation/admin-guide/mm/idle_page_tracking.rst
   10: The idle page tracking feature allows tracking which memory pages are being
   39: Only accesses to user memory pages are tracked. These are pages mapped to a
   40: process address space, page cache and buffer pages, swap cache pages. For other
   41: page types (e.g. SLAB pages) an attempt to mark a page idle is silently ignored,
   42: and hence such pages are never reported idle.
   44: For huge pages the idle flag is set only on the head page, so one has to read
   45: ``/proc/kpageflags`` in order to correctly count idle huge pages.
   52: That said, in order to estimate the amount of pages that are not used by a
   55: 1. Mark all the workload's pages as idle by setting corresponding bits in
   56:    ``/sys/kernel/mm/page_idle/bitmap``. The pages can be found by reading
  [all …]

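The bitmap named in step 1 is indexed by page frame number, 64 pages per
8-byte word. A C sketch of marking a page idle and re-reading it later (the
PFN here is a hypothetical placeholder; real PFNs come from /proc/pid/pagemap,
and CAP_SYS_ADMIN is required)::

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        uint64_t pfn = 0x12345;       /* hypothetical; take real PFNs from pagemap */
        uint64_t word = UINT64_MAX;   /* a set bit means "mark this page idle" */
        off_t off = pfn / 64 * 8;
        int fd = open("/sys/kernel/mm/page_idle/bitmap", O_RDWR);

        if (fd < 0)
            return 1;
        pwrite(fd, &word, 8, off);    /* mark the whole 64-page chunk idle */
        /* ... let the workload run for a while ... */
        pread(fd, &word, 8, off);     /* bits cleared by now were accessed */
        printf("page %#llx idle: %llu\n", (unsigned long long)pfn,
               (unsigned long long)(word >> (pfn % 64)) & 1);
        close(fd);
        return 0;
    }
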
/Documentation/admin-guide/mm/ksm.rst
   20: which have been registered with it, looking for pages of identical
   23: content). The number of pages that the KSM daemon scans in a single pass
   27: KSM only merges anonymous (private) pages, never pagecache (file) pages.
   28: KSM's merged pages were originally locked into kernel memory, but can now
   29: be swapped out just like other user pages (but sharing is broken when they
   47: to cancel that advice and restore unshared pages: whereupon KSM
   57: cannot contain any pages which KSM could actually merge; even if
   82: how many pages to scan before ksmd goes to sleep
   94: specifies if pages from different NUMA nodes can be merged.
   95: When set to 0, ksm merges only pages which physically reside
  [all …]

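Anonymous pages are registered with KSM via madvise(2). A minimal sketch,
assuming CONFIG_KSM and a running ksmd::

    #define _GNU_SOURCE
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 64 * 4096;
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED)
            return 1;
        /* offer the range to ksmd; MADV_UNMERGEABLE cancels the advice */
        if (madvise(buf, len, MADV_MERGEABLE))
            return 1;
        /* ... fill pages with identical content and let ksmd scan ... */
        return 0;
    }
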
/Documentation/admin-guide/mm/concepts.rst
   43: The physical system memory is divided into page frames, or pages. The
   50: pages. These mappings are described by page tables that allow
   55: addresses of actual pages used by the software. The tables at higher
   56: levels contain physical addresses of the pages belonging to the lower
   66: Huge Pages
   77: Many modern CPU architectures allow mapping of the memory pages
   79: it is possible to map 2M and even 1G pages using entries in the second
   80: and the third level page tables. In Linux such pages are called
   81: `huge`. Usage of huge pages significantly reduces pressure on TLB,
   85: memory with the huge pages. The first one is `HugeTLB filesystem`, or
  [all …]

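From userspace, one way to get a huge-page-backed mapping is mmap(2) with
MAP_HUGETLB. A sketch assuming default-size (2M on x86-64) huge pages were
reserved in the pool beforehand::

    #define _GNU_SOURCE
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 2UL << 20;      /* one 2M huge page */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

        if (p == MAP_FAILED)
            return 1;                /* e.g. the huge page pool is empty */
        ((char *)p)[0] = 1;          /* the touch faults in one huge page */
        munmap(p, len);
        return 0;
    }
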
/Documentation/admin-guide/mm/transhuge.rst
   13: using huge pages for the backing of virtual memory with huge pages
   53: collapses sequences of basic pages into huge pages.
  109: pages unless hugepages are immediately available. Clearly if we spend CPU
  111: use hugepages later instead of regular pages. This isn't always
  125: allocation failure and directly reclaim pages and compact
  132: to reclaim pages and wake kcompactd to compact memory so that
  134: of khugepaged to then install the THP pages later.
  140: pages and wake kcompactd to compact memory so that THP is
  179: You can also control how many pages khugepaged should scan at each
  194: The khugepaged progress can be seen in the number of pages collapsed::
  [all …]

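Under the "madvise" THP policy, an application opts a range in with
madvise(2); a minimal sketch::

    #define _GNU_SOURCE
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 8UL << 20;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
            return 1;
        /* ask for THP backing; khugepaged may collapse the range later */
        if (madvise(p, len, MADV_HUGEPAGE))
            return 1;
        return 0;
    }
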
/Documentation/admin-guide/mm/cma_debugfs.rst
   17: - [RO] order_per_bit: Order of pages represented by one bit.
   19: - [WO] alloc: Allocate N pages from that CMA area. For example::
   23:   would try to allocate 5 pages from the cma-2 area.
   25: - [WO] free: Free N pages from that CMA area, similar to the above.

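The alloc and free files take a page count as plain text. A C sketch of the
elided example, allocating 5 pages from the cma-2 area (assumes debugfs is
mounted at /sys/kernel/debug and the area directory is named cma-2; root
required)::

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/sys/kernel/debug/cma/cma-2/alloc", O_WRONLY);

        if (fd < 0)
            return 1;
        write(fd, "5", 1);    /* try to allocate 5 pages from cma-2 */
        close(fd);
        return 0;
    }
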
/Documentation/admin-guide/mm/pagemap.rst
   36: swap. Unmapped pages return a null PFN. This allows determining
   37: precisely which pages are mapped (or in swap) and comparing mapped
   38: pages between processes.
   99: An order N block has 2^N physically contiguous pages, with the BUDDY flag
  102: A compound page with order N consists of 2^N physically contiguous pages.
  105: pages are hugeTLB pages
  108: However in this interface, only huge/giga pages are made visible
  119: identical memory pages dynamically shared between one or more processes
  121: contiguous pages which construct transparent hugepages
  158: not a candidate for LRU page reclaims, e.g. ramfs pages,
  [all …]

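Each virtual page has one 64-bit entry in /proc/pid/pagemap, indexed by
virtual page number (bit 63 = present, bits 0-54 = PFN). A sketch that looks
up one address in the calling process; note the PFN field reads as zero for
unprivileged users on recent kernels::

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char probe = 1;                     /* any faulted-in address works */
        uintptr_t vaddr = (uintptr_t)&probe;
        long psize = sysconf(_SC_PAGESIZE);
        uint64_t entry;
        int fd = open("/proc/self/pagemap", O_RDONLY);

        if (fd < 0)
            return 1;
        pread(fd, &entry, 8, vaddr / psize * 8);
        if (entry >> 63)                    /* present? */
            printf("pfn: %#llx\n",
                   (unsigned long long)(entry & ((1ULL << 55) - 1)));
        close(fd);
        return 0;
    }
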
/Documentation/vm/zswap.rst
   10: Zswap is a lightweight compressed cache for swap pages. It takes pages that are
   34: Zswap evicts pages from compressed cache on an LRU basis to the backing swap
   46: When zswap is disabled at runtime it will stop storing pages that are
   48: back into memory all of the pages stored in the compressed pool. The
   49: pages stored in zswap will remain in the compressed pool until they are
   51: pages out of the compressed pool, a swapoff on the swap device(s) will
   52: fault back into memory all swapped out pages, including those in the
   58: Zswap receives pages for compression through the Frontswap API and is able to
   59: evict pages from its own compressed pool on an LRU basis and write them back to
   66: pages are freed. The pool is not preallocated. By default, a zpool
  [all …]

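Zswap's runtime toggle is a module parameter exposed in sysfs. A C sketch of
disabling it as described above (path as exported by the zswap module; root
required)::

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/sys/module/zswap/parameters/enabled", O_WRONLY);

        if (fd < 0)
            return 1;
        write(fd, "0", 1);   /* stop storing new pages; see the text above
                                for what happens to pages already pooled */
        close(fd);
        return 0;
    }
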
/Documentation/vm/unevictable-lru.rst
   15: pages.
   30: pages and to hide these pages from vmscan. This mechanism is based on a patch
   36: main memory will have over 32 million 4k pages in a single zone. When a large
   37: fraction of these pages are not evictable for any reason [see below], vmscan
   39: of pages that are evictable. This can result in a situation where all CPUs are
   43: The unevictable list addresses the following classes of unevictable pages:
   51: The infrastructure may also be able to handle other conditions that make pages
   66: The Unevictable LRU infrastructure maintains unevictable pages on an additional
   69: (1) We get to "treat unevictable pages just like we treat other pages in the
   74: (2) We want to be able to migrate unevictable pages between nodes for memory
  [all …]

/Documentation/vm/balance.rst
   27: mapped pages from the direct mapped pool, instead of falling back on
   29: or not). A similar argument applies to highmem and direct mapped pages.
   30: OTOH, if there are a lot of free dma pages, it is preferable to satisfy
   35: _total_ number of free pages fell below 1/64th of total memory. With the
   44: at init time how many free pages we should aim for while balancing any
   57: fancy, we could assign different weights to free pages in different
   61: it becomes less significant to consider the free dma pages while
   70: fall back into regular zone. This also makes sure that HIGHMEM pages
   79: highmem pages. kswapd looks at the zone_wake_kswapd field in the zone
   88: the number of pages falls below watermark[WMARK_MIN], the hysteric field
  [all …]

/Documentation/vm/page_migration.rst
    7: Page migration allows the moving of the physical location of pages between
   10: system rearranges the physical location of those pages.
   13: by moving pages near to the processor where the process accessing that memory
   17: pages are located through the MF_MOVE and MF_MOVE_ALL options while setting
   18: a new memory policy via mbind(). The pages of a process can also be relocated
   20: migrate_pages function call takes two sets of nodes and moves pages of a
   27: pages of a process are located. See also the numa_maps documentation in the
   32: administrator may detect the situation and move the pages of the process
   35: through user space processes that move pages. A special function call
   36: "move_pages" allows the moving of individual pages within a process.
  [all …]

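The move_pages(2) call named above takes arrays of page addresses and target
nodes. A minimal sketch moving one page of the calling process to node 0,
using the libnuma wrapper from <numaif.h> (link with -lnuma; a NUMA-enabled
kernel assumed)::

    #include <numaif.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        long psize = sysconf(_SC_PAGESIZE);
        void *page = aligned_alloc(psize, psize);
        int node = 0, status = -1;

        if (!page)
            return 1;
        memset(page, 1, psize);      /* fault the page in first */
        if (move_pages(0 /* self */, 1, &page, &node, &status, MPOL_MF_MOVE))
            return 1;
        printf("page is now on node %d\n", status);
        return 0;
    }
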
/Documentation/vm/transhuge.rst
   17: can continue working on the regular pages or regular pte mappings.
   20: regular pages should be gracefully allocated instead and mixed in
   26: backed by regular pages should be relocated on hugepages
   31: to avoid unmovable pages to fragment all the memory but such a tweak
   40: head or tail pages as usual (exactly as they would do on
   56: In case you can't handle compound pages if they're returned by
  114: Refcounts and transparent huge pages
  118: pages:
  122: - ->_refcount in tail pages is always zero: get_page_unless_zero() never
  123:   succeeds on tail pages.
  [all …]

/Documentation/vm/z3fold.rst
    7: z3fold is a special purpose allocator for storing compressed pages.
    8: It is designed to store up to three compressed pages per physical page.
   15: * z3fold can hold up to 3 compressed pages in its page
   20: stores an integral number of compressed pages per page, but it can store
   21: up to 3 pages unlike zbud which can store at most 2. Therefore the

/Documentation/vm/hwpoison.rst
   18: High level machine check handler. Handles pages reported by the
   22: This focusses on pages detected as corrupted in the background.
   29: Handles page cache pages in various states. The tricky part
   45: pages.
   76: Note some pages are always handled as late kill.
  120: some early filtering to avoid corrupting unintended pages in test suites.
  131: Only handle memory failures to pages associated with the file
  137: Limit injection to pages owned by memgroup. Specified by inode
  151: page-types -p `pidof usemem` --hwpoison # poison its pages
  154: When specified, only poison pages if ((page_flags & mask) ==
  [all …]

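For testing without the page-types tool, a process can also poison one of its
own pages with madvise(2); a sketch assuming CONFIG_MEMORY_FAILURE and
CAP_SYS_ADMIN, after which the kernel handles the page like a real hardware
corruption::

    #define _GNU_SOURCE
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 4096;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
            return 1;
        ((char *)p)[0] = 1;          /* make sure the page is instantiated */
        /* inject: the kernel treats the page as hardware-corrupted */
        if (madvise(p, len, MADV_HWPOISON))
            return 1;
        return 0;
    }
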
/Documentation/vm/frontswap.rst
    7: Frontswap provides a "transcendent memory" interface for swap pages.
    9: swapped pages are saved in RAM (or a RAM-like device) instead of a swap disk.
   30: An "init" prepares the device to receive frontswap pages associated
   36: from transcendent memory and an "invalidate_area" will remove ALL pages
   52: store frontswap pages to more completely manage its memory usage.
   84: providing a clean, dynamic interface to read and write swap pages to
   88: useful for write-balancing for some RAM-like devices). Swap pages (and
   89: evicted page-cache pages) are a great use for this kind of slower-than-RAM-
   92: and write -- and indirectly "name" -- the pages.
   98: In the single kernel case, aka "zcache", pages are compressed and
  [all …]

/Documentation/vm/cleancache.rst
   15: pages that the kernel's pageframe replacement algorithm (PFRA) would like
   41: Most important, cleancache is "ephemeral". Pages which are copied into
   44: Thus, as its name implies, cleancache is not suitable for dirty pages.
   45: Cleancache has complete discretion over what pages to preserve and what
   46: pages to discard and when.
   56: an "invalidate_inode" will invalidate all pages associated with the specified
   58: all pages in all files specified by the given pool id and also surrender
   66: same UUID will receive the same pool id, thus allowing the pages to
  121: effectiveness of the pagecache. Clean pagecache pages are
  123: addressable to the kernel); fetching those pages later avoids "refaults"
  [all …]

/Documentation/vm/hugetlbfs_reserv.rst
   10: Huge pages as described at :ref:`hugetlbpage` are typically
   11: preallocated for application use. These huge pages are instantiated in a
   12: task's address space at page fault time if the VMA indicates huge pages are
   16: of huge pages at mmap() time. The idea is that if there were not enough
   17: huge pages to cover the mapping, the mmap() would fail. This was first
   19: were enough free huge pages to cover the mapping. Like most things in the
   21: 'reserve' huge pages at mmap() time to ensure that huge pages would be
   36: This is a global (per-hstate) count of reserved huge pages. Reserved
   37: huge pages are only available to the task which reserved them.
   38: Therefore, the number of huge pages generally available is computed
  [all …]

/Documentation/vm/zsmalloc.rst
   11: (0-order) pages, it would suffer from very high fragmentation --
   15: To overcome these issues, zsmalloc allocates a bunch of 0-order pages
   17: pages act as a single higher-order page i.e. an object can span 0-order
   18: page boundaries. The code refers to these linked pages as a single entity
   68: the number of pages allocated for the class
   70: the number of 0-order pages to make a zspage

/Documentation/ABI/testing/sysfs-kernel-mm-ksm
   22: pages_shared: how many shared pages are being used.
   27: pages_to_scan: how many present pages to scan before ksmd goes
   30: pages_unshared: how many pages unique but repeatedly checked
   33: pages_volatile: how many pages changing too fast to be placed
   38: write 2 to disable ksm and unmerge all its pages.
   49: Description: Control merging pages across different NUMA nodes.
   51: When it is set to 0 only pages from the same node are merged,
   52: otherwise pages from all nodes can be merged together (default).

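These files live under /sys/kernel/mm/ksm/. A C sketch that starts ksmd and
reads back one of the counters described above::

    #include <stdio.h>

    int main(void)
    {
        char buf[64];
        FILE *f = fopen("/sys/kernel/mm/ksm/run", "w");

        if (!f)
            return 1;
        fputs("1", f);       /* 0 = stop, 1 = run ksmd, 2 = unmerge all */
        fclose(f);

        f = fopen("/sys/kernel/mm/ksm/pages_shared", "r");
        if (!f)
            return 1;
        if (fgets(buf, sizeof(buf), f))
            printf("pages_shared: %s", buf);
        fclose(f);
        return 0;
    }
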
/Documentation/virt/kvm/mmu.txt
   52: pages, pae, pse, pse36, cr0.wp, and 1GB pages. Emulated hardware also
  102: Shadow pages
  109: A nonleaf spte allows the hardware mmu to reach the leaf pages and
  110: is not related to a translation directly. It points to other shadow pages.
  115: Leaf ptes point at guest pages.
  131: Shadow pages contain the following information:
  137: Examples include real mode translation, large guest pages backed by small
  138: host pages, and gpa->hpa translations when NPT or EPT is active.
  147: so multiple shadow pages are needed to shadow one guest page.
  148: For first-level shadow pages, role.quadrant can be 0 or 1 and denotes the
  [all …]

/Documentation/trace/events-kmem.rst
   26: justified, particularly if kmalloc slab pages are getting significantly
   55: a simple indicator of page allocator activity. Pages may be allocated from
   58: If pages are allocated directly from the buddy allocator, the
   68: When pages are freed in batch, the mm_page_free_batched event is also triggered.
   69: Broadly speaking, pages are taken off the LRU in bulk and
   82: for order-0 pages, reduces contention on the zone->lock and reduces the
   85: When a per-CPU list is empty or pages of the wrong type are allocated,
   90: When the per-CPU list is too full, a number of pages are freed, each one
   93: The individual nature of the events is so that pages can be tracked
   94: between allocation and freeing. A number of drain or refill pages that occur
  [all …]

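These tracepoints can be enabled through tracefs. A C sketch that switches on
mm_page_alloc and streams the trace pipe (tracefs assumed mounted at
/sys/kernel/tracing; root required)::

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        ssize_t n;
        int en = open("/sys/kernel/tracing/events/kmem/mm_page_alloc/enable",
                      O_WRONLY);
        int tp = open("/sys/kernel/tracing/trace_pipe", O_RDONLY);

        if (en < 0 || tp < 0)
            return 1;
        write(en, "1", 1);                   /* enable the tracepoint */
        while ((n = read(tp, buf, sizeof(buf))) > 0)
            write(STDOUT_FILENO, buf, n);    /* stream events as they fire */
        return 0;
    }
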
/Documentation/admin-guide/sysctl/vm.rst
   85: admin_reserve_kbytes defaults to min(3% of free pages, 8MB)
  121: huge pages although processes will also directly compact memory as required.
  128: allowed to examine the unevictable lru (mlocked pages) for pages to compact.
  131: compaction from moving pages that are unevictable. Default value is 1.
  150: Contains, as a percentage of total available memory that contains free pages
  151: and reclaimable pages, the number of pages at which the background kernel
  168: Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
  185: Contains, as a percentage of total available memory that contains free pages
  186: and reclaimable pages, the number of pages at which a process which is
  195: When a lazytime inode is constantly having its pages dirtied, the inode with
  [all …]

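Each of these knobs is a file under /proc/sys/vm/. A C sketch that triggers
global compaction via the compact_memory sysctl, equivalent to
``sysctl vm.compact_memory=1`` (root required)::

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/proc/sys/vm/compact_memory", O_WRONLY);

        if (fd < 0)
            return 1;
        write(fd, "1", 1);   /* compact all zones so huge pages can form */
        close(fd);
        return 0;
    }
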
/Documentation/nommu-mmap.txt
   21: In the MMU case: VM regions backed by arbitrary pages; copy-on-write
   25: pages.
   36: In the MMU case: VM regions backed by pages read from file; changes to
   61: In the MMU case: like the non-PROT_WRITE case, except that the pages in
   64: the mapping's backing pages. The page is then backed by swap instead.
   71: In the MMU case: VM regions backed by pages read from file; changes to
   72: pages written back to file; writes to file reflected into pages backing
   83: sequence by providing a contiguous sequence of pages to map. In that
   93: blockdev must be able to provide a contiguous run of pages without
  123: Linux man pages (ver 2.22 or later).
  [all …]

/Documentation/admin-guide/hw-vuln/multihit.rst
   43: into pages of a given size. Page tables translate virtual addresses to physical
   81: * - KVM: Mitigation: Split huge pages
  104: non-executable pages. This forces all iTLB entries to be 4K, and removes
  107: In order to mitigate the vulnerability, KVM initially marks all huge pages
  108: as non-executable. If the guest attempts to execute in one of those pages,
  109: the page is broken down into 4K pages, which are then marked executable.
  115: (non-nested) page tables. For simplicity, KVM will make large pages
  121: The KVM hypervisor mitigation mechanism for marking huge pages as
  130: non-executable huge pages in Linux kernel KVM module. All huge
  131: pages in the EPT are marked as non-executable.
  [all …]

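The mitigation string matched at line 81 is what the standard vulnerabilities
sysfs file reports. A C sketch reading it::

    #include <stdio.h>

    int main(void)
    {
        char line[128];
        FILE *f = fopen(
            "/sys/devices/system/cpu/vulnerabilities/itlb_multihit", "r");

        if (!f)
            return 1;
        if (fgets(line, sizeof(line), f))
            fputs(line, stdout);   /* e.g. "KVM: Mitigation: Split huge pages" */
        fclose(f);
        return 0;
    }
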
/Documentation/bpf/index.rst
   15: The primary info for the bpf syscall is available in the `man-pages`_
   52: .. _man-pages: https://www.kernel.org/doc/man-pages/
   53: .. _bpf(2): http://man7.org/linux/man-pages/man2/bpf.2.html