Searched full:page (Results 1 – 25 of 545) sorted by relevance

/Documentation/mm/
page_migration.rst
2 Page migration
5 Page migration allows moving the physical location of pages between
13 The main intent of page migration is to reduce the latency of memory accesses
17 Page migration allows a process to manually relocate the node on which its
23 Page migration functions are provided by the numactl package by Andi Kleen
26 which provides an interface similar to other NUMA functionality for page
29 proc(5) man page.
35 manual page migration support. Automatic page migration may be implemented
52 Page migration allows the preservation of the relative location of pages
58 Page migration occurs in several steps. First a high level
[all …]
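
The manual migration described above is exposed to userspace through the move_pages(2) system call that numactl wraps; a minimal sketch of driving it directly (the target node is an assumption; link with -lnuma)::

  #include <numaif.h>      /* move_pages(), MPOL_MF_MOVE; link with -lnuma */
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      long psize = 4096;                 /* assumed base page size */
      void *buf;
      if (posix_memalign(&buf, psize, psize))
          return 1;
      *(volatile char *)buf = 1;         /* fault the page in first */

      void *pages[1] = { buf };
      int nodes[1] = { 1 };              /* hypothetical target NUMA node */
      int status[1];

      /* pid 0 means "the calling process" */
      if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE) == -1) {
          perror("move_pages");
          return 1;
      }
      printf("page is now on node %d\n", status[0]);
      return 0;
  }
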
page_tables.rst
4 Page Tables
13 Page tables map virtual addresses as seen by the CPU into physical addresses
16 Linux defines page tables as a hierarchy which is currently five levels in
21 by the underlying physical page frame. The **page frame number** or **pfn**
22 is the physical address of the page (as seen on the external memory bus)
26 the last page of physical memory the external address bus of the CPU can
29 With a page granularity of 4KB and an address range of 32 bits, pfn 0 is at
34 As you can see, with 4KB pages the page base address uses bits 12-31 of the
36 `PAGE_SIZE` is usually defined in terms of the page shift as `(1 << PAGE_SHIFT)`
39 sizes. When Linux was created, 4KB pages and a single page table called
[all …]
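
A tiny userspace sketch of the pfn arithmetic described above, assuming the 4KB page size used in the text::

  #include <stdio.h>
  #include <stdint.h>

  #define PAGE_SHIFT 12                    /* 4KB pages, as in the text */
  #define PAGE_SIZE  (1UL << PAGE_SHIFT)

  int main(void)
  {
      uint64_t paddr  = 0x1c0000fULL;              /* arbitrary physical address */
      uint64_t pfn    = paddr >> PAGE_SHIFT;       /* page frame number */
      uint64_t base   = pfn << PAGE_SHIFT;         /* page base: bits 12 and up */
      uint64_t offset = paddr & (PAGE_SIZE - 1);   /* bits 0-11 */

      printf("pfn=%#llx base=%#llx offset=%#llx\n",
             (unsigned long long)pfn, (unsigned long long)base,
             (unsigned long long)offset);
      return 0;
  }
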
vmemmap_dedup.rst
13 The ``struct page`` structures are used to describe a physical page frame. By
14 default, there is a one-to-one mapping from a page frame to its corresponding
15 ``struct page``.
17 HugeTLB pages consist of multiple base page size pages and are supported by many
20 currently supported. Since the base page size on x86 is 4KB, a 2MB HugeTLB page
21 consists of 512 base pages and a 1GB HugeTLB page consists of 262144 base pages.
22 For each base page, there is a corresponding ``struct page``.
24 Within the HugeTLB subsystem, only the first 4 ``struct page`` are used to
25 contain unique information about a HugeTLB page. ``__NR_USED_SUBPAGE`` provides
26 this upper limit. The only 'useful' information in the remaining ``struct page``
[all …]
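
The page counts quoted above are straightforward arithmetic; a sketch (sizeof(struct page) = 64 bytes is an assumption, though it is the usual size on 64-bit kernels)::

  #include <stdio.h>

  int main(void)
  {
      unsigned long base = 4UL << 10;        /* 4KB base page */
      unsigned long sp   = 64;               /* assumed sizeof(struct page) */
      unsigned long sizes[] = { 2UL << 20, 1UL << 30 };   /* 2MB, 1GB */

      for (int i = 0; i < 2; i++) {
          unsigned long n = sizes[i] / base;
          printf("%lu MB hugepage: %lu base pages, %lu KB of struct page\n",
                 sizes[i] >> 20, n, n * sp >> 10);   /* 512/32KB, 262144/16384KB */
      }
      return 0;
  }
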
split_page_table_lock.rst
2 Split page table lock
5 Originally, mm->page_table_lock spinlock protected all page tables of the
6 mm_struct. But this approach leads to poor page fault scalability of
8 scalability, split page table lock was introduced.
10 With split page table lock we have separate per-table lock to serialize
40 Split page table lock for PTE tables is enabled compile-time if
44 Split page table lock for PMD tables is enabled, if it's enabled for PTE
47 Hugetlb and split page table lock
50 Hugetlb can support several page sizes. We use split lock only for PMD
56 takes pmd split lock for PMD_SIZE page, mm->page_table_lock
[all …]
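
A hedged in-kernel sketch of the lock pairing this describes; the helper name is hypothetical, and the NULL return of pte_offset_map_lock() only exists on recent kernels::

  #include <linux/mm.h>

  /* Hypothetical helper: touch one PTE under its split page table lock. */
  static int touch_one_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
  {
          spinlock_t *ptl;
          pte_t *pte;

          pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
          if (!pte)
                  return -EAGAIN;          /* table freed under us (recent kernels) */
          /* ... read or modify *pte while the per-table lock is held ... */
          pte_unmap_unlock(pte, ptl);

          /* PMD tables use pmd_lock() instead */
          ptl = pmd_lock(mm, pmd);
          /* ... operate on the PMD entry ... */
          spin_unlock(ptl);
          return 0;
  }
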
page_frags.rst
2 Page fragments
5 A page fragment is an arbitrary-length arbitrary-offset area of memory
6 which resides within a 0 or higher order compound page. Multiple
7 fragments within that page are individually refcounted, in the page's
11 simple allocation framework for page fragments. This is used by the
16 In order to make use of the page fragment APIs a backing page fragment
18 and tracking, allowing multiple calls to make use of a cached page. The
34 Many network device drivers use a similar methodology for allocating page
35 fragments, but the page fragments are cached at the ring or descriptor
37 way of tearing down a page cache. For this reason __page_frag_cache_drain
[all …]
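
A hedged sketch of the API this describes; the header that declares page_frag_alloc() has moved between kernel versions, so treat the include as an assumption::

  #include <linux/gfp.h>
  #include <linux/mm.h>   /* page_frag_alloc() lived here on older kernels;
                           * newer ones have <linux/page_frag_cache.h> */

  static struct page_frag_cache frag_cache;   /* zero-initialized backing cache */

  /* Carve a small refcounted fragment out of the cached compound page. */
  static void *grab_frag(unsigned int size)
  {
          return page_frag_alloc(&frag_cache, size, GFP_KERNEL);
  }

  static void drop_frag(void *data)
  {
          page_frag_free(data);   /* drop this fragment's reference */
  }
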
hmm.rst
7 of this being specialized struct page for such memory (see sections 5 to 7 of
21 CPU page-table mirroring works and the purpose of HMM in this context. The
108 space by duplicating the CPU page table in the device page table so the same
112 To achieve this, HMM offers a set of helpers to populate the device page table
113 while keeping track of CPU page table updates. Device page table updates are
114 not as easy as CPU page table updates. To update the device page table, you must
122 allows allocating a struct page for each page of device memory. Those pages
125 looks like a page that is swapped out to disk from the CPU point of view. Using a
126 struct page gives the easiest and cleanest integration with existing mm
131 Note that any CPU access to a device page triggers a page fault and a migration
[all …]
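
hmm.rst itself walks through a mirroring loop built on hmm_range_fault() and the interval-notifier sequence counts; a condensed, hedged version of that pattern (error handling trimmed, field names per recent kernels)::

  #include <linux/hmm.h>
  #include <linux/mmu_notifier.h>

  /* Hypothetical mirroring helper; interval_sub must already be registered
   * on this mm with mmu_interval_notifier_insert(). */
  static int mirror_range(struct mmu_interval_notifier *interval_sub,
                          struct mm_struct *mm, unsigned long start,
                          unsigned long end, unsigned long *pfns)
  {
          struct hmm_range range = {
                  .notifier      = interval_sub,
                  .start         = start,
                  .end           = end,
                  .hmm_pfns      = pfns,
                  .default_flags = HMM_PFN_REQ_FAULT,
          };
          int ret;

  again:
          range.notifier_seq = mmu_interval_read_begin(interval_sub);
          mmap_read_lock(mm);
          ret = hmm_range_fault(&range);    /* snapshot/fault the CPU page table */
          mmap_read_unlock(mm);
          if (ret == -EBUSY)
                  goto again;               /* CPU page table changed; retry */
          if (ret)
                  return ret;

          /* take the device page table lock, recheck the sequence, and only
           * then commit the pfns to the device page table */
          if (mmu_interval_read_retry(interval_sub, range.notifier_seq))
                  goto again;
          return 0;
  }
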
page_owner.rst
2 page owner: Tracking who allocated each page
8 page owner is for tracking who allocated each page.
11 and order of pages is recorded in dedicated storage for each page.
15 Although we already have tracepoints for tracing page allocation/free,
16 using them to analyze who allocated each page is rather complex. We need
22 page owner can also be used for various purposes. For example, accurate
24 each page. It is already implemented and activated if page owner is
32 page owner is disabled by default. So, if you'd like to use it, you need
34 with page owner and page owner is disabled at runtime due to not enabling
37 memory overhead. And, page owner inserts just two unlikely branches into
[all …]
mmu_notifier.rst
1 When do you need to notify inside page table lock?
6 the page table lock. But that notification is not necessary in all cases.
9 thing like ATS/PASID to get the IOMMU to walk the CPU page table to access a
11 those secondary TLB while holding page table lock when clearing a pte/pmd:
13 A) the page backing the address is freed before mmu_notifier_invalidate_range_end()
14 B) a page table entry is updated to point to a new page (COW, write fault
15 on zero page, __replace_page(), ...)
18 a page that might now be used by some completely different task.
23 - take page table lock
24 - clear page table entry and notify ([pmd/pte]p_huge_clear_flush_notify())
[all …]
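
A heavily hedged sketch of the ordering required for case B (replacing a PTE with a new page); mmu_notifier_range_init() lost its vma argument around v6.3 and the exact notify helpers vary by version, so treat this purely as an ordering illustration::

  #include <linux/mm.h>
  #include <linux/mmu_notifier.h>

  /* Hypothetical helper: replace one PTE while keeping secondary TLBs in sync. */
  static void replace_one_pte(struct vm_area_struct *vma, pmd_t *pmd,
                              pte_t *ptep, unsigned long addr, pte_t newpte)
  {
          struct mm_struct *mm = vma->vm_mm;
          struct mmu_notifier_range range;
          spinlock_t *ptl;

          mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
                                  addr, addr + PAGE_SIZE);
          mmu_notifier_invalidate_range_start(&range);

          ptl = pte_lockptr(mm, pmd);               /* take page table lock */
          spin_lock(ptl);
          /* clear the entry and notify secondary TLBs under the lock */
          ptep_clear_flush_notify(vma, addr, ptep);
          set_pte_at(mm, addr, ptep, newpte);       /* point at the new page */
          spin_unlock(ptl);

          mmu_notifier_invalidate_range_end(&range);
  }
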
hugetlbfs_reserv.rst
10 in a task's address space at page fault time if the VMA indicates huge pages
11 are to be used. If no huge page exists at page fault time, the task is sent
12 a SIGBUS and often dies an unhappy death. Shortly after huge page support
20 available for page faults in that mapping. The description below attempts to
21 describe how huge page reserve processing is done in the v4.10 kernel.
50 There is one reserve map for each huge page mapping in the system.
60 The 'from' and 'to' fields of the file region structure are huge page
72 reserves) has unmapped a page from this task (the child)
74 Page Flags
75 The PagePrivate page flag is used to indicate that a huge page
[all …]
transhuge.rst
40 address of the page and its temporary pinning to release after the I/O
41 is complete, so they won't ever notice the fact the page is huge. But
42 if any driver is going to mangle the page structure of the tail
43 page (like for checking page->mapping or other bits that are relevant
44 for the head page and not the tail page), it should be updated to jump
45 to check head page instead. Taking a reference on any head/tail page would
46 prevent the page from being split by anyone.
68 calling split_huge_page(page). This is what the Linux VM does before
70 if the page is pinned and you must handle this correctly.
99 page table lock (pmd_lock()) and re-run pmd_trans_huge. Taking the
[all …]
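
A short sketch of the split step the text describes; split_huge_page() must be called with the page locked and can fail, e.g. when the page is pinned (folio-based kernels spell these helpers differently)::

  #include <linux/huge_mm.h>
  #include <linux/page-flags.h>
  #include <linux/pagemap.h>

  /* Hypothetical helper: split a THP before working on individual subpages. */
  static int split_if_huge(struct page *page)
  {
          int ret = 0;

          if (PageTransHuge(page)) {
                  lock_page(page);             /* split_huge_page() needs the lock */
                  ret = split_huge_page(page); /* can fail, e.g. page is pinned */
                  unlock_page(page);
          }
          return ret;
  }
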
memory-model.rst
20 All the memory models track the status of physical page frames using
21 struct page arranged in one or more arrays.
24 mapping between the physical page frame number (PFN) and the
25 corresponding `struct page`.
28 helpers that allow the conversion from PFN to `struct page` and vice
40 have entries in the `mem_map` array. The `struct page` objects
46 memory to the page allocator.
53 With FLATMEM, the conversion between a PFN and the `struct page` is
57 The `ARCH_PFN_OFFSET` defines the first page frame number for
104 corresponding `struct page` - a "classic sparse" and "sparse
[all …]
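
For FLATMEM the PFN conversion is plain pointer arithmetic into mem_map; these are the generic definitions from include/asm-generic/memory_model.h::

  /* generic FLATMEM conversions (include/asm-generic/memory_model.h) */
  #define __pfn_to_page(pfn)   (mem_map + ((pfn) - ARCH_PFN_OFFSET))
  #define __page_to_pfn(page)  ((unsigned long)((page) - mem_map) + ARCH_PFN_OFFSET)
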
/Documentation/admin-guide/mm/
pagemap.rst
2 Examining Process Page Tables
6 userspace programs to examine the page tables and related information by
12 physical frame each virtual page is mapped to. It contains one 64-bit
13 value for each virtual page, containing the following data (from
16 * Bits 0-54 page frame number (PFN) if present
21 * Bit 56 page exclusively mapped (since 4.2)
24 * Bit 58 pte is a guard region (since 6.15) (see the madvise(2) man page)
26 * Bit 61 page is file-page or shared-anon (since 3.5)
27 * Bit 62 page swapped
28 * Bit 63 page present
[all …]
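
The entry layout above can be consumed directly from userspace; a minimal sketch that looks up one virtual address of the current process (reading the PFN bits requires root on kernels >= 4.0)::

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      long psize = sysconf(_SC_PAGESIZE);
      volatile char probe = 1;               /* something mapped to look up */
      uintptr_t vaddr = (uintptr_t)&probe;
      uint64_t entry;

      int fd = open("/proc/self/pagemap", O_RDONLY);
      if (fd < 0) { perror("open"); return 1; }

      /* one 64-bit entry per virtual page */
      off_t off = (off_t)(vaddr / psize) * sizeof(entry);
      if (pread(fd, &entry, sizeof(entry), off) != (ssize_t)sizeof(entry)) {
          perror("pread");
          return 1;
      }
      close(fd);

      int present = (int)((entry >> 63) & 1);        /* bit 63: page present */
      uint64_t pfn = entry & ((1ULL << 55) - 1);     /* bits 0-54: PFN */
      printf("present=%d pfn=%#llx\n", present, (unsigned long long)pfn);
      return 0;
  }
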
idle_page_tracking.rst
2 Idle Page Tracking
8 The idle page tracking feature allows tracking which memory pages are being
21 The idle page tracking API is located at ``/sys/kernel/mm/page_idle``.
25 The file implements a bitmap where each bit corresponds to a memory page. The
26 bitmap is represented by an array of 8-byte integers, and the page at PFN #i is
28 set, the corresponding page is idle.
30 A page is considered idle if it has not been accessed since it was marked idle
33 To mark a page idle one has to set the bit corresponding to
34 the page by writing to the file. A value written to the file is OR-ed with the
38 process address space, page cache and buffer pages, swap cache pages. For other
[all …]
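
A short sketch of marking one page idle through the bitmap layout just described (the PFN is hypothetical; requires root)::

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <unistd.h>

  /* Set the idle bit for one PFN: bit (pfn % 64) of 8-byte word (pfn / 64). */
  static int mark_idle(uint64_t pfn)
  {
      uint64_t word = 1ULL << (pfn % 64);
      off_t off = (off_t)(pfn / 64) * sizeof(word);

      int fd = open("/sys/kernel/mm/page_idle/bitmap", O_WRONLY);
      if (fd < 0)
          return -1;
      /* the value written is OR-ed into the bitmap */
      ssize_t n = pwrite(fd, &word, sizeof(word), off);
      close(fd);
      return n == (ssize_t)sizeof(word) ? 0 : -1;
  }

  int main(void)
  {
      return mark_idle(0x1200) ? 1 : 0;   /* hypothetical PFN */
  }
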
transhuge.rst
12 that supports the automatic promotion and demotion of page sizes and
19 in the examples below we presume that the basic page size is 4K and
20 the huge page size is 2M, although the actual numbers may vary
26 requiring larger clear-page/copy-page operations in page faults, which is a
28 single page fault for each 2M virtual region touched by userland (so
49 ability to allocate memory in blocks that are bigger than a base page
54 those outlined above: Page faults are significantly reduced (by a
56 prominent because the size of each page isn't as huge as the PMD-sized
57 variant and there is less memory to clear in each page fault. Some
84 lived page allocations even for hugepage unaware applications that
[all …]
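
Applications can opt a range into THP explicitly; a minimal sketch using madvise(MADV_HUGEPAGE), assuming the 2M huge page size presumed in the text::

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>

  int main(void)
  {
      size_t huge = 2UL << 20;       /* assumed 2M huge page size */
      size_t len  = 2 * huge;
      void *buf;

      /* align to the huge page size so THP can back the whole range */
      if (posix_memalign(&buf, huge, len))
          return 1;

      /* ask for huge pages on this range (works with the "madvise" policy) */
      if (madvise(buf, len, MADV_HUGEPAGE))
          perror("madvise");

      memset(buf, 0, len);           /* touching may now fault in 2M pages */
      free(buf);
      return 0;
  }
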
hugetlbpage.rst
9 the Linux kernel. This support is built on top of multiple page size support
11 support 4K and 2M (1G if architecturally supported) page sizes, ia64
12 architecture supports multiple page sizes 4K, 8K, 64K, 256K, 1M, 4M, 16M,
19 Users can use the huge page support in the Linux kernel by either using the mmap
28 persistent hugetlb pages in the kernel's huge page pool. It also displays
29 default huge page size and information about the number of free, reserved
31 The huge page size is needed for generating the proper alignment and
32 size of the arguments to system calls that map huge page regions.
55 huge page from the pool of huge pages at fault time.
62 with each hugetlb page is enabled, the number of surplus huge pages
[all …]
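
A minimal sketch of the mmap route into the huge page pool (assumes the default 2M huge page size and a non-empty pool; see /proc/sys/vm/nr_hugepages)::

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  int main(void)
  {
      size_t len = 2UL << 20;        /* one 2M huge page (common x86 default) */

      /* MAP_HUGETLB draws from the kernel's persistent huge page pool */
      void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
      if (p == MAP_FAILED) {
          perror("mmap");            /* e.g. pool empty: check nr_hugepages */
          return 1;
      }
      memset(p, 0, len);
      munmap(p, len);
      return 0;
  }
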
userfaultfd.rst
10 memory page faults, something otherwise only the kernel code could do.
19 regions of virtual memory with it. Then, any page faults which occur within the
38 VMAs are not suitable for page- (or hugepage-) granular fault tracking
58 handle kernel page faults have been a useful tool for exploiting the kernel).
63 - Any user can always create a userfaultfd which traps userspace page faults
67 - In order to also trap kernel page faults for the address space, either the
80 to /dev/userfaultfd can always create userfaultfds that trap kernel page faults;
99 events, except page fault notifications, may be generated:
102 other than page faults are supported. These events are described in more
117 existing page contents from userspace.
[all …]
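
A minimal sketch of creating a userfaultfd and registering a region for missing-page faults; a real user would follow this with a handler thread that reads uffd_msg events and resolves them with UFFDIO_COPY::

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <linux/userfaultfd.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int main(void)
  {
      size_t len = 2 * 4096;

      /* create the userfaultfd and negotiate the API version */
      long uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);
      if (uffd < 0) { perror("userfaultfd"); return 1; }

      struct uffdio_api api = { .api = UFFD_API };
      if (ioctl(uffd, UFFDIO_API, &api)) { perror("UFFDIO_API"); return 1; }

      void *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (region == MAP_FAILED) { perror("mmap"); return 1; }

      /* trap missing-page faults for the region */
      struct uffdio_register reg = {
          .range = { .start = (unsigned long)region, .len = len },
          .mode  = UFFDIO_REGISTER_MODE_MISSING,
      };
      if (ioctl(uffd, UFFDIO_REGISTER, &reg)) { perror("UFFDIO_REGISTER"); return 1; }

      printf("registered %zu bytes at %p\n", len, region);
      return 0;
  }
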
/Documentation/trace/
ring-buffer-design.rst
40 - A page outside the ring buffer used solely (for the most part)
44 - a pointer to the page that the reader will use next
47 - a pointer to the page that will be written to next
50 - a pointer to the page with the last finished non-nested write.
110 At initialization a reader page is allocated for the reader that is not
114 to the same page.
116 The reader page is initialized to have its next pointer pointing to
117 the head page, and its previous pointer pointing to a page before
118 the head page.
120 The reader has its own page to use. At start up time, this page is
[all …]
/Documentation/virt/kvm/x86/
mmu.rst
29 so that swapping, page migration, page merging, transparent
44 pfn host page frame number
52 pte page table entry (used also to refer generically to paging structure
121 The principal data structure is the shadow page, 'struct kvm_mmu_page'. A
122 shadow page contains 512 sptes, which can be either leaf or nonleaf sptes. A
123 shadow page may contain a mix of leaf and nonleaf sptes.
147 (*) the guest hypervisor will encode the ngva->gpa translation into its page
152 The level in the shadow paging hierarchy that this shadow page belongs to.
155 If set, leaf sptes reachable from this page are for a linear range.
161 If clear, this page corresponds to a guest page table denoted by the gfn
[all …]
/Documentation/ABI/testing/
sysfs-memory-page-offline
6 Soft-offline the memory page containing the physical address
8 physical address of the page. The kernel will then attempt
11 on the bad page list and never be reused.
14 Normally it's the base page size of the kernel, but
17 The page must still be accessible, not poisoned. The
28 Hard-offline the memory page containing the physical
30 specifying the physical address of the page. The
31 kernel will then attempt to hard-offline the page, by
32 trying to drop the page or killing any owner or
34 any processes owning the page. The kernel will avoid
[all …]
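
A short sketch of driving the soft-offline file from C (the physical address is purely illustrative; requires root)::

  #include <stdio.h>

  int main(void)
  {
      /* address below is purely illustrative */
      unsigned long long paddr = 0x2fb5000ULL;

      FILE *f = fopen("/sys/devices/system/memory/soft_offline_page", "w");
      if (!f) { perror("fopen"); return 1; }
      /* the kernel parses the written string as a physical address */
      fprintf(f, "%#llx\n", paddr);
      return fclose(f) ? 1 : 0;
  }
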
/Documentation/core-api/
cachetlb.rst
26 page tables. Meaning that if the software page tables change, it is
28 Therefore when software page table changes occur, the kernel will
29 invoke one of the following flush methods _after_ the page table
35 any previous page table modification whatsoever will be
38 This is usually invoked when the kernel page tables are
45 any previous page table modifications for the address space
50 page table operations such as what happens during
58 interface must make sure that any previous page table
68 a suitably efficient method for removing multiple page
84 page table modification for address space 'vma->vm_mm' for
[all …]
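
A hedged in-kernel sketch of the ordering the text requires: modify the software page table first, then invoke the flush (the helper name is hypothetical)::

  #include <linux/mm.h>
  #include <asm/tlbflush.h>

  /* Hypothetical helper: update one PTE, then flush that single-page range. */
  static void update_one_pte(struct vm_area_struct *vma, unsigned long addr,
                             pte_t *ptep, pte_t newpte)
  {
          set_pte_at(vma->vm_mm, addr, ptep, newpte);   /* page table change */
          flush_tlb_range(vma, addr, addr + PAGE_SIZE); /* flush _after_ it */
  }
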
pin_user_pages.rst
42 other, not the struct page(s).
64 severely by huge pages, because each tail page adds a refcount to the
65 head page. And in fact, testing revealed that, without a separate pincount
66 field, refcount overflows were seen in some huge page stress tests.
81 of each page by +1.::
95 * An actual reference count, per struct page, is required. This is because
96 multiple processes may pin and unpin a page.
98 * False positives (reporting that a page is dma-pinned, when in fact it is not)
101 * struct page may not be increased in size for this, and all fields are already
104 * Given the above, we can overload the page->_refcount field by using, sort of,
[all …]
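
A hedged sketch of a long-term pin; pin_user_pages() dropped its vmas argument around v6.5, so the call below follows recent kernels (the helper name is hypothetical)::

  #include <linux/mm.h>
  #include <linux/sched.h>

  /* Hypothetical helper: pin user pages for long-term DMA. */
  static long pin_for_dma(unsigned long start, unsigned long nr_pages,
                          struct page **pages)
  {
          long pinned;

          mmap_read_lock(current->mm);
          pinned = pin_user_pages(start, nr_pages,
                                  FOLL_WRITE | FOLL_LONGTERM, pages);
          mmap_read_unlock(current->mm);

          /* released later with unpin_user_pages(pages, pinned) */
          return pinned;
  }
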
/Documentation/translations/zh_CN/mm/
memory-model.rst
25 All the memory models use `struct page` arranged in one or more arrays to track physical page frames
28 Regardless of which memory model is selected, a mapping exists between the physical page frame number (PFN) and the corresponding `struct page`
32 helper functions that allow conversion from PFN to `struct page` and vice versa.
41 on most architectures, the holes have entries in the `mem_map` array. The `struct page` objects corresponding to the holes
51 With FLATMEM, the conversion between a PFN and `struct page` is straightforward: `PFN - ARCH_PFN_OFFSET`
64 represented by a structure that contains `section_mem_map`, which is logically a pointer to an array of `struct page`
88 With SPARSEMEM, there are two possible ways to convert a PFN to the corresponding `struct page` -- "classic sparse" and
96 operations. There is a global `struct page *vmemmap` pointer pointing to a virtually contiguous array of `struct page`
97 objects. A PFN is an index into this array, and the offset of a `struct page` from `vmemmap` is that page's PFN.
104 The virtually mapped memory map allows `struct page` objects for persistent memory devices to be stored in pre-allocated
[all …]
/Documentation/arch/powerpc/
ultravisor.rst
196 * Normal page: Page backed by normal memory and available to
199 * Shared page: A page backed by normal memory and available to both
200 the Hypervisor/QEMU and the SVM (i.e. the page has mappings in SVM and
206 * Secure page: Page backed by secure memory and only available to
240 Some ultracalls involve transferring a page of data between Ultravisor
269 Encrypt and move the contents of a page from secure memory to normal
279 uint64_t dest_ra, /* real address of destination page */
282 uint64_t order) /* page size order */
296 * U_BUSY if the page cannot currently be paged out.
301 Encrypt the contents of a secure-page and make it available to
[all …]
/Documentation/netlink/specs/
netdev.yaml
118 name: page-pool
122 doc: Unique ID of a Page Pool instance.
131 May be reported as 0 if the page pool was allocated for a netdev
132 which got destroyed already (page pools may outlast their netdevs
140 doc: Id of NAPI using this Page Pool instance.
149 Number of outstanding references to this page pool (allocated
151 socket receive queues, driver receive ring, page pool recycling
152 ring, the page pool cache, etc.
162 Seconds in CLOCK_BOOTTIME of when Page Pool was detached by
163 the driver. Once detached Page Pool can no longer be used to
[all …]
/Documentation/hwmon/
pmbus-core.rst
50 pages (see the PMBus specification for details on multi-page PMBus devices).
166 int (*read_byte_data)(struct i2c_client *client, int page, int reg);
168 Read byte from page <page>, register <reg>.
169 <page> may be -1, which means "current page".
174 int (*read_word_data)(struct i2c_client *client, int page, int phase,
177 Read word from page <page>, phase <phase>, register <reg>. If the chip does not
183 int (*write_word_data)(struct i2c_client *client, int page, int reg,
186 Write word to page <page>, register <reg>.
190 int (*write_byte)(struct i2c_client *client, int page, u8 value);
192 Write byte to page <page>, register <reg>.
[all …]
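
A hedged sketch of a driver supplying the read_word_data hook shown above; returning -ENODATA defers to the PMBus core's default register access (the driver and register choice are illustrative)::

  #include <linux/errno.h>
  #include <linux/i2c.h>
  #include "pmbus.h"   /* drivers/hwmon/pmbus/pmbus.h: pmbus_read_word_data() etc. */

  /* Hypothetical driver hook: intercept one register, defer the rest. */
  static int mydev_read_word_data(struct i2c_client *client, int page,
                                  int phase, int reg)
  {
          if (reg == PMBUS_READ_VOUT)   /* example register */
                  return pmbus_read_word_data(client, page, phase, reg);
          return -ENODATA;              /* let the core use its default access */
  }
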
