.. SPDX-License-Identifier: GPL-2.0

=====================
Physical Memory Model
=====================

Physical memory in a system may be addressed in different ways. The
simplest case is when the physical memory starts at address 0 and
spans a contiguous range up to the maximal address. It could be,
however, that this range contains small holes that are not accessible
for the CPU. Then there could be several contiguous ranges at
completely distinct addresses. And, don't forget about NUMA, where
different memory banks are attached to different CPUs.

Linux abstracts this diversity using one of the three memory models:
FLATMEM, DISCONTIGMEM and SPARSEMEM. Each architecture defines what
memory models it supports, what the default memory model is and
whether it is possible to manually override that default.

.. note::
   At time of this writing, DISCONTIGMEM is considered deprecated,
   although it is still in use by several architectures.

All the memory models track the status of physical page frames using
`struct page` arranged in one or more arrays.

Regardless of the selected memory model, there exists a one-to-one
mapping between the physical page frame number (PFN) and the
corresponding `struct page`.

Each memory model defines :c:func:`pfn_to_page` and :c:func:`page_to_pfn`
helpers that allow the conversion from PFN to `struct page` and vice
versa.

FLATMEM
=======

The simplest memory model is FLATMEM. This model is suitable for
non-NUMA systems with contiguous, or mostly contiguous, physical
memory.

In the FLATMEM memory model, there is a global `mem_map` array that
maps the entire physical memory. For most architectures, the holes
have entries in the `mem_map` array. The `struct page` objects
corresponding to the holes are never fully initialized.

To allocate the `mem_map` array, architecture specific setup code
should call the :c:func:`free_area_init` function. However, the
mappings array is not usable until the call to
:c:func:`memblock_free_all` that hands all the memory to the page
allocator.
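
As an illustration, a minimal, hypothetical architecture setup might look
as follows; `example_paging_init` is an invented name, and the
:c:func:`free_area_init` signature shown (an array of per-zone PFN limits)
is the one used by recent kernels::

  void __init example_paging_init(void)
  {
          unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0 };

          /*
           * Report the highest PFN of each zone; in this sketch all
           * memory ends up in ZONE_NORMAL.
           */
          max_zone_pfns[ZONE_NORMAL] = max_pfn;

          /* allocates and initializes mem_map for FLATMEM */
          free_area_init(max_zone_pfns);
  }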

An architecture may free parts of the `mem_map` array that do not cover the
actual physical pages. In such case, the architecture specific
:c:func:`pfn_valid` implementation should take the holes in the
`mem_map` into account.

With FLATMEM, the conversion between a PFN and the `struct page` is
straightforward: `PFN - ARCH_PFN_OFFSET` is an index into the
`mem_map` array.

The `ARCH_PFN_OFFSET` defines the first page frame number for
systems with physical memory starting at an address different from 0.
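
In code, the FLATMEM helpers reduce to pointer arithmetic on `mem_map`.
The following is a sketch modelled on the generic helpers in
include/asm-generic/memory_model.h; exact macro names and details may
differ between kernel versions::

  /* FLATMEM: the memory map is a single flat array */
  #define __pfn_to_page(pfn)   (mem_map + ((pfn) - ARCH_PFN_OFFSET))
  #define __page_to_pfn(page)  \
          ((unsigned long)((page) - mem_map) + ARCH_PFN_OFFSET)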

DISCONTIGMEM
============

The DISCONTIGMEM model treats the physical memory as a collection of
nodes, similarly to how Linux NUMA support does. For each node Linux
constructs an independent memory management subsystem represented by
`struct pglist_data` (or `pg_data_t` for short). Among other things,
`pg_data_t` holds the `node_mem_map` array that maps physical pages
belonging to that node.

The architecture specific initialization code should call
:c:func:`free_area_init_node` for each node in the system to initialize
the `pg_data_t` object and its `node_mem_map`.

Every `node_mem_map` behaves exactly as FLATMEM's `mem_map` -
every physical page frame in a node has a `struct page` entry in the
`node_mem_map` array. When DISCONTIGMEM is enabled, a portion of the
`flags` field of the `struct page` encodes the node number of the
node hosting that page.

The conversion between a PFN and the `struct page` in the
DISCONTIGMEM model is slightly more complex than in FLATMEM, as it has
to determine which node hosts the physical page and which `pg_data_t`
object holds the `struct page`.

Architectures that support DISCONTIGMEM provide :c:func:`pfn_to_nid`
to convert a PFN to the node id. The opposite conversion helper
:c:func:`page_to_nid` is generic, as it uses the node number encoded in
page->flags.

Once the node id is known, the PFN can be converted to the index into
the `node_mem_map` array to access the `struct page`.
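
Schematically, the conversion can be sketched along the lines of the
generic DISCONTIGMEM helper below; `arch_pfn_to_nid` and
`arch_local_page_offset` are architecture specific hooks and their exact
names and shapes may vary::

  /* DISCONTIGMEM: find the node first, then index into its memory map */
  #define __pfn_to_page(pfn)                                      \
  ({      unsigned long __pfn = (pfn);                            \
          unsigned long __nid = arch_pfn_to_nid(__pfn);           \
          NODE_DATA(__nid)->node_mem_map +                        \
                  arch_local_page_offset(__pfn, __nid);           \
  })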

SPARSEMEM
=========

SPARSEMEM is the most versatile memory model available in Linux and it
is the only memory model that supports several advanced features such
as hot-plug and hot-remove of the physical memory, alternative memory
maps for non-volatile memory devices and deferred initialization of
the memory map for larger systems.

The SPARSEMEM model presents the physical memory as a collection of
sections. A section is represented with `struct mem_section` that
contains `section_mem_map` that is, logically, a pointer to an array
of struct pages. However, it is stored with some other magic that aids
the sections management. The section size and the maximal number of
sections are specified using the `SECTION_SIZE_BITS` and
`MAX_PHYSMEM_BITS` constants defined by each architecture that
supports SPARSEMEM. While `MAX_PHYSMEM_BITS` is the actual width of a
physical address that an architecture supports, `SECTION_SIZE_BITS`
is an arbitrary value.

The maximal number of sections is denoted `NR_MEM_SECTIONS` and
defined as

.. math::

   NR\_MEM\_SECTIONS = 2 ^ {(MAX\_PHYSMEM\_BITS - SECTION\_SIZE\_BITS)}
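
For instance, an architecture that defines `SECTION_SIZE_BITS` as 27
(128 MiB sections) and `MAX_PHYSMEM_BITS` as 46 would have
:math:`2^{46-27} = 2^{19} = 524288` possible sections.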

The `mem_section` objects are arranged in a two-dimensional array
called `mem_sections`. The size and placement of this array depend
on `CONFIG_SPARSEMEM_EXTREME` and the maximal possible number of
sections:

* When `CONFIG_SPARSEMEM_EXTREME` is disabled, the `mem_sections`
  array is static and has `NR_MEM_SECTIONS` rows. Each row holds a
  single `mem_section` object.
* When `CONFIG_SPARSEMEM_EXTREME` is enabled, the `mem_sections`
  array is dynamically allocated. Each row contains PAGE_SIZE worth of
  `mem_section` objects and the number of rows is calculated to fit
  all the memory sections.
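
A sketch of the two layouts, loosely following the declarations in
include/linux/mmzone.h (`NR_SECTION_ROOTS` and `SECTIONS_PER_ROOT` are
the kernel's internal names for the row and column dimensions and may
change between versions)::

  #ifdef CONFIG_SPARSEMEM_EXTREME
  /* rows are allocated at boot, one PAGE_SIZE worth of sections each */
  extern struct mem_section **mem_section;
  #else
  /* static array covering all possible sections */
  extern struct mem_section mem_section[NR_SECTION_ROOTS][SECTIONS_PER_ROOT];
  #endif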

The architecture setup code should call :c:func:`sparse_init` to
initialize the memory sections and the memory maps.

With SPARSEMEM there are two possible ways to convert a PFN to the
corresponding `struct page` - a "classic sparse" and "sparse
vmemmap". The selection is made at build time and it is determined by
the value of `CONFIG_SPARSEMEM_VMEMMAP`.

The classic sparse encodes the section number of a page in page->flags
and uses high bits of a PFN to access the section that maps that page
frame. Inside a section, the PFN is the index into the array of pages.
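
Schematically, the classic sparse conversion looks roughly like the
sketch below, modelled on the generic SPARSEMEM helpers;
`__pfn_to_section` and `__section_mem_map_addr` are internal helpers
whose details may vary::

  /* classic SPARSEMEM: the high PFN bits select the section */
  #define __pfn_to_page(pfn)                                      \
  ({      unsigned long __pfn = (pfn);                            \
          struct mem_section *__sec = __pfn_to_section(__pfn);    \
          __section_mem_map_addr(__sec) + __pfn;                  \
  })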

The sparse vmemmap uses a virtually mapped memory map to optimize
pfn_to_page and page_to_pfn operations. There is a global `struct
page *vmemmap` pointer that points to a virtually contiguous array of
`struct page` objects. A PFN is an index to that array and the
offset of the `struct page` from `vmemmap` is the PFN of that
page.
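
With vmemmap the conversion therefore becomes plain pointer arithmetic
on the `vmemmap` array, roughly::

  /* SPARSEMEM_VMEMMAP: the memory map is virtually contiguous */
  #define __pfn_to_page(pfn)    (vmemmap + (pfn))
  #define __page_to_pfn(page)   (unsigned long)((page) - vmemmap)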

To use vmemmap, an architecture has to reserve a range of virtual
addresses that will map the physical pages containing the memory
map and make sure that `vmemmap` points to that range. In addition,
the architecture should implement the :c:func:`vmemmap_populate` method
that will allocate the physical memory and create page tables for the
virtual memory map. If an architecture does not have any special
requirements for the vmemmap mappings, it can use the default
:c:func:`vmemmap_populate_basepages` provided by the generic memory
management.
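
For example, an architecture without special requirements might implement
the hook as a thin wrapper. This is a hypothetical sketch; the
four-argument signature (with the `altmap` parameter) is the one used by
recent kernels::

  int __meminit vmemmap_populate(unsigned long start, unsigned long end,
                                 int node, struct vmem_altmap *altmap)
  {
          /* map the memory map with base pages, honoring an altmap if any */
          return vmemmap_populate_basepages(start, end, node, altmap);
  }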

The virtually mapped memory map allows storing `struct page` objects
for persistent memory devices in pre-allocated storage on those
devices. This storage is represented with `struct vmem_altmap` that is
eventually passed to vmemmap_populate() through a long chain of
function calls. The vmemmap_populate() implementation may use the
`vmem_altmap` along with the :c:func:`vmemmap_alloc_block_buf` helper
to allocate the memory map on the persistent memory device.
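
A populate implementation might, for instance, allocate each chunk of the
memory map as sketched below; `example_populate_chunk` is an invented
name, and the three-argument :c:func:`vmemmap_alloc_block_buf` signature
is the one found in recent kernels (older kernels used a separate altmap
helper)::

  static int __meminit example_populate_chunk(unsigned long addr, int node,
                                              struct vmem_altmap *altmap)
  {
          /* falls back to regular memory when no altmap is supplied */
          void *p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);

          if (!p)
                  return -ENOMEM;
          /* ... create a page table entry mapping 'addr' to 'p' ... */
          return 0;
  }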

ZONE_DEVICE
===========

The `ZONE_DEVICE` facility builds upon `SPARSEMEM_VMEMMAP` to offer
`struct page` `mem_map` services for device driver identified physical
address ranges. The "device" aspect of `ZONE_DEVICE` relates to the fact
that the page objects for these address ranges are never marked online,
and that a reference must be taken against the device, not just the page,
to keep the memory pinned for active use. `ZONE_DEVICE`, via
:c:func:`devm_memremap_pages`, performs just enough memory hotplug to
turn on :c:func:`pfn_to_page`, :c:func:`page_to_pfn`, and
:c:func:`get_user_pages` service for the given range of pfns. Since the
page reference count never drops below 1 the page is never tracked as
free memory and the page's `struct list_head lru` space is repurposed
for back referencing to the host device / driver that mapped the memory.
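
As an illustration, a driver could request `struct page` coverage for a
device range roughly as follows. This is a hypothetical sketch:
`example_map_device_memory` is an invented name, and the
`struct dev_pagemap` field layout used here (`range`, `nr_range`,
`type`) follows recent kernels and differs in older ones::

  static int example_map_device_memory(struct device *dev,
                                       phys_addr_t base,
                                       resource_size_t size)
  {
          struct dev_pagemap *pgmap;
          void *addr;

          pgmap = devm_kzalloc(dev, sizeof(*pgmap), GFP_KERNEL);
          if (!pgmap)
                  return -ENOMEM;

          pgmap->type = MEMORY_DEVICE_GENERIC;
          pgmap->range.start = base;
          pgmap->range.end = base + size - 1;
          pgmap->nr_range = 1;

          /* performs just enough hotplug to create the struct pages */
          addr = devm_memremap_pages(dev, pgmap);
          if (IS_ERR(addr))
                  return PTR_ERR(addr);

          /* pfn_to_page()/get_user_pages() now work for this range */
          return 0;
  }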

While `SPARSEMEM` presents memory as a collection of sections,
optionally collected into memory blocks, `ZONE_DEVICE` users have a need
for smaller granularity of populating the `mem_map`. Given that
`ZONE_DEVICE` memory is never marked online, it is never subject to its
memory ranges being exposed through the sysfs memory hotplug API on
memory block boundaries. The implementation relies on this lack of
user-API constraint to allow sub-section sized memory ranges to be
specified to :c:func:`arch_add_memory`, the top-half of memory hotplug.
Sub-section support allows for 2MB as the cross-arch common alignment
granularity for :c:func:`devm_memremap_pages`.

The users of `ZONE_DEVICE` are:

* pmem: Map platform persistent memory to be used as a direct-I/O target
  via DAX mappings.

* hmm: Extend `ZONE_DEVICE` with `->page_fault()` and `->page_free()`
  event callbacks to allow a device driver to coordinate memory management
  events related to device memory, typically GPU memory. See
  Documentation/vm/hmm.rst.

* p2pdma: Create `struct page` objects to allow peer devices in a
  PCIe topology to coordinate direct-DMA operations between themselves,
  i.e. bypass host memory.