
Searched full:memory (Results 1 – 25 of 1749) sorted by relevance

/Documentation/admin-guide/mm/
memory-hotplug.rst
2 Memory Hot(Un)Plug
5 This document describes generic Linux support for memory hot(un)plug with
13 Memory hot(un)plug allows for increasing and decreasing the size of physical
14 memory available to a machine at runtime. In the simplest case, it consists of
18 Memory hot(un)plug is used for various purposes:
20 - The physical memory available to a machine can be adjusted at runtime, up- or
21 downgrading the memory capacity. This dynamic memory resizing, sometimes
26 example is replacing failing memory modules.
28 - Reducing energy consumption either by physically unplugging memory modules or
29 by logically unplugging (parts of) memory modules from Linux.
[all …]
concepts.rst
5 The memory management in Linux is a complex system that evolved over the
7 systems from MMU-less microcontrollers to supercomputers. The memory
16 Virtual Memory Primer
19 The physical memory in a computer system is a limited resource and
20 even for systems that support memory hotplug there is a hard limit on
21 the amount of memory that can be installed. The physical memory is not
27 All this makes dealing directly with physical memory quite complex and
28 to avoid this complexity a concept of virtual memory was developed.
30 The virtual memory abstracts the details of physical memory from the
32 physical memory (demand paging) and provides a mechanism for the
[all …]
numaperf.rst
2 NUMA Memory Performance
8 Some platforms may have multiple types of memory attached to a compute
9 node. These disparate memory ranges may share some characteristics, such
13 A system supports such heterogeneous memory by grouping each memory type
15 characteristics. Some memory may share the same node as a CPU, and others
16 are provided as memory only nodes. While memory only nodes do not provide
19 nodes with local memory and a memory only node for each of compute node::
30 A "memory initiator" is a node containing one or more devices such as
31 CPUs or separate memory I/O devices that can initiate memory requests.
32 A "memory target" is a node containing one or more physical address
[all …]
/Documentation/devicetree/bindings/memory-controllers/fsl/
fsl,ddr.yaml
4 $id: http://devicetree.org/schemas/memory-controllers/fsl/fsl,ddr.yaml#
7 title: Freescale DDR memory controller
15 pattern: "^memory-controller@[0-9a-f]+$"
21 - fsl,qoriq-memory-controller-v4.4
22 - fsl,qoriq-memory-controller-v4.5
23 - fsl,qoriq-memory-controller-v4.7
24 - fsl,qoriq-memory-controller-v5.0
25 - const: fsl,qoriq-memory-controller
27 - fsl,bsc9132-memory-controller
28 - fsl,mpc8536-memory-controller
[all …]
/Documentation/ABI/testing/
sysfs-devices-memory
1 What: /sys/devices/system/memory
5 The /sys/devices/system/memory contains a snapshot of the
6 internal state of the kernel memory blocks. Files could be
9 Users: hotplug memory add/remove tools
12 What: /sys/devices/system/memory/memoryX/removable
16 The file /sys/devices/system/memory/memoryX/removable is a
17 legacy interface used to indicate whether a memory block is
19 "1" if and only if the kernel supports memory offlining.
20 Users: hotplug memory remove tools
24 What: /sys/devices/system/memory/memoryX/phys_device
[all …]
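
The block-level view of RAM that this ABI entry documents can be inspected from userspace with plain file reads. A minimal sketch, assuming a kernel built with memory hotplug support and that a block named memory0 exists (both are assumptions, not guaranteed by the entry above)::

    #include <stdio.h>

    static void dump_file(const char *path)
    {
        char buf[64];
        FILE *f = fopen(path, "r");

        if (!f) {
            perror(path);
            return;
        }
        if (fgets(buf, sizeof(buf), f))
            printf("%s: %s", path, buf);
        fclose(f);
    }

    int main(void)
    {
        /* size of one memory block, reported as a hex string */
        dump_file("/sys/devices/system/memory/block_size_bytes");
        /* legacy "removable" hint for the first memory block, if present */
        dump_file("/sys/devices/system/memory/memory0/removable");
        return 0;
    }
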
sysfs-kernel-mm-memory-tiers
3 Contact: Linux memory management mailing list <linux-mm@kvack.org>
4 Description: A collection of all the memory tiers allocated.
6 Individual memory tier details are contained in subdirectories
7 named by the abstract distance of the memory tier.
15 Contact: Linux memory management mailing list <linux-mm@kvack.org>
16 Description: Directory with details of a specific memory tier
19 memory tier, memtierN, where N is derived based on abstract distance.
21 A smaller value of N implies a higher (faster) memory tier in the
24 nodelist: NUMA nodes that are part of this memory tier.
/Documentation/mm/
memory-model.rst
4 Physical Memory Model
7 Physical memory in a system may be addressed in different ways. The
8 simplest case is when the physical memory starts at address 0 and
13 different memory banks are attached to different CPUs.
15 Linux abstracts this diversity using one of the two memory models:
17 memory models it supports, what the default memory model is and
20 All the memory models track the status of physical page frames using
23 Regardless of the selected memory model, there exists one-to-one
27 Each memory model defines :c:func:`pfn_to_page` and :c:func:`page_to_pfn`
34 The simplest memory model is FLATMEM. This model is suitable for
[all …]
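
The pfn_to_page()/page_to_pfn() helpers named in this entry are the interface each memory model must provide. A hedged kernel-side sketch of the round trip; the demo function itself is hypothetical::

    #include <linux/gfp.h>
    #include <linux/mm.h>

    static void pfn_round_trip_demo(void)
    {
        struct page *page = alloc_page(GFP_KERNEL);
        unsigned long pfn;

        if (!page)
            return;

        pfn = page_to_pfn(page);           /* struct page -> page frame number */
        WARN_ON(pfn_to_page(pfn) != page); /* and back, regardless of memory model */

        __free_page(page);
    }
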
numa.rst
12 or more CPUs, local memory, and/or IO buses. For brevity and to
26 Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible
30 Memory access time and effective memory bandwidth varies depending on how far
31 away the cell containing the CPU or IO bus making the memory access is from the
32 cell containing the target memory. For example, access to memory by CPUs
34 bandwidths than accesses to memory on other, remote cells. NUMA platforms
39 memory bandwidth. However, to achieve scalable memory bandwidth, system and
40 application software must arrange for a large majority of the memory references
41 [cache misses] to be to "local" memory--memory on the same cell, if any--or
42 to the closest cell with memory.
[all …]
hmm.rst
2 Heterogeneous Memory Management (HMM)
5 Provide infrastructure and helpers to integrate non-conventional memory (device
6 memory like GPU on board memory) into regular kernel path, with the cornerstone
7 of this being specialized struct page for such memory (see sections 5 to 7 of
10 HMM also provides optional helpers for SVM (Share Virtual Memory), i.e.,
18 related to using device specific memory allocators. In the second section, I
22 fifth section deals with how device memory is represented inside the kernel.
28 Problems of using a device specific memory allocator
31 Devices with a large amount of on board memory (several gigabytes) like GPUs
32 have historically managed their memory through dedicated driver specific APIs.
[all …]
/Documentation/arch/arm64/
kdump.rst
2 crashkernel memory reservation on arm64
9 reserved memory is needed to pre-load the kdump kernel and boot such
12 That reserved memory for kdump is adapted to be able to minimally
19 Through the kernel parameters below, memory can be reserved accordingly
21 large chunk of memory can be found. The low memory reservation needs to
22 be considered if the crashkernel is reserved from the high memory area.
28 Low memory and high memory
31 For kdump reservations, low memory is the memory area under a specific
34 vmcore dumping can be ignored. On arm64, the low memory upper bound is
37 whole system RAM is low memory. Outside of the low memory described
[all …]
/Documentation/admin-guide/cgroup-v1/
memory.rst
2 Memory Resource Controller
12 The Memory Resource Controller has generically been referred to as the
13 memory controller in this document. Do not confuse memory controller
14 used here with the memory controller that is used in hardware.
17 When we mention a cgroup (cgroupfs's directory) with memory controller,
18 we call it "memory cgroup". When you see git-log and source code, you'll
22 Benefits and Purpose of the memory controller
25 The memory controller isolates the memory behaviour of a group of tasks
27 uses of the memory controller. The memory controller can be used to
30 Memory-hungry applications can be isolated and limited to a smaller
[all …]
/Documentation/core-api/
memory-hotplug.rst
4 Memory hotplug
7 Memory hotplug event notifier
12 There are six types of notification defined in ``include/linux/memory.h``:
15 Generated before new memory becomes available in order to be able to
16 prepare subsystems to handle memory. The page allocator is still unable
17 to allocate from the new memory.
23 Generated when memory has been successfully brought online. The callback may
24 allocate pages from the new memory.
27 Generated to begin the process of offlining memory. Allocations are no
28 longer possible from the memory but some of the memory to be offlined
[all …]
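
The notifier chain described in this entry is consumed through register_memory_notifier() and the MEM_* action codes from include/linux/memory.h. A minimal sketch that only logs a few of the six events; the callback and its messages are illustrative, not taken from the document::

    #include <linux/init.h>
    #include <linux/memory.h>
    #include <linux/notifier.h>
    #include <linux/printk.h>

    static int demo_mem_callback(struct notifier_block *nb,
                                 unsigned long action, void *arg)
    {
        switch (action) {
        case MEM_GOING_ONLINE:
            /* New memory announced; the page allocator cannot use it yet. */
            pr_info("preparing for memory going online\n");
            break;
        case MEM_ONLINE:
            /* Memory is online; allocating from it is now allowed. */
            pr_info("memory is online\n");
            break;
        case MEM_GOING_OFFLINE:
            /* Offlining has begun; no new allocations from this memory. */
            pr_info("memory is going offline\n");
            break;
        }
        return NOTIFY_OK;
    }

    static struct notifier_block demo_mem_nb = {
        .notifier_call = demo_mem_callback,
    };

    static int __init demo_hotplug_init(void)
    {
        return register_memory_notifier(&demo_mem_nb);
    }
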
memory-allocation.rst
4 Memory Allocation Guide
7 Linux provides a variety of APIs for memory allocation. You can
14 Most of the memory allocation APIs use GFP flags to express how that
15 memory should be allocated. The GFP acronym stands for "get free
16 pages", the underlying memory allocation function.
19 makes the question "How should I allocate memory?" not that easy to
32 The GFP flags control the allocator's behavior. They tell what memory
34 memory, whether the memory can be accessed by userspace, etc. The
39 * Most of the time ``GFP_KERNEL`` is what you need. Memory for the
40 kernel data structures, DMAable memory, inode cache, all these and
[all …]
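
As the guide says, GFP_KERNEL is the flag to reach for in ordinary process context. A short, hedged sketch pairing the allocation with its matching free; the structure is hypothetical::

    #include <linux/slab.h>

    struct demo_record {            /* hypothetical example structure */
        int id;
        char name[32];
    };

    static struct demo_record *demo_alloc(void)
    {
        /* GFP_KERNEL may sleep and start reclaim: process context only. */
        return kzalloc(sizeof(struct demo_record), GFP_KERNEL);
    }

    static void demo_free(struct demo_record *rec)
    {
        kfree(rec);
    }
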
swiotlb.rst
7 swiotlb is a memory buffer allocator used by the Linux kernel DMA layer. It is
8 typically used when a device doing DMA can't directly access the target memory
10 the DMA layer calls swiotlb to allocate a temporary memory buffer that conforms
11 to the limitations. The DMA is done to/from this temporary memory buffer, and
13 memory buffer. This approach is generically called "bounce buffering", and the
14 temporary memory buffer is called a "bounce buffer".
25 memory buffer, doing bounce buffering is slower than doing DMA directly to the
26 original memory buffer, and it consumes more CPU resources. So it is used only
32 limitations. As physical memory sizes grew beyond 4 GiB, some devices could
33 only provide 32-bit DMA addresses. By allocating bounce buffer memory below
[all …]
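
Drivers do not call swiotlb directly; they use the generic DMA API, and the DMA layer bounces the buffer only when the device cannot reach it. A hedged sketch of a single streaming mapping (the function and buffer names are illustrative)::

    #include <linux/dma-mapping.h>
    #include <linux/errno.h>

    static int demo_dma_to_device(struct device *dev, void *buf, size_t len)
    {
        dma_addr_t handle;

        handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, handle))
            return -ENOMEM;

        /* ... hand 'handle' to the hardware and wait for the transfer ... */

        dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
        return 0;
    }
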
/Documentation/userspace-api/media/v4l/
dev-mem2mem.rst
6 Video Memory-To-Memory Interface
9 A V4L2 memory-to-memory device can compress, decompress, transform, or
10 otherwise convert video data from one format into another format, in memory.
11 Such memory-to-memory devices set the ``V4L2_CAP_VIDEO_M2M`` or
12 ``V4L2_CAP_VIDEO_M2M_MPLANE`` capability. Examples of memory-to-memory
16 A memory-to-memory video node acts just like a normal video node, but it
17 supports both output (sending frames from memory to the hardware)
19 memory) stream I/O. An application will have to set up the stream I/O for
23 Memory-to-memory devices function as a shared resource: you can
32 One of the most common memory-to-memory device is the codec. Codecs
[all …]
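
Whether a given video node is such a memory-to-memory device can be checked from userspace by querying its capabilities. A minimal sketch; /dev/video0 is an assumed device path::

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        struct v4l2_capability cap;
        int fd = open("/dev/video0", O_RDWR);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        memset(&cap, 0, sizeof(cap));
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0 &&
            (cap.device_caps & (V4L2_CAP_VIDEO_M2M | V4L2_CAP_VIDEO_M2M_MPLANE)))
            printf("%s is a memory-to-memory device\n", (char *)cap.card);

        close(fd);
        return 0;
    }
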
/Documentation/devicetree/bindings/firmware/
gunyah-cma-mem.yaml
7 title: Contiguous memory allocator for Virtual Machines
14 gunyah-cma-mem is a CMA memory manager that allows VMMs to use
15 contiguous memory to backup Virtual Machines running on Gunyah. These
17 like memory encryption.
23 memory-region:
27 Describes the specific reserved memory region that this allocator
28 will allocate memory from for a Virtual Machine. Refer to
29 Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
32 memory-region-names:
35 - description: Name of the memory-region to be used by VMM for operation
[all …]
/Documentation/devicetree/bindings/soc/fsl/
fsl,qman-fqd.yaml
7 title: QMan Private Memory Nodes
13 QMan requires two contiguous range of physical memory used for the backing store
15 This memory is reserved/allocated as a node under the /reserved-memory node.
17 BMan requires a contiguous range of physical memory used for the backing store
18 for BMan Free Buffer Proxy Records (FBPR). This memory is reserved/allocated as
19 a node under the /reserved-memory node.
21 The QMan FQD memory node must be named "qman-fqd"
22 The QMan PFDR memory node must be named "qman-pfdr"
23 The BMan FBPR memory node must be named "bman-fbpr"
25 The following constraints are relevant to the FQD and PFDR private memory:
[all …]
/Documentation/devicetree/bindings/reserved-memory/
xen,shared-memory.txt
1 * Xen hypervisor reserved-memory binding
3 Expose one or more memory regions as reserved-memory to the guest
5 to be a shared memory area across multiple virtual machines for
8 For each of these pre-shared memory regions, a range is exposed under
9 the /reserved-memory node as a child node. Each range sub-node is named
13 compatible = "xen,shared-memory-v1"
16 the base guest physical address and size of the shared memory region
20 memory region used for the mapping in the borrower VM.
23 a string that identifies the shared memory region as specified in
/Documentation/admin-guide/mm/damon/
reclaim.rst
8 be used for proactive and lightweight reclamation under light memory pressure.
10 to be selectively used for different levels of memory pressure and requirements.
15 On general memory over-committed systems, proactively reclaiming cold pages
16 helps save memory and reduce latency spikes incurred by the direct
20 Free Pages Reporting [3]_ based memory over-commit virtualization systems are
22 memory to host, and the host reallocates the reported memory to other guests.
23 As a result, the memory of the systems is fully utilized. However, the
24 guests could be not so memory-frugal, mainly because some kernel subsystems and
25 user-space applications are designed to use as much memory as available. Then,
26 guests could report only a small amount of memory as free to the host, resulting in
[all …]
/Documentation/arch/powerpc/
firmware-assisted-dump.rst
14 - Fadump uses the same firmware interfaces and memory reservation model
16 - Unlike phyp dump, FADump exports the memory dump through /proc/vmcore
21 - Unlike phyp dump, FADump allows user to release all the memory reserved
35 - Once the dump is copied out, the memory that held the dump
44 - The first kernel registers the sections of memory with the
46 These registered sections of memory are reserved by the first
50 low memory regions (boot memory) from source to destination area.
54 The term 'boot memory' means size of the low memory chunk
56 booted with restricted memory. By default, the boot memory
58 Alternatively, user can also specify boot memory size
[all …]
/Documentation/devicetree/bindings/memory-controllers/
nuvoton,npcm-memory-controller.yaml
4 $id: http://devicetree.org/schemas/memory-controllers/nuvoton,npcm-memory-controller.yaml#
7 title: Nuvoton NPCM Memory Controller
14 The Nuvoton BMC SoC supports DDR4 memory with or without ECC (error correction
17 The memory controller supports single bit error correction, double bit error
18 detection (in-line ECC in which a section (1/8th) of the memory device used to
21 Note, the bootloader must configure ECC mode for the memory controller.
26 - nuvoton,npcm750-memory-controller
27 - nuvoton,npcm845-memory-controller
46 mc: memory-controller@f0824000 {
47 compatible = "nuvoton,npcm750-memory-controller";
nvidia,tegra210-emc.yaml
4 $id: http://devicetree.org/schemas/memory-controllers/nvidia,tegra210-emc.yaml#
7 title: NVIDIA Tegra210 SoC External Memory Controller
15 sent from the memory controller.
26 - description: external memory clock
36 memory-region:
39 phandle to a reserved memory region describing the table of EMC
42 nvidia,memory-controller:
45 phandle of the memory controller node
52 - nvidia,memory-controller
61 reserved-memory {
[all …]
/Documentation/dev-tools/
kmemleak.rst
1 Kernel Memory Leak Detector
4 Kmemleak provides a way of detecting possible kernel memory leaks in a
9 Valgrind tool (``memcheck --leak-check``) to detect the memory leaks in
16 thread scans the memory every 10 minutes (by default) and prints the
22 To display the details of all the possible scanned memory leaks::
26 To trigger an intermediate memory scan::
30 To clear the list of all current possible memory leaks::
41 Memory scanning parameters can be modified at run-time by writing to the
51 start the automatic memory scanning thread (default)
53 stop the automatic memory scanning thread
[all …]
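
Besides the scanning controls referenced above, kmemleak offers an in-kernel annotation API (include/linux/kmemleak.h) for objects whose only reference is invisible to the scanner. A hedged sketch using kmemleak_not_leak(); the wrapper function is hypothetical::

    #include <linux/kmemleak.h>
    #include <linux/slab.h>

    static void *demo_alloc_hidden(size_t len)
    {
        void *obj = kmalloc(len, GFP_KERNEL);

        /*
         * The only reference to obj will live where the scanner cannot
         * see it (e.g. a device register), so tell kmemleak not to
         * report it as a leak.
         */
        if (obj)
            kmemleak_not_leak(obj);
        return obj;
    }
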
/Documentation/devicetree/bindings/pmem/
pmem-region.txt
1 Device-tree bindings for persistent memory regions
4 Persistent memory refers to a class of memory devices that are:
6 a) Usable as main system memory (i.e. cacheable), and
9 Given b) it is best to think of persistent memory as a kind of memory mapped
11 persistent regions separately to the normal memory pool. To aid with that this
13 memory regions exist inside the physical address space.
24 range should be mappable as normal system memory would be
36 backed by non-persistent memory. This lets the OS know that it
41 is backed by non-volatile memory.
48 * 0x5000 to 0x5fff that is backed by non-volatile memory.
[all …]
/Documentation/driver-api/
ntb.rst
6 the separate memory systems of two or more computers to the same PCI-Express
8 registers and memory translation windows, as well as non common features like
15 Memory windows allow translated read and write access to the peer memory.
38 The primary purpose of NTB is to share some piece of memory between at least two
40 mainly used to perform the proper memory window initialization. Typically
41 there are two types of memory window interfaces supported by the NTB API:
48 Memory: Local NTB Port: Peer NTB Port: Peer MMIO:
51 | memory | _v____________ | ______________
52 | (addr) |<======| MW xlat addr |<====| MW base addr |<== memory-mapped IO
55 So a typical scenario of the first type of memory window initialization looks:
[all …]
