Searched +full:memory +full:- +full:to +full:- +full:memory (Results 1 – 25 of 1032) sorted by relevance
/Documentation/admin-guide/mm/

D  memory-hotplug.rst
    2: Memory Hot(Un)Plug
    5: This document describes generic Linux support for memory hot(un)plug with
    13: Memory hot(un)plug allows for increasing and decreasing the size of physical
    14: memory available to a machine at runtime. In the simplest case, it consists of
    18: Memory hot(un)plug is used for various purposes:
    20: - The physical memory available to a machine can be adjusted at runtime, up- or
    21: downgrading the memory capacity. This dynamic memory resizing, sometimes
    22: referred to as "capacity on demand", is frequently used with virtual machines
    25: - Replacing hardware, such as DIMMs or whole NUMA nodes, without downtime. One
    26: example is replacing failing memory modules.
    [all …]

D  concepts.rst
    5: The memory management in Linux is a complex system that evolved over the
    6: years and included more and more functionality to support a variety of
    7: systems from MMU-less microcontrollers to supercomputers. The memory
    12: address to a physical address.
    16: Virtual Memory Primer
    19: The physical memory in a computer system is a limited resource and
    20: even for systems that support memory hotplug there is a hard limit on
    21: the amount of memory that can be installed. The physical memory is not
    27: All this makes dealing directly with physical memory quite complex and
    28: to avoid this complexity a concept of virtual memory was developed.
    [all …]

D  numaperf.rst
    2: NUMA Memory Performance
    8: Some platforms may have multiple types of memory attached to a compute
    9: node. These disparate memory ranges may share some characteristics, such
    13: A system supports such heterogeneous memory by grouping each memory type
    15: characteristics. Some memory may share the same node as a CPU, and others
    16: are provided as memory only nodes. While memory only nodes do not provide
    17: CPUs, they may still be local to one or more compute nodes relative to
    19: nodes with local memory and a memory only node for each of compute node::
    21: +------------------+ +------------------+
    22: | Compute Node 0 +-----+ Compute Node 1 |
    [all …]

D  numa_memory_policy.rst
    2: NUMA Memory Policy
    5: What is NUMA Memory Policy?
    8: In the Linux kernel, "memory policy" determines from which node the kernel will
    9: allocate memory in a NUMA system or in an emulated NUMA system. Linux has
    10: supported platforms with Non-Uniform Memory Access architectures since 2.4.?.
    11: The current memory policy support was added to Linux 2.6 around May 2004. This
    12: document attempts to describe the concepts and APIs of the 2.6 memory policy
    15: Memory policies should not be confused with cpusets
    16: (``Documentation/admin-guide/cgroup-v1/cpusets.rst``)
    18: memory may be allocated by a set of processes. Memory policies are a
    [all …]

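As a quick aside to this result, the policy API it describes is exercised from userspace with set_mempolicy(2). A minimal sketch, assuming libnuma's <numaif.h> wrapper (link with -lnuma) and a single-word nodemask; binding to node 0 is arbitrary:

    /* Bind all future allocations of the calling task to NUMA node 0.
     * Illustrative only: error handling is minimal and the node choice
     * is made up for the example. */
    #include <numaif.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
            unsigned long nodemask = 1UL;   /* bit 0 set -> node 0 only */

            /* MPOL_BIND: allocations must come from nodes in the mask. */
            if (set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8) != 0) {
                    perror("set_mempolicy");
                    return EXIT_FAILURE;
            }

            /* Pages touched from here on are allocated from node 0. */
            char *buf = malloc(1 << 20);
            if (buf)
                    memset(buf, 0, 1 << 20);
            free(buf);
            return EXIT_SUCCESS;
    }
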
/Documentation/mm/

D  memory-model.rst
    1: .. SPDX-License-Identifier: GPL-2.0
    4: Physical Memory Model
    7: Physical memory in a system may be addressed in different ways. The
    8: simplest case is when the physical memory starts at address 0 and
    9: spans a contiguous range up to the maximal address. It could be,
    13: different memory banks are attached to different CPUs.
    15: Linux abstracts this diversity using one of the two memory models:
    17: memory models it supports, what the default memory model is and
    18: whether it is possible to manually override that default.
    20: All the memory models track the status of physical page frames using
    [all …]

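As an aside, the "physical page frames" bookkeeping this document covers is visible through the pfn_to_page()/page_to_pfn() helpers. A hedged in-kernel sketch (module boilerplate is illustrative only):

    /* Allocate one page and round-trip it through the pfn helpers that the
     * memory model implements. */
    #include <linux/module.h>
    #include <linux/mm.h>
    #include <linux/gfp.h>
    #include <linux/pfn.h>

    static int __init pfn_demo_init(void)
    {
            struct page *page = alloc_page(GFP_KERNEL);
            unsigned long pfn;

            if (!page)
                    return -ENOMEM;

            pfn = page_to_pfn(page);        /* frame number behind this struct page */
            pr_info("pfn %lu maps to physical address 0x%llx\n",
                    pfn, (unsigned long long)PFN_PHYS(pfn));

            /* The reverse mapping must yield the same struct page. */
            WARN_ON(pfn_to_page(pfn) != page);

            __free_page(page);
            return 0;
    }

    static void __exit pfn_demo_exit(void)
    {
    }

    module_init(pfn_demo_init);
    module_exit(pfn_demo_exit);
    MODULE_LICENSE("GPL");
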
D  numa.rst
    12: or more CPUs, local memory, and/or IO buses. For brevity and to
    17: Each of the 'cells' may be viewed as an SMP [symmetric multi-processor] subset
    18: of the system--although some components necessary for a stand-alone SMP system
    20: connected together with some sort of system interconnect--e.g., a crossbar or
    21: point-to-point link are common types of NUMA system interconnects. Both of
    22: these types of interconnects can be aggregated to create NUMA platforms with
    26: Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible
    27: to and accessible from any CPU attached to any cell and cache coherency
    30: Memory access time and effective memory bandwidth varies depending on how far
    31: away the cell containing the CPU or IO bus making the memory access is from the
    [all …]

D  hmm.rst
    2: Heterogeneous Memory Management (HMM)
    5: Provide infrastructure and helpers to integrate non-conventional memory (device
    6: memory like GPU on board memory) into regular kernel path, with the cornerstone
    7: of this being specialized struct page for such memory (see sections 5 to 7 of
    10: HMM also provides optional helpers for SVM (Share Virtual Memory), i.e.,
    11: allowing a device to transparently access program addresses coherently with
    13: for the device. This is becoming mandatory to simplify the use of advanced
    14: heterogeneous computing where GPU, DSP, or FPGA are used to perform various
    18: related to using device specific memory allocators. In the second section, I
    19: expose the hardware limitations that are inherent to many platforms. The third
    [all …]

D  page_tables.rst
    1: .. SPDX-License-Identifier: GPL-2.0
    7: Paged virtual memory was invented along with virtual memory as a concept in
    9: virtual memory. The feature migrated to newer computers and became a de facto
    10: feature of all Unix-like systems as time went by. In 1985 the feature was
    14: as seen on the external memory bus.
    18: map this to the restrictions of the hardware.
    20: The physical address corresponding to the virtual address is often referenced
    22: is the physical address of the page (as seen on the external memory bus)
    25: Physical memory address 0 will be *pfn 0* and the highest pfn will be
    26: the last page of physical memory the external address bus of the CPU can
    [all …]

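The pfn arithmetic mentioned in this excerpt is just the physical address divided by the page size; a small userspace illustration (the physical address is made up for the example):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            long page_size = sysconf(_SC_PAGESIZE);         /* e.g. 4096 */
            unsigned long long phys = 0x12345000ULL;        /* hypothetical address */
            unsigned long long pfn = phys / (unsigned long long)page_size;

            /* pfn 0 is physical address 0, pfn 1 starts at page_size, ... */
            printf("phys 0x%llx -> pfn %llu\n", phys, pfn);
            return 0;
    }
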
/Documentation/arch/arm64/

D  kdump.rst
    2: crashkernel memory reservation on arm64
    7: Kdump mechanism is used to capture a corrupted kernel vmcore so that
    8: it can be subsequently analyzed. In order to do this, a preliminarily
    9: reserved memory is needed to pre-load the kdump kernel and boot such
    12: That reserved memory for kdump is adapted to be able to minimally
    19: Through the kernel parameters below, memory can be reserved accordingly
    21: large chunk of memory can be found. The low memory reservation needs to
    22: be considered if the crashkernel is reserved from the high memory area.
    24: - crashkernel=size@offset
    25: - crashkernel=size
    [all …]

/Documentation/admin-guide/cgroup-v1/

D  memory.rst
    2: Memory Resource Controller
    8: here but make sure to check the current code if you need a deeper
    12: The Memory Resource Controller has generically been referred to as the
    13: memory controller in this document. Do not confuse memory controller
    14: used here with the memory controller that is used in hardware.
    17: When we mention a cgroup (cgroupfs's directory) with memory controller,
    18: we call it "memory cgroup". When you see git-log and source code, you'll
    19: see patch's title and function names tend to use "memcg".
    22: Benefits and Purpose of the memory controller
    25: The memory controller isolates the memory behaviour of a group of tasks
    [all …]

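For context, the controller described here is driven by writing to cgroupfs files. A minimal sketch, assuming the v1 hierarchy is mounted at /sys/fs/cgroup/memory and using a made-up group name "demo":

    /* Create a memory cgroup, cap it at 512 MiB, and move this process
     * into it. Paths and the limit are assumptions for illustration. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static int write_file(const char *path, const char *val)
    {
            int fd = open(path, O_WRONLY);

            if (fd < 0)
                    return -1;
            if (write(fd, val, strlen(val)) < 0) {
                    close(fd);
                    return -1;
            }
            return close(fd);
    }

    int main(void)
    {
            char pid[32];

            if (mkdir("/sys/fs/cgroup/memory/demo", 0755) && errno != EEXIST)
                    return 1;
            write_file("/sys/fs/cgroup/memory/demo/memory.limit_in_bytes",
                       "536870912");
            snprintf(pid, sizeof(pid), "%d", getpid());
            write_file("/sys/fs/cgroup/memory/demo/tasks", pid);
            return 0;
    }
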
/Documentation/core-api/

D  memory-hotplug.rst
    4: Memory hotplug
    7: Memory hotplug event notifier
    10: Hotplugging events are sent to a notification queue.
    12: There are six types of notification defined in ``include/linux/memory.h``:
    15: Generated before new memory becomes available in order to be able to
    16: prepare subsystems to handle memory. The page allocator is still unable
    17: to allocate from the new memory.
    23: Generated when memory has been successfully brought online. The callback may
    24: allocate pages from the new memory.
    27: Generated to begin the process of offlining memory. Allocations are no
    [all …]

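A hedged in-kernel sketch of the notifier interface this excerpt describes: a callback registered with register_memory_notifier() runs around online/offline transitions (module boilerplate and the message text are illustrative only):

    #include <linux/module.h>
    #include <linux/memory.h>
    #include <linux/notifier.h>

    static int demo_mem_callback(struct notifier_block *nb,
                                 unsigned long action, void *arg)
    {
            struct memory_notify *mn = arg;

            switch (action) {
            case MEM_GOING_ONLINE:
                    /* Prepare for the range; it is not allocatable yet. */
                    pr_info("going online: start pfn %lx, %lu pages\n",
                            mn->start_pfn, mn->nr_pages);
                    break;
            case MEM_OFFLINE:
                    pr_info("offlined: start pfn %lx\n", mn->start_pfn);
                    break;
            default:
                    break;
            }
            return NOTIFY_OK;
    }

    static struct notifier_block demo_mem_nb = {
            .notifier_call = demo_mem_callback,
    };

    static int __init demo_init(void)
    {
            return register_memory_notifier(&demo_mem_nb);
    }

    static void __exit demo_exit(void)
    {
            unregister_memory_notifier(&demo_mem_nb);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");
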
D  memory-allocation.rst
    4: Memory Allocation Guide
    7: Linux provides a variety of APIs for memory allocation. You can
    11: `alloc_pages`. It is also possible to use more specialized allocators,
    14: Most of the memory allocation APIs use GFP flags to express how that
    15: memory should be allocated. The GFP acronym stands for "get free
    16: pages", the underlying memory allocation function.
    19: makes the question "How should I allocate memory?" not that easy to
    32: The GFP flags control the allocator's behavior. They tell what memory
    33: zones can be used, how hard the allocator should try to find free
    34: memory, whether the memory can be accessed by the userspace etc. The
    [all …]

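A minimal sketch of the GFP-flag distinction the guide explains: GFP_KERNEL may sleep to reclaim memory, GFP_ATOMIC may not (the wrapper names are made up for illustration):

    #include <linux/slab.h>
    #include <linux/gfp.h>

    /* Process context: sleeping is allowed, so GFP_KERNEL is the usual choice. */
    static void *demo_alloc_sleepable(size_t len)
    {
            return kmalloc(len, GFP_KERNEL);
    }

    /* Interrupt or other atomic context: the allocator must not sleep. */
    static void *demo_alloc_atomic(size_t len)
    {
            return kmalloc(len, GFP_ATOMIC);
    }
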
D  swiotlb.rst
    1: .. SPDX-License-Identifier: GPL-2.0
    7: swiotlb is a memory buffer allocator used by the Linux kernel DMA layer. It is
    8: typically used when a device doing DMA can't directly access the target memory
    10: the DMA layer calls swiotlb to allocate a temporary memory buffer that conforms
    11: to the limitations. The DMA is done to/from this temporary memory buffer, and
    13: memory buffer. This approach is generically called "bounce buffering", and the
    14: temporary memory buffer is called a "bounce buffer".
    18: the normal DMA map, unmap, and sync APIs when programming a device to do DMA.
    19: These APIs use the device DMA attributes and kernel-wide settings to determine
    25: memory buffer, doing bounce buffering is slower than doing DMA directly to the
    [all …]

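For orientation, a hedged sketch of the streaming DMA mapping path the document refers to; if the device cannot reach the buffer directly, the DMA layer may transparently bounce through swiotlb ("dev" and "buf" are placeholders):

    #include <linux/dma-mapping.h>
    #include <linux/errno.h>

    static int demo_dma_to_device(struct device *dev, void *buf, size_t len)
    {
            dma_addr_t dma;

            dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
            if (dma_mapping_error(dev, dma))
                    return -ENOMEM;

            /* ... program the device with "dma" and wait for completion ... */

            dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);
            return 0;
    }
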
/Documentation/ABI/testing/

D  sysfs-devices-memory
    1: What: /sys/devices/system/memory
    5: The /sys/devices/system/memory contains a snapshot of the
    6: internal state of the kernel memory blocks. Files could be
    7: added or removed dynamically to represent hot-add/remove
    9: Users: hotplug memory add/remove tools
    10: http://www.ibm.com/developerworks/wikis/display/LinuxP/powerpc-utils
    12: What: /sys/devices/system/memory/memoryX/removable
    16: The file /sys/devices/system/memory/memoryX/removable is a
    17: legacy interface used to indicate whether a memory block is
    18: likely to be offlineable or not. Newer kernel versions return
    [all …]

/Documentation/userspace-api/media/v4l/

D  dev-mem2mem.rst
    1: .. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
    6: Video Memory-To-Memory Interface
    9: A V4L2 memory-to-memory device can compress, decompress, transform, or
    10: otherwise convert video data from one format into another format, in memory.
    11: Such memory-to-memory devices set the ``V4L2_CAP_VIDEO_M2M`` or
    12: ``V4L2_CAP_VIDEO_M2M_MPLANE`` capability. Examples of memory-to-memory
    14: converting from YUV to RGB).
    16: A memory-to-memory video node acts just like a normal video node, but it
    17: supports both output (sending frames from memory to the hardware)
    19: memory) stream I/O. An application will have to setup the stream I/O for
    [all …]

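As an illustration of the capability check this excerpt mentions, a small userspace sketch (the device path /dev/video0 is an assumption):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/videodev2.h>

    int main(void)
    {
            struct v4l2_capability cap;
            int fd = open("/dev/video0", O_RDWR);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            memset(&cap, 0, sizeof(cap));
            if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0 &&
                (cap.device_caps & (V4L2_CAP_VIDEO_M2M | V4L2_CAP_VIDEO_M2M_MPLANE)))
                    printf("%s is a memory-to-memory device\n",
                           (const char *)cap.card);
            close(fd);
            return 0;
    }
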
/Documentation/devicetree/bindings/firmware/

D  gunyah-cma-mem.yaml
    1: # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
    3: ---
    4: $id: http://devicetree.org/schemas/firmware/gunyah-cma-mem.yaml#
    5: $schema: http://devicetree.org/meta-schemas/core.yaml#
    7: title: Contiguous memory allocator for Virtual Machines
    10: - Prakruthi Deepak Heragu <quic_pheragu@quicinc.com>
    11: - Elliot Berman <quic_eberman@quicinc.com>
    14: gunyah-cma-mem is a CMA memory manager that allows VMMs to use
    15: contiguous memory to backup Virtual Machines running on Gunyah. These
    16: regions are pre-defined by the firmware to have special attributes
    [all …]

/Documentation/arch/powerpc/

D  firmware-assisted-dump.rst
    2: Firmware-Assisted Dump
    7: The goal of firmware-assisted dump is to enable the dump of
    8: a crashed system, and to do so from a fully-reset system, and
    9: to minimize the total elapsed time until the system is back
    12: - Firmware-Assisted Dump (FADump) infrastructure is intended to replace
    14: - Fadump uses the same firmware interfaces and memory reservation model
    16: - Unlike phyp dump, FADump exports the memory dump through /proc/vmcore
    19: - Unlike phyp dump, the userspace tool does not need to refer to any sysfs
    21: - Unlike phyp dump, FADump allows the user to release all the memory reserved
    23: - Once enabled through kernel boot parameter, FADump can be
    [all …]

/Documentation/admin-guide/mm/damon/

D  reclaim.rst
    1: .. SPDX-License-Identifier: GPL-2.0
    4: DAMON-based Reclamation
    7: DAMON-based Reclamation (DAMON_RECLAIM) is a static kernel module that is aimed to
    8: be used for proactive and lightweight reclamation under light memory pressure.
    9: It doesn't aim to replace the LRU-list based page_granularity reclamation, but
    10: to be selectively used for different levels of memory pressure and requirements.
    15: On general memory over-committed systems, proactively reclaiming cold pages
    16: helps saving memory and reducing latency spikes incurred by the direct
    20: Free Pages Reporting [3]_ based memory over-commit virtualization systems are
    22: memory to host, and the host reallocates the reported memory to other guests.
    [all …]

/Documentation/dev-tools/

D  kmemleak.rst
    1: Kernel Memory Leak Detector
    4: Kmemleak provides a way of detecting possible kernel memory leaks in a
    5: way similar to a `tracing garbage collector
    9: Valgrind tool (``memcheck --leak-check``) to detect the memory leaks in
    10: user-space applications.
    13: -----
    15: CONFIG_DEBUG_KMEMLEAK in "Kernel hacking" has to be enabled. A kernel
    16: thread scans the memory every 10 minutes (by default) and prints the
    20: # mount -t debugfs nodev /sys/kernel/debug/
    22: To display the details of all the possible scanned memory leaks::
    [all …]

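A hedged in-kernel sketch of the annotation side of kmemleak: when a pointer is intentionally "hidden" (here only its physical address is kept), telling the scanner avoids a false positive (names are made up for illustration):

    #include <linux/slab.h>
    #include <linux/kmemleak.h>
    #include <linux/io.h>
    #include <linux/errno.h>

    static phys_addr_t demo_stash;  /* not scanned as a pointer by kmemleak */

    static int demo_setup(void)
    {
            void *obj = kmalloc(128, GFP_KERNEL);

            if (!obj)
                    return -ENOMEM;

            demo_stash = virt_to_phys(obj); /* pointer value is now hidden */
            kmemleak_not_leak(obj);         /* suppress the would-be report */
            return 0;
    }
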
D  kasan.rst
    1: .. SPDX-License-Identifier: GPL-2.0
    8: --------
    10: Kernel Address Sanitizer (KASAN) is a dynamic memory safety error detector
    11: designed to find out-of-bounds and use-after-free bugs.
    16: 2. Software Tag-Based KASAN
    17: 3. Hardware Tag-Based KASAN
    20: debugging, similar to userspace ASan. This mode is supported on many CPU
    21: architectures, but it has significant performance and memory overheads.
    23: Software Tag-Based KASAN or SW_TAGS KASAN, enabled with CONFIG_KASAN_SW_TAGS,
    24: can be used for both debugging and dogfood testing, similar to userspace HWASan.
    [all …]

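For reference, a sketch of the class of bug KASAN reports: a slab out-of-bounds write one byte past a kmalloc()ed buffer (illustrative only; with CONFIG_KASAN enabled this access would be flagged at runtime):

    #include <linux/slab.h>

    static void demo_oob_write(void)
    {
            char *buf = kmalloc(32, GFP_KERNEL);

            if (!buf)
                    return;

            buf[32] = 'x';  /* one byte past the end: out-of-bounds write */
            kfree(buf);
    }
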
/Documentation/driver-api/

D  ntb.rst
    5: NTB (Non-Transparent Bridge) is a type of PCI-Express bridge chip that connects
    6: the separate memory systems of two or more computers to the same PCI-Express
    8: registers and memory translation windows, as well as non common features like
    9: scratchpad and message registers. Scratchpad registers are read-and-writable
    13: special status bits to make sure the information isn't rewritten by another
    14: peer. Doorbell registers provide a way for peers to send interrupt events.
    15: Memory windows allow translated read and write access to the peer memory.
    21: clients interested in NTB features to discover the NTB devices supported by
    22: hardware drivers. The term "client" is used here to mean an upper layer
    24: is used here to mean a driver for a specific vendor and model of NTB hardware.
    [all …]

D  edac.rst
    5: ----------------------------------------
    7: There are several things to be aware of that aren't at all obvious, like
    8: *sockets*, *socket sets*, *banks*, *rows*, *chip-select rows*, *channels*,
    16: * Memory devices
    18: The individual DRAM chips on a memory stick. These devices commonly
    20: provides the number of bits that the memory controller expects:
    21: typically 72 bits, in order to provide 64 bits + 8 bits of ECC data.
    23: * Memory Stick
    25: A printed circuit board that aggregates multiple memory devices in
    28: called DIMM (Dual Inline Memory Module).
    [all …]

/Documentation/devicetree/bindings/soc/fsl/

D  fsl,qman-fqd.yaml
    1: # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
    3: ---
    4: $id: http://devicetree.org/schemas/soc/fsl/fsl,qman-fqd.yaml#
    5: $schema: http://devicetree.org/meta-schemas/core.yaml#
    7: title: QMan Private Memory Nodes
    10: - Frank Li <Frank.Li@nxp.com>
    13: QMan requires two contiguous ranges of physical memory used for the backing store
    15: This memory is reserved/allocated as a node under the /reserved-memory node.
    17: BMan requires a contiguous range of physical memory used for the backing store
    18: for BMan Free Buffer Proxy Records (FBPR). This memory is reserved/allocated as
    [all …]

/Documentation/devicetree/bindings/pmem/

D  pmem-region.txt
    1: Device-tree bindings for persistent memory regions
    2: -----------------------------------------------------
    4: Persistent memory refers to a class of memory devices that are:
    6: a) Usable as main system memory (i.e. cacheable), and
    9: Given b) it is best to think of persistent memory as a kind of memory mapped
    10: storage device. To ensure data integrity the operating system needs to manage
    11: persistent regions separately to the normal memory pool. To aid with that this
    13: memory regions exist inside the physical address space.
    16: -----------------------------
    19: - compatible = "pmem-region"
    [all …]

/Documentation/devicetree/bindings/reserved-memory/

D  xen,shared-memory.txt
    1: * Xen hypervisor reserved-memory binding
    3: Expose one or more memory regions as reserved-memory to the guest
    5: to be a shared memory area across multiple virtual machines for
    8: For each of these pre-shared memory regions, a range is exposed under
    9: the /reserved-memory node as a child node. Each range sub-node is named
    10: xen-shmem@<address> and has the following properties:
    12: - compatible:
    13: compatible = "xen,shared-memory-v1"
    15: - reg:
    16: the base guest physical address and size of the shared memory region
    [all …]
