Searched +full:in +full:- +full:memory (Results 1 – 25 of 1022) sorted by relevance
| /Documentation/admin-guide/mm/ |
| D | memory-hotplug.rst | Memory Hot(Un)Plug … This document describes generic Linux support for memory hot(un)plug with … Memory hot(un)plug allows for increasing and decreasing the size of physical memory available to a machine at runtime. In the simplest case, it consists of … Memory hot(un)plug is used for various purposes: … - The physical memory available to a machine can be adjusted at runtime, up- or downgrading the memory capacity. This dynamic memory resizing, sometimes … - Replacing hardware, such as DIMMs or whole NUMA nodes, without downtime. One example is replacing failing memory modules. … - Reducing energy consumption either by physically unplugging memory modules or [all …]
|
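The memory-hotplug.rst excerpt above is about resizing physical memory at runtime. From userspace this is normally driven through the sysfs memory-block interface (/sys/devices/system/memory/memoryN/state, not shown in the truncated excerpt); a minimal C sketch, with the block index 42 chosen purely for illustration::

  /* Offline one memory block and bring it back online via sysfs.
   * Sketch only: the block number is made up; offlining can fail if the
   * block contains unmovable pages. Needs root. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  static int write_state(const char *path, const char *state)
  {
      int fd = open(path, O_WRONLY);

      if (fd < 0) {
          perror(path);
          return -1;
      }
      if (write(fd, state, strlen(state)) < 0) {
          perror("write");
          close(fd);
          return -1;
      }
      close(fd);
      return 0;
  }

  int main(void)
  {
      const char *blk = "/sys/devices/system/memory/memory42/state";

      if (write_state(blk, "offline"))
          return 1;
      return write_state(blk, "online") ? 1 : 0;
  }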
| D | concepts.rst | The memory management in Linux is a complex system that evolved over the … systems from MMU-less microcontrollers to supercomputers. The memory … Virtual Memory Primer … The physical memory in a computer system is a limited resource and even for systems that support memory hotplug there is a hard limit on the amount of memory that can be installed. The physical memory is not … All this makes dealing directly with physical memory quite complex and to avoid this complexity a concept of virtual memory was developed. … The virtual memory abstracts the details of physical memory from the application software, allows to keep only needed information in the [all …]
|
| D | numaperf.rst | NUMA Memory Performance … Some platforms may have multiple types of memory attached to a compute node. These disparate memory ranges may share some characteristics, such … A system supports such heterogeneous memory by grouping each memory type … characteristics. Some memory may share the same node as a CPU, and others are provided as memory only nodes. While memory only nodes do not provide … nodes with local memory and a memory only node for each of compute node:: … (ASCII diagram of Compute Node 0 linked to Compute Node 1) [all …]
|
| D | index.rst | Memory Management … Linux memory management subsystem is responsible, as the name implies, for managing the memory in the system. This includes implementation of virtual memory and demand paging, memory allocation both for kernel … Linux memory management is a complex system with many configurable … are described in Documentation/admin-guide/sysctl/vm.rst and in `man 5 proc`_. … .. _man 5 proc: http://man7.org/linux/man-pages/man5/proc.5.html … Linux memory management has its own jargon and if you are not yet familiar with it, consider reading Documentation/admin-guide/mm/concepts.rst. … Here we document in detail how to interact with various mechanisms in [all …]
|
| D | numa_memory_policy.rst | NUMA Memory Policy … What is NUMA Memory Policy? … In the Linux kernel, "memory policy" determines from which node the kernel will allocate memory in a NUMA system or in an emulated NUMA system. Linux has supported platforms with Non-Uniform Memory Access architectures since 2.4.?. The current memory policy support was added to Linux 2.6 around May 2004. This document attempts to describe the concepts and APIs of the 2.6 memory policy … Memory policies should not be confused with cpusets (``Documentation/admin-guide/cgroup-v1/cpusets.rst``) … memory may be allocated by a set of processes. Memory policies are a [all …]
|
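The numa_memory_policy.rst excerpt above is cut off before it reaches the APIs. The policies it describes are applied from userspace with set_mempolicy(2)/mbind(2), wrapped by libnuma's <numaif.h>; a minimal sketch, assuming node 0 exists and linking with -lnuma::

  /* Bind all future allocations of this task to NUMA node 0, then
   * allocate and touch some memory so pages are actually faulted in. */
  #include <numaif.h>          /* set_mempolicy(), MPOL_BIND */
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      unsigned long nodemask = 1UL << 0;      /* node 0 only */

      if (set_mempolicy(MPOL_BIND, &nodemask, 8 * sizeof(nodemask))) {
          perror("set_mempolicy");
          return 1;
      }

      char *buf = malloc(1 << 20);
      if (!buf)
          return 1;
      for (size_t i = 0; i < (1 << 20); i += 4096)
          buf[i] = 0;                         /* fault pages in from node 0 */

      puts("1 MiB allocated under MPOL_BIND to node 0");
      free(buf);
      return 0;
  }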
| /Documentation/core-api/ |
| D | memory-hotplug.rst | Memory hotplug … Memory hotplug event notifier … There are six types of notification defined in ``include/linux/memory.h``: … Generated before new memory becomes available in order to be able to prepare subsystems to handle memory. The page allocator is still unable to allocate from the new memory. … Generated when memory has successfully been brought online. The callback may allocate pages from the new memory. … Generated to begin the process of offlining memory. Allocations are no longer possible from the memory but some of the memory to be offlined [all …]
|
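The core-api memory-hotplug.rst excerpt above lists the notifications from include/linux/memory.h. A subsystem that has to react to memory coming and going registers a callback with register_memory_notifier(); a hedged in-kernel sketch (the per-event work in the switch is hypothetical)::

  #include <linux/memory.h>    /* MEM_GOING_ONLINE etc., register_memory_notifier() */
  #include <linux/module.h>
  #include <linux/notifier.h>

  static int example_mem_event(struct notifier_block *nb,
                               unsigned long action, void *arg)
  {
      struct memory_notify *mn = arg;     /* describes the affected pfn range */

      switch (action) {
      case MEM_GOING_ONLINE:
          /* prepare for the range; the page allocator cannot use it yet */
          break;
      case MEM_ONLINE:
          /* the range is usable now; the callback may allocate from it */
          break;
      case MEM_GOING_OFFLINE:
          /* stop using pages in mn->start_pfn .. mn->start_pfn + mn->nr_pages */
          break;
      default:
          break;
      }
      return NOTIFY_OK;
  }

  static struct notifier_block example_mem_nb = {
      .notifier_call = example_mem_event,
  };

  static int __init example_init(void)
  {
      return register_memory_notifier(&example_mem_nb);
  }

  static void __exit example_exit(void)
  {
      unregister_memory_notifier(&example_mem_nb);
  }

  module_init(example_init);
  module_exit(example_exit);
  MODULE_LICENSE("GPL");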
| D | swiotlb.rst | .. SPDX-License-Identifier: GPL-2.0 … swiotlb is a memory buffer allocator used by the Linux kernel DMA layer. It is typically used when a device doing DMA can't directly access the target memory buffer because of hardware limitations or other requirements. In such a case, the DMA layer calls swiotlb to allocate a temporary memory buffer that conforms to the limitations. The DMA is done to/from this temporary memory buffer, and … memory buffer. This approach is generically called "bounce buffering", and the temporary memory buffer is called a "bounce buffer". … These APIs use the device DMA attributes and kernel-wide settings to determine … device, some devices in a system may use bounce buffering while others do not. [all …]
|
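Drivers never call swiotlb directly; they use the ordinary DMA mapping API mentioned in the swiotlb.rst excerpt, and the DMA layer falls back to a bounce buffer only when the device cannot reach the original buffer. A hedged sketch of the driver-side pattern (dev, buf and len are placeholders)::

  #include <linux/dma-mapping.h>

  /* Map a kernel buffer for device reads. If the device cannot address
   * the buffer directly, the DMA layer transparently substitutes a
   * swiotlb bounce buffer and copies the data for us. */
  static int send_to_device(struct device *dev, void *buf, size_t len)
  {
      dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

      if (dma_mapping_error(dev, handle))
          return -ENOMEM;

      /* ... program the device with 'handle' and start the transfer ... */

      dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
      return 0;
  }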
| /Documentation/mm/ |
| D | memory-model.rst | .. SPDX-License-Identifier: GPL-2.0 … Physical Memory Model … Physical memory in a system may be addressed in different ways. The simplest case is when the physical memory starts at address 0 and … different memory banks are attached to different CPUs. … Linux abstracts this diversity using one of the two memory models: … memory models it supports, what the default memory model is and … All the memory models track the status of physical page frames using struct page arranged in one or more arrays. … Regardless of the selected memory model, there exists one-to-one [all …]
|
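The memory-model.rst excerpt breaks off right at the one-to-one relationship between page frames and struct page. In kernel code that relationship is the pfn_to_page()/page_to_pfn() pair, which behaves the same under FLATMEM and SPARSEMEM; a tiny illustrative sketch::

  #include <linux/mm.h>

  /* Whatever memory model the kernel was built with, every valid pfn maps
   * to exactly one struct page and back again. */
  static void pfn_roundtrip(unsigned long pfn)
  {
      struct page *page;

      if (!pfn_valid(pfn))
          return;

      page = pfn_to_page(pfn);
      WARN_ON(page_to_pfn(page) != pfn);  /* holds for every valid pfn */
  }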
| D | hmm.rst | Heterogeneous Memory Management (HMM) … Provide infrastructure and helpers to integrate non-conventional memory (device memory like GPU on board memory) into regular kernel path, with the cornerstone of this being specialized struct page for such memory (see sections 5 to 7 of … HMM also provides optional helpers for SVM (Share Virtual Memory), i.e., … This document is divided as follows: in the first section I expose the problems related to using device specific memory allocators. In the second section, I … CPU page-table mirroring works and the purpose of HMM in this context. The fifth section deals with how device memory is represented inside the kernel. … Problems of using a device specific memory allocator [all …]
|
| D | numa.rst | or more CPUs, local memory, and/or IO buses. For brevity and to … 'cells' in this document. … Each of the 'cells' may be viewed as an SMP [symmetric multi-processor] subset of the system--although some components necessary for a stand-alone SMP system … connected together with some sort of system interconnect--e.g., a crossbar or point-to-point link are common types of NUMA system interconnects. Both of … Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible … is handled in hardware by the processor caches and/or the system interconnect. … Memory access time and effective memory bandwidth varies depending on how far away the cell containing the CPU or IO bus making the memory access is from the [all …]
|
| D | page_tables.rst | .. SPDX-License-Identifier: GPL-2.0 … Paged virtual memory was invented along with virtual memory as a concept in … virtual memory. The feature migrated to newer computers and became a de facto feature of all Unix-like systems as time went by. In 1985 the feature was included in the Intel 80386, which was the CPU Linux 1.0 was developed on. … as seen on the external memory bus. … Linux defines page tables as a hierarchy which is currently five levels in … is the physical address of the page (as seen on the external memory bus) … Physical memory address 0 will be *pfn 0* and the highest pfn will be the last page of physical memory the external address bus of the CPU can [all …]
|
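The page_tables.rst excerpt describes the (currently five-level) hierarchy. A hand-rolled walk follows the same top-down order; a simplified sketch for one address in an mm (real code would usually rely on helpers such as get_user_pages() and must hold the appropriate page-table locks, which are omitted here)::

  #include <linux/mm.h>
  #include <linux/pgtable.h>

  /* Resolve 'addr' in 'mm' one level at a time: pgd -> p4d -> pud -> pmd
   * -> pte. Returns the zero pte if any level is missing. */
  static pte_t lookup_pte(struct mm_struct *mm, unsigned long addr)
  {
      pgd_t *pgd = pgd_offset(mm, addr);
      p4d_t *p4d;
      pud_t *pud;
      pmd_t *pmd;
      pte_t *ptep, pte = __pte(0);

      if (pgd_none(*pgd) || pgd_bad(*pgd))
          return pte;
      p4d = p4d_offset(pgd, addr);
      if (p4d_none(*p4d) || p4d_bad(*p4d))
          return pte;
      pud = pud_offset(p4d, addr);
      if (pud_none(*pud) || pud_bad(*pud))
          return pte;
      pmd = pmd_offset(pud, addr);
      if (pmd_none(*pmd) || pmd_bad(*pmd))
          return pte;

      ptep = pte_offset_map(pmd, addr);
      if (ptep) {
          pte = *ptep;
          pte_unmap(ptep);
      }
      return pte;
  }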
| D | physical_memory.rst | .. SPDX-License-Identifier: GPL-2.0 … Physical Memory … architecture-independent abstraction to represent the physical memory. This chapter describes the structures used to manage physical memory in a running … The first principal concept prevalent in the memory management is `Non-Uniform Memory Access (NUMA) <https://en.wikipedia.org/wiki/Non-uniform_memory_access>`_. With multi-core and multi-socket machines, memory may be arranged into banks … processor. For example, there might be a bank of memory assigned to each CPU or a bank of memory very suitable for DMA near peripheral devices. [all …]
|
| /Documentation/admin-guide/cgroup-v1/ |
| D | memory.rst | Memory Resource Controller … The Memory Resource Controller has generically been referred to as the memory controller in this document. Do not confuse memory controller used here with the memory controller that is used in hardware. … When we mention a cgroup (cgroupfs's directory) with memory controller, we call it "memory cgroup". When you see git-log and source code, you'll … In this document, we avoid using it. … Benefits and Purpose of the memory controller … The memory controller isolates the memory behaviour of a group of tasks … uses of the memory controller. The memory controller can be used to [all …]
|
| D | cpusets.rst | - Portions Copyright (c) 2004-2006 Silicon Graphics, Inc. - Modified by Paul Jackson <pj@sgi.com> - Modified by Christoph Lameter <cl@linux.com> - Modified by Paul Menage <menage@google.com> - Modified by Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com> … 1.6 What is memory spread ? … Cpusets provide a mechanism for assigning a set of CPUs and Memory Nodes to a set of tasks. In this document "Memory Node" refers to an on-line node that contains memory. [all …]
|
| /Documentation/admin-guide/mm/damon/ |
| D | reclaim.rst | .. SPDX-License-Identifier: GPL-2.0 … DAMON-based Reclamation … DAMON-based Reclamation (DAMON_RECLAIM) is a static kernel module that aimed to be used for proactive and lightweight reclamation under light memory pressure. It doesn't aim to replace the LRU-list based page_granularity reclamation, but to be selectively used for different level of memory pressure and requirements. … On general memory over-committed systems, proactively reclaiming cold pages helps saving memory and reducing latency spikes that incurred by the direct … Free Pages Reporting [3]_ based memory over-commit virtualization systems are good example of the cases. In such systems, the guest VMs reports their free [all …]
|
| /Documentation/dev-tools/ |
| D | kmemleak.rst | Kernel Memory Leak Detector … Kmemleak provides a way of detecting possible kernel memory leaks in a … Valgrind tool (``memcheck --leak-check``) to detect the memory leaks in user-space applications. … CONFIG_DEBUG_KMEMLEAK in "Kernel hacking" has to be enabled. A kernel thread scans the memory every 10 minutes (by default) and prints the … # mount -t debugfs nodev /sys/kernel/debug/ … To display the details of all the possible scanned memory leaks:: … To trigger an intermediate memory scan:: [all …]
|
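To see kmemleak (described in the excerpt above) report something, it is enough to allocate an object and drop the only reference to it; after the periodic scan, or an explicit "echo scan > /sys/kernel/debug/kmemleak", the allocation is listed as a possible leak. A deliberately leaky module sketch::

  #include <linux/module.h>
  #include <linux/slab.h>

  static int __init leaky_init(void)
  {
      /* Allocate and immediately forget the pointer: once init returns,
       * nothing references the object, so a kmemleak scan should flag it. */
      void *orphan = kmalloc(64, GFP_KERNEL);

      pr_info("leaky: allocated %p and dropped the reference\n", orphan);
      return 0;
  }

  static void __exit leaky_exit(void)
  {
  }

  module_init(leaky_init);
  module_exit(leaky_exit);
  MODULE_LICENSE("GPL");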
| D | kasan.rst | .. SPDX-License-Identifier: GPL-2.0 … Kernel Address Sanitizer (KASAN) is a dynamic memory safety error detector designed to find out-of-bounds and use-after-free bugs. … 2. Software Tag-Based KASAN 3. Hardware Tag-Based KASAN … architectures, but it has significant performance and memory overheads. … Software Tag-Based KASAN or SW_TAGS KASAN, enabled with CONFIG_KASAN_SW_TAGS, … This mode is only supported for arm64, but its moderate memory overhead allows using it for testing on memory-restricted devices with real workloads. [all …]
|
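Any of the three KASAN modes named in the excerpt reports the classic slab bugs. A deliberately buggy in-kernel sketch that a CONFIG_KASAN build would flag as slab-out-of-bounds and then use-after-free::

  #include <linux/slab.h>

  static void kasan_demo(void)
  {
      char *p = kmalloc(8, GFP_KERNEL);

      if (!p)
          return;

      p[8] = 'x';     /* one past the allocation: slab-out-of-bounds report */

      kfree(p);
      p[0] = 'y';     /* write after free: use-after-free report */
  }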
| /Documentation/driver-api/pci/ |
| D | p2pdma.rst | .. SPDX-License-Identifier: GPL-2.0 … PCI Peer-to-Peer DMA Support … called Peer-to-Peer (or P2P). However, there are a number of issues that make P2P transactions tricky to do in a perfectly safe way. … transactions between hierarchy domains, and in PCIe, each Root Port … same PCI bridge, as such devices are all in the same PCI hierarchy … The second issue is that to make use of existing interfaces in Linux, memory that is used for P2P transactions needs to be backed by struct … In a given P2P implementation there may be three or more different types of kernel drivers in play: [all …]
|
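The p2pdma.rst excerpt stops where it notes that P2P memory must be backed by struct page. The kernel's pci_p2pdma helpers (include/linux/pci-p2pdma.h) provide exactly that for a BAR; a rough, hedged sketch of how a provider driver might publish such memory (the BAR number and buffer size are illustrative assumptions)::

  #include <linux/pci.h>
  #include <linux/pci-p2pdma.h>

  /* Expose BAR 4 as struct-page-backed P2P memory, publish it, and carve
   * out a buffer that a peer device could target with DMA. */
  static void *example_p2p_setup(struct pci_dev *pdev, size_t bufsize)
  {
      int rc = pci_p2pdma_add_resource(pdev, 4, pci_resource_len(pdev, 4), 0);

      if (rc)
          return NULL;

      pci_p2pmem_publish(pdev, true);

      return pci_alloc_p2pmem(pdev, bufsize);  /* freed with pci_free_p2pmem() */
  }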
| /Documentation/arch/arm64/ |
| D | kdump.rst | crashkernel memory reservation on arm64 … it can be subsequently analyzed. In order to do this, a preliminarily reserved memory is needed to pre-load the kdump kernel and boot such … That reserved memory for kdump is adapted to be able to minimally … Through the kernel parameters below, memory can be reserved accordingly … large chunk of memory can be found. The low memory reservation needs to be considered if the crashkernel is reserved from the high memory area. … - crashkernel=size@offset - crashkernel=size - crashkernel=size,high crashkernel=size,low [all …]
|
| /Documentation/arch/x86/ |
| D | tdx.rst | .. SPDX-License-Identifier: GPL-2.0 … encrypting the guest memory. In TDX, a special module running in a special … CPU-attested software module called 'the TDX module' runs inside the new … TDX also leverages Intel Multi-Key Total Memory Encryption (MKTME) to provide crypto-protection to the VMs. TDX reserves part of MKTME KeyIDs … TDX boot-time detection … special error. In this case the kernel fails the module initialization … use it as 'metadata' for the TDX memory. It also takes additional CPU [all …]
|
| /Documentation/ABI/stable/ |
| D | sysfs-devices-node | Contact: Linux Memory Management list <linux-mm@kvack.org> (the same contact line is repeated for each matching ABI entry) … Nodes that have regular memory. … Nodes that have regular or high memory. [all …]
|
| /Documentation/arch/powerpc/ |
| D | firmware-assisted-dump.rst | Firmware-Assisted Dump … The goal of firmware-assisted dump is to enable the dump of a crashed system, and to do so from a fully-reset system, and … in production use. … - Firmware-Assisted Dump (FADump) infrastructure is intended to replace … - Fadump uses the same firmware interfaces and memory reservation model … - Unlike phyp dump, FADump exports the memory dump through /proc/vmcore in the ELF format in the same way as kdump. This helps us reuse the … - Unlike phyp dump, userspace tool does not need to refer any sysfs … - Unlike phyp dump, FADump allows user to release all the memory reserved [all …]
|
| /Documentation/admin-guide/sysctl/ |
| D | vm.rst | For general info and legal blurb, please look in index.rst. … This file contains the documentation for the sysctl files in … The files in this directory can be used to tune the operation of the virtual memory (VM) subsystem of the Linux kernel and … files can be found in mm/swap.c. … Currently, these files are in /proc/sys/vm: … - admin_reserve_kbytes - compact_memory - compaction_proactiveness [all …]
|
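The files listed in the vm.rst excerpt are plain /proc/sys/vm entries, so they can be tuned with sysctl(8) or a simple write; a userspace sketch that triggers compaction through the compact_memory knob named above (needs root)::

  /* Writing a value to /proc/sys/vm/compact_memory asks the kernel to
   * compact memory in all zones. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("/proc/sys/vm/compact_memory", O_WRONLY);

      if (fd < 0) {
          perror("open");
          return 1;
      }
      if (write(fd, "1", 1) != 1) {
          perror("write");
          close(fd);
          return 1;
      }
      close(fd);
      puts("requested memory compaction");
      return 0;
  }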
| /Documentation/userspace-api/ |
| D | mseal.rst | .. SPDX-License-Identifier: GPL-2.0 … Modern CPUs support memory permissions such as RW and NX bits. The memory permission feature improves security stance on memory corruption bugs, i.e. the attacker can’t just write to arbitrary memory and point the code to it, the memory has to be marked with X bit, or else an exception will happen. … Memory sealing additionally protects the mapping itself against modifications. This is useful to mitigate memory corruption issues where a corrupted pointer is passed to a memory management system. For example, such an attacker primitive can break control-flow integrity guarantees since read-only memory that is supposed to be trusted can become writable [all …]
|
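The interface behind the mseal.rst excerpt is the mseal(2) syscall (kernel 6.10+). A userspace sketch; the fallback syscall number below is an assumption for x86-64 and is only needed while libc headers lack the definition. Once a mapping is sealed, later attempts to remap it or change its protections fail with EPERM::

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  #ifndef __NR_mseal
  #define __NR_mseal 462            /* assumed x86-64 syscall number */
  #endif

  int main(void)
  {
      size_t len = 4096;
      void *p = mmap(NULL, len, PROT_READ,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

      if (p == MAP_FAILED)
          return 1;

      if (syscall(__NR_mseal, p, len, 0UL)) {  /* flags must currently be 0 */
          perror("mseal");
          return 1;
      }

      /* The mapping is sealed: this mprotect() is expected to fail. */
      if (mprotect(p, len, PROT_READ | PROT_WRITE))
          perror("mprotect on sealed region (expected)");

      return 0;
  }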
| /Documentation/security/ |
| D | self-protection.rst | Kernel Self-Protection … Kernel self-protection is the design and implementation of systems and structures within the Linux kernel to protect against security flaws in … and actively detecting attack attempts. Not all topics are explored in … In the worst-case scenario, we assume an unprivileged local attacker has arbitrary read and write access to the kernel's memory. In many … but with systems in place that defend against the worst case we'll … still be kept in mind, is protecting the kernel against a _privileged_ … The goals for successful self-protection systems would be that they are effective, on by default, require no opt-in by developers, have no [all …]
|