Searched +full:memory +full:- +full:to +full:- +full:memory (Results 1 – 25 of 1113) sorted by relevance
Pages: 1 2 3 4 5 6 7 8 9 10 … 45
| /kernel/linux/linux-5.10/Documentation/admin-guide/mm/ |
| D | memory-hotplug.rst |
    4: Memory Hotplug
    10: This document is about memory hotplug including how-to-use and current status.
    11: Because Memory Hotplug is still under development, contents of this text will
    18: (1) x86_64's has special implementation for memory hotplug.
    26: Purpose of memory hotplug
    27: -------------------------
    29: Memory Hotplug allows users to increase/decrease the amount of memory.
    32: (A) For changing the amount of memory.
    33: This is to allow a feature like capacity on demand.
    34: (B) For installing/removing DIMMs or NUMA-nodes physically.
    [all …]
|
| D | concepts.rst |
    7: The memory management in Linux is a complex system that evolved over the
    8: years and included more and more functionality to support a variety of
    9: systems from MMU-less microcontrollers to supercomputers. The memory
    14: address to a physical address.
    18: Virtual Memory Primer
    21: The physical memory in a computer system is a limited resource and
    22: even for systems that support memory hotplug there is a hard limit on
    23: the amount of memory that can be installed. The physical memory is not
    29: All this makes dealing directly with physical memory quite complex and
    30: to avoid this complexity a concept of virtual memory was developed.
    [all …]
|
| D | numaperf.rst |
    7: Some platforms may have multiple types of memory attached to a compute
    8: node. These disparate memory ranges may share some characteristics, such
    12: A system supports such heterogeneous memory by grouping each memory type
    14: characteristics. Some memory may share the same node as a CPU, and others
    15: are provided as memory only nodes. While memory only nodes do not provide
    16: CPUs, they may still be local to one or more compute nodes relative to
    18: nodes with local memory and a memory only node for each of compute node::
    20: +------------------+     +------------------+
    21: | Compute Node 0   +-----+ Compute Node 1   |
    23: +--------+---------+     +--------+---------+
    [all …]
|
| /kernel/linux/linux-6.6/Documentation/admin-guide/mm/ |
| D | memory-hotplug.rst |
    2: Memory Hot(Un)Plug
    5: This document describes generic Linux support for memory hot(un)plug with
    13: Memory hot(un)plug allows for increasing and decreasing the size of physical
    14: memory available to a machine at runtime. In the simplest case, it consists of
    18: Memory hot(un)plug is used for various purposes:
    20: - The physical memory available to a machine can be adjusted at runtime, up- or
    21:   downgrading the memory capacity. This dynamic memory resizing, sometimes
    22:   referred to as "capacity on demand", is frequently used with virtual machines
    25: - Replacing hardware, such as DIMMs or whole NUMA nodes, without downtime. One
    26:   example is replacing failing memory modules.
    [all …]
|
| D | concepts.rst |
    5: The memory management in Linux is a complex system that evolved over the
    6: years and included more and more functionality to support a variety of
    7: systems from MMU-less microcontrollers to supercomputers. The memory
    12: address to a physical address.
    16: Virtual Memory Primer
    19: The physical memory in a computer system is a limited resource and
    20: even for systems that support memory hotplug there is a hard limit on
    21: the amount of memory that can be installed. The physical memory is not
    27: All this makes dealing directly with physical memory quite complex and
    28: to avoid this complexity a concept of virtual memory was developed.
    [all …]
|
| D | numaperf.rst |
    2: NUMA Memory Performance
    8: Some platforms may have multiple types of memory attached to a compute
    9: node. These disparate memory ranges may share some characteristics, such
    13: A system supports such heterogeneous memory by grouping each memory type
    15: characteristics. Some memory may share the same node as a CPU, and others
    16: are provided as memory only nodes. While memory only nodes do not provide
    17: CPUs, they may still be local to one or more compute nodes relative to
    19: nodes with local memory and a memory only node for each of compute node::
    21: +------------------+     +------------------+
    22: | Compute Node 0   +-----+ Compute Node 1   |
    [all …]
|
| /kernel/linux/linux-6.6/Documentation/admin-guide/cgroup-v1/ |
| D | memory.rst |
    2: Memory Resource Controller
    8: here but make sure to check the current code if you need a deeper
    12: The Memory Resource Controller has generically been referred to as the
    13: memory controller in this document. Do not confuse memory controller
    14: used here with the memory controller that is used in hardware.
    17: When we mention a cgroup (cgroupfs's directory) with memory controller,
    18: we call it "memory cgroup". When you see git-log and source code, you'll
    19: see patch's title and function names tend to use "memcg".
    22: Benefits and Purpose of the memory controller
    25: The memory controller isolates the memory behaviour of a group of tasks
    [all …]
|
| /kernel/linux/linux-5.10/Documentation/admin-guide/cgroup-v1/ |
| D | memory.rst |
    2: Memory Resource Controller
    8: here but make sure to check the current code if you need a deeper
    12: The Memory Resource Controller has generically been referred to as the
    13: memory controller in this document. Do not confuse memory controller
    14: used here with the memory controller that is used in hardware.
    17: When we mention a cgroup (cgroupfs's directory) with memory controller,
    18: we call it "memory cgroup". When you see git-log and source code, you'll
    19: see patch's title and function names tend to use "memcg".
    22: Benefits and Purpose of the memory controller
    25: The memory controller isolates the memory behaviour of a group of tasks
    [all …]
|
| /kernel/linux/linux-6.6/Documentation/mm/ |
| D | memory-model.rst |
    1: .. SPDX-License-Identifier: GPL-2.0
    4: Physical Memory Model
    7: Physical memory in a system may be addressed in different ways. The
    8: simplest case is when the physical memory starts at address 0 and
    9: spans a contiguous range up to the maximal address. It could be,
    13: different memory banks are attached to different CPUs.
    15: Linux abstracts this diversity using one of the two memory models:
    17: memory models it supports, what the default memory model is and
    18: whether it is possible to manually override that default.
    20: All the memory models track the status of physical page frames using
    [all …]
|
| D | hmm.rst |
    2: Heterogeneous Memory Management (HMM)
    5: Provide infrastructure and helpers to integrate non-conventional memory (device
    6: memory like GPU on board memory) into regular kernel path, with the cornerstone
    7: of this being specialized struct page for such memory (see sections 5 to 7 of
    10: HMM also provides optional helpers for SVM (Share Virtual Memory), i.e.,
    11: allowing a device to transparently access program addresses coherently with
    13: for the device. This is becoming mandatory to simplify the use of advanced
    14: heterogeneous computing where GPU, DSP, or FPGA are used to perform various
    18: related to using device specific memory allocators. In the second section, I
    19: expose the hardware limitations that are inherent to many platforms. The third
    [all …]
|
| D | numa.rst |
    12: or more CPUs, local memory, and/or IO buses. For brevity and to
    17: Each of the 'cells' may be viewed as an SMP [symmetric multi-processor] subset
    18: of the system--although some components necessary for a stand-alone SMP system
    20: connected together with some sort of system interconnect--e.g., a crossbar or
    21: point-to-point link are common types of NUMA system interconnects. Both of
    22: these types of interconnects can be aggregated to create NUMA platforms with
    26: Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible
    27: to and accessible from any CPU attached to any cell and cache coherency
    30: Memory access time and effective memory bandwidth varies depending on how far
    31: away the cell containing the CPU or IO bus making the memory access is from the
    [all …]
|
| /kernel/linux/linux-5.10/Documentation/vm/ |
| D | memory-model.rst |
    1: .. SPDX-License-Identifier: GPL-2.0
    6: Physical Memory Model
    9: Physical memory in a system may be addressed in different ways. The
    10: simplest case is when the physical memory starts at address 0 and
    11: spans a contiguous range up to the maximal address. It could be,
    15: different memory banks are attached to different CPUs.
    17: Linux abstracts this diversity using one of the three memory models:
    19: memory models it supports, what the default memory model is and
    20: whether it is possible to manually override that default.
    26: All the memory models track the status of physical page frames using
    [all …]
|
| D | numa.rst |
    14: or more CPUs, local memory, and/or IO buses. For brevity and to
    19: Each of the 'cells' may be viewed as an SMP [symmetric multi-processor] subset
    20: of the system--although some components necessary for a stand-alone SMP system
    22: connected together with some sort of system interconnect--e.g., a crossbar or
    23: point-to-point link are common types of NUMA system interconnects. Both of
    24: these types of interconnects can be aggregated to create NUMA platforms with
    28: Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible
    29: to and accessible from any CPU attached to any cell and cache coherency
    32: Memory access time and effective memory bandwidth varies depending on how far
    33: away the cell containing the CPU or IO bus making the memory access is from the
    [all …]
|
| D | hmm.rst |
    4: Heterogeneous Memory Management (HMM)
    7: Provide infrastructure and helpers to integrate non-conventional memory (device
    8: memory like GPU on board memory) into regular kernel path, with the cornerstone
    9: of this being specialized struct page for such memory (see sections 5 to 7 of
    12: HMM also provides optional helpers for SVM (Share Virtual Memory), i.e.,
    13: allowing a device to transparently access program addresses coherently with
    15: for the device. This is becoming mandatory to simplify the use of advanced
    16: heterogeneous computing where GPU, DSP, or FPGA are used to perform various
    20: related to using device specific memory allocators. In the second section, I
    21: expose the hardware limitations that are inherent to many platforms. The third
    [all …]
|
| /kernel/linux/linux-5.10/mm/ |
| D | Kconfig |
    1: # SPDX-License-Identifier: GPL-2.0-only
    3: menu "Memory Management options"
    10: prompt "Memory model"
    16: This option allows you to change some of the ways that
    17: Linux manages its memory internally. Most users will
    22: bool "Flat Memory"
    25: This option is best suited for non-NUMA systems with
    31: spaces and for features like NUMA and memory hotplug,
    32: choose "Sparse Memory".
    34: If unsure, choose this option (Flat Memory) over any other.
    [all …]
|
| /kernel/linux/linux-6.6/Documentation/arch/arm64/ |
| D | kdump.rst |
    2: crashkernel memory reservation on arm64
    7: Kdump mechanism is used to capture a corrupted kernel vmcore so that
    8: it can be subsequently analyzed. In order to do this, a preliminarily
    9: reserved memory is needed to pre-load the kdump kernel and boot such
    12: That reserved memory for kdump is adapted to be able to minimally
    19: Through the kernel parameters below, memory can be reserved accordingly
    21: large chunk of memory can be found. The low memory reservation needs to
    22: be considered if the crashkernel is reserved from the high memory area.
    24: - crashkernel=size@offset
    25: - crashkernel=size
    [all …]
|
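The kdump.rst hit above lists the two `crashkernel=` forms documented for arm64. As a sketch of how they appear on a kernel command line (the sizes and offset below are illustrative placeholders, not recommended values):

```text
# Reserve 256 MiB wherever the kernel can place it:
crashkernel=256M

# Reserve 256 MiB at a fixed physical offset (value is hypothetical):
crashkernel=256M@0x90000000
```

Whether a given size is sufficient depends on the kdump kernel and initrd being loaded; the reservation fails at boot if no suitable region is found.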
| /kernel/liteos_m/kernel/include/ |
| D | los_memory.h |
    2: * Copyright (c) 2013-2019 Huawei Technologies Co., Ltd. All rights reserved.
    3: * Copyright (c) 2020-2022 Huawei Device Co., Ltd. All rights reserved.
    16: * to endorse or promote products derived from this software without specific prior written
    20: * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
    24: * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
    33: * @defgroup los_memory Dynamic memory
    56: * Starting address of the memory.
    67: * <li>This API is used to print function call stack information of all used nodes.</li>
    70: * @param pool [IN] Starting address of memory.
    85: * @brief Deinitialize dynamic memory.
    [all …]
|
| /kernel/linux/linux-6.6/Documentation/core-api/ |
| D | memory-hotplug.rst |
    4: Memory hotplug
    7: Memory hotplug event notifier
    10: Hotplugging events are sent to a notification queue.
    12: There are six types of notification defined in ``include/linux/memory.h``:
    15: Generated before new memory becomes available in order to be able to
    16: prepare subsystems to handle memory. The page allocator is still unable
    17: to allocate from the new memory.
    23: Generated when memory has successfully brought online. The callback may
    24: allocate pages from the new memory.
    27: Generated to begin the process of offlining memory. Allocations are no
    [all …]
|
| /kernel/linux/linux-5.10/Documentation/core-api/ |
| D | memory-hotplug.rst |
    4: Memory hotplug
    7: Memory hotplug event notifier
    10: Hotplugging events are sent to a notification queue.
    12: There are six types of notification defined in ``include/linux/memory.h``:
    15: Generated before new memory becomes available in order to be able to
    16: prepare subsystems to handle memory. The page allocator is still unable
    17: to allocate from the new memory.
    23: Generated when memory has successfully brought online. The callback may
    24: allocate pages from the new memory.
    27: Generated to begin the process of offlining memory. Allocations are no
    [all …]
|
| /kernel/linux/linux-6.6/Documentation/ABI/testing/ |
| D | sysfs-devices-memory |
    1: What: /sys/devices/system/memory
    5: The /sys/devices/system/memory contains a snapshot of the
    6: internal state of the kernel memory blocks. Files could be
    7: added or removed dynamically to represent hot-add/remove
    9: Users: hotplug memory add/remove tools
    10: http://www.ibm.com/developerworks/wikis/display/LinuxP/powerpc-utils
    12: What: /sys/devices/system/memory/memoryX/removable
    16: The file /sys/devices/system/memory/memoryX/removable is a
    17: legacy interface used to indicated whether a memory block is
    18: likely to be offlineable or not. Newer kernel versions return
    [all …]
|
| /kernel/linux/linux-5.10/Documentation/powerpc/ |
| D | firmware-assisted-dump.rst |
    2: Firmware-Assisted Dump
    7: The goal of firmware-assisted dump is to enable the dump of
    8: a crashed system, and to do so from a fully-reset system, and
    9: to minimize the total elapsed time until the system is back
    12: - Firmware-Assisted Dump (FADump) infrastructure is intended to replace
    14: - Fadump uses the same firmware interfaces and memory reservation model
    16: - Unlike phyp dump, FADump exports the memory dump through /proc/vmcore
    19: - Unlike phyp dump, userspace tool does not need to refer any sysfs
    21: - Unlike phyp dump, FADump allows user to release all the memory reserved
    23: - Once enabled through kernel boot parameter, FADump can be
    [all …]
|
| /kernel/linux/linux-6.6/Documentation/powerpc/ |
| D | firmware-assisted-dump.rst |
    2: Firmware-Assisted Dump
    7: The goal of firmware-assisted dump is to enable the dump of
    8: a crashed system, and to do so from a fully-reset system, and
    9: to minimize the total elapsed time until the system is back
    12: - Firmware-Assisted Dump (FADump) infrastructure is intended to replace
    14: - Fadump uses the same firmware interfaces and memory reservation model
    16: - Unlike phyp dump, FADump exports the memory dump through /proc/vmcore
    19: - Unlike phyp dump, userspace tool does not need to refer any sysfs
    21: - Unlike phyp dump, FADump allows user to release all the memory reserved
    23: - Once enabled through kernel boot parameter, FADump can be
    [all …]
|
| /kernel/liteos_a/kernel/include/ |
| D | los_memory.h |
    2: * Copyright (c) 2013-2019 Huawei Technologies Co., Ltd. All rights reserved.
    3: * Copyright (c) 2020-2021 Huawei Device Co., Ltd. All rights reserved.
    16: * to endorse or promote products derived from this software without specific prior written
    20: * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
    24: * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
    33: * @defgroup los_memory Dynamic memory
    55: * The omit layers of function call from call kernel memory interfaces
    69: * The start address of exc interaction dynamic memory pool address, when the exc
    70: * interaction feature not support, m_aucSysMem0 equals to m_aucSysMem1.
    76: * The start address of system dynamic memory pool address.
    [all …]
|
| /kernel/linux/linux-5.10/Documentation/ABI/testing/ |
| D | sysfs-devices-memory |
    1: What: /sys/devices/system/memory
    5: The /sys/devices/system/memory contains a snapshot of the
    6: internal state of the kernel memory blocks. Files could be
    7: added or removed dynamically to represent hot-add/remove
    9: Users: hotplug memory add/remove tools
    10: http://www.ibm.com/developerworks/wikis/display/LinuxP/powerpc-utils
    12: What: /sys/devices/system/memory/memoryX/removable
    16: The file /sys/devices/system/memory/memoryX/removable
    17: indicates whether this memory block is removable or not.
    18: This is useful for a user-level agent to determine
    [all …]
|
| /kernel/liteos_m/testsuites/include/ |
| D | los_dlinkmem.h |
    2: * Copyright (c) 2013-2019 Huawei Technologies Co., Ltd. All rights reserved.
    3: * Copyright (c) 2020-2021 Huawei Device Co., Ltd. All rights reserved.
    16: * to endorse or promote products derived from this software without specific prior written
    20: * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
    24: * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
    51: * Memory pool information structure
    54: void *pPoolAddr;   /* *<Starting address of a memory pool */
    55: UINT32 uwPoolSize; /* *<Memory pool size */
    60: * Memory linked list node structure
    63: LOS_DL_LIST stFreeNodeInfo; /* *<Free memory node */
    [all …]
|