Searched full:memory (Results 1 – 25 of 1476) sorted by relevance

/Documentation/admin-guide/mm/

D | memory-hotplug.rst |
    4: Memory Hotplug
   10: This document is about memory hotplug, including how to use it and its current status.
   11: Because Memory Hotplug is still under development, the contents of this text will
   18: (1) x86_64 has a special implementation for memory hotplug.
   26: Purpose of memory hotplug
   29: Memory Hotplug allows users to increase/decrease the amount of memory.
   32: (A) For changing the amount of memory.
   38: hardware which supports memory power management.
   40: Linux memory hotplug is designed for both purposes.
   42: Phases of memory hotplug
  [all …]

D | concepts.rst |
    7: The memory management in Linux is a complex system that evolved over the
    9: systems from MMU-less microcontrollers to supercomputers. The memory
   18: Virtual Memory Primer
   21: The physical memory in a computer system is a limited resource, and
   22: even for systems that support memory hotplug there is a hard limit on
   23: the amount of memory that can be installed. The physical memory is not
   29: All this makes dealing directly with physical memory quite complex, and
   30: to avoid this complexity the concept of virtual memory was developed.
   32: Virtual memory abstracts the details of physical memory from the
   34: physical memory (demand paging) and provides a mechanism for the
  [all …]

D | numaperf.rst |
    7: Some platforms may have multiple types of memory attached to a compute
    8: node. These disparate memory ranges may share some characteristics, such
   12: A system supports such heterogeneous memory by grouping each memory type
   14: characteristics. Some memory may share the same node as a CPU, and others
   15: are provided as memory-only nodes. While memory-only nodes do not provide
   18: nodes with local memory and a memory-only node for each compute node::
   29: A "memory initiator" is a node containing one or more devices, such as
   30: CPUs or separate memory I/O devices, that can initiate memory requests.
   31: A "memory target" is a node containing one or more physical address
   32: ranges accessible from one or more memory initiators.
  [all …]

D | index.rst |
    2: Memory Management
    5: The Linux memory management subsystem is responsible, as the name implies,
    6: for managing the memory in the system. This includes implementation of
    7: virtual memory and demand paging, memory allocation both for kernel
   11: Linux memory management is a complex system with many configurable
   18: Linux memory management has its own jargon, and if you are not yet
   23: the Linux memory management.
   33: memory-hotplug

/Documentation/ABI/testing/

D | sysfs-devices-memory |
    1: What: /sys/devices/system/memory
    5: The /sys/devices/system/memory directory contains a snapshot of the
    6: internal state of the kernel memory blocks. Files could be
    9: Users: hotplug memory add/remove tools
   12: What: /sys/devices/system/memory/memoryX/removable
   16: The file /sys/devices/system/memory/memoryX/removable
   17: indicates whether this memory block is removable or not.
   19: identify removable sections of the memory before attempting
   20: the potentially expensive hot-remove memory operation.
   21: Users: hotplug memory remove tools
  [all …]
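
A minimal userspace sketch of consuming this ABI (the block name
memory42 is an arbitrary example; the file path follows the entries
described above)::

    /* Check whether a memory block is removable before hot-remove. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/sys/devices/system/memory/memory42/removable", "r");
        int removable = 0;

        if (!f) {
            perror("fopen");
            return 1;
        }
        if (fscanf(f, "%d", &removable) != 1)
            removable = 0;
        fclose(f);

        printf("memory42 is %sremovable\n", removable ? "" : "not ");
        return 0;
    }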

/Documentation/admin-guide/cgroup-v1/

D | memory.rst |
    2: Memory Resource Controller
   12: The Memory Resource Controller has generically been referred to as the
   13: memory controller in this document. Do not confuse the memory controller
   14: used here with the memory controller that is used in hardware.
   17: When we mention a cgroup (cgroupfs's directory) with the memory controller,
   18: we call it a "memory cgroup". When you see git-log and source code, you'll
   22: Benefits and Purpose of the memory controller
   25: The memory controller isolates the memory behaviour of a group of tasks
   27: uses of the memory controller. The memory controller can be used to
   30: Memory-hungry applications can be isolated and limited to a smaller
  [all …]
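
As a sketch only (assuming the v1 controller is mounted at
/sys/fs/cgroup/memory; the cgroup name "demo" and the 64 MiB limit are
arbitrary), a memory cgroup can be created and populated from C::

    /* Create a memory cgroup, cap it, and move the caller into it. */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static int write_file(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");

        if (!f)
            return -1;
        fputs(val, f);
        return fclose(f);
    }

    int main(void)
    {
        char pid[32];

        mkdir("/sys/fs/cgroup/memory/demo", 0755);
        write_file("/sys/fs/cgroup/memory/demo/memory.limit_in_bytes",
                   "67108864");
        snprintf(pid, sizeof(pid), "%d", getpid());
        return write_file("/sys/fs/cgroup/memory/demo/tasks", pid);
    }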

/Documentation/core-api/

D | memory-hotplug.rst |
    4: Memory hotplug
    7: Memory hotplug event notifier
   12: There are six types of notification defined in ``include/linux/memory.h``:
   15: Generated before new memory becomes available, in order to be able to
   16: prepare subsystems to handle memory. The page allocator is still unable
   17: to allocate from the new memory.
   23: Generated when memory has been successfully brought online. The callback may
   24: allocate pages from the new memory.
   27: Generated to begin the process of offlining memory. Allocations are no
   28: longer possible from the memory, but some of the memory to be offlined
  [all …]
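
A minimal sketch of such a notifier (the callback and module names are
made up; register_memory_notifier() and struct memory_notify come from
``include/linux/memory.h``)::

    #include <linux/init.h>
    #include <linux/memory.h>
    #include <linux/module.h>
    #include <linux/notifier.h>

    static int demo_mem_callback(struct notifier_block *nb,
                                 unsigned long action, void *arg)
    {
        struct memory_notify *mn = arg;

        switch (action) {
        case MEM_GOING_ONLINE:
            /* Prepare for pfns [start_pfn, start_pfn + nr_pages). */
            pr_info("going online at pfn %#lx (%lu pages)\n",
                    mn->start_pfn, mn->nr_pages);
            break;
        case MEM_ONLINE:
            /* The new memory is now usable by the page allocator. */
            break;
        case MEM_GOING_OFFLINE:
        case MEM_OFFLINE:
            break;
        }
        return NOTIFY_OK;
    }

    static struct notifier_block demo_mem_nb = {
        .notifier_call = demo_mem_callback,
    };

    static int __init demo_init(void)
    {
        return register_memory_notifier(&demo_mem_nb);
    }

    static void __exit demo_exit(void)
    {
        unregister_memory_notifier(&demo_mem_nb);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");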

D | memory-allocation.rst |
    4: Memory Allocation Guide
    7: Linux provides a variety of APIs for memory allocation. You can
   14: Most of the memory allocation APIs use GFP flags to express how that
   15: memory should be allocated. The GFP acronym stands for "get free
   16: pages", the underlying memory allocation function.
   19: makes the question "How should I allocate memory?" not that easy to
   32: The GFP flags control the allocator's behavior. They tell what memory
   34: memory, whether the memory can be accessed by userspace, etc. The
   39: * Most of the time ``GFP_KERNEL`` is what you need. Memory for the
   40: kernel data structures, DMAable memory, inode cache, all these and
  [all …]
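
A short sketch of the two most common cases (the struct and function
names are illustrative)::

    #include <linux/gfp.h>
    #include <linux/slab.h>

    struct demo_state {
        int value;
    };

    /* Process context, may sleep: GFP_KERNEL is the default choice. */
    static struct demo_state *demo_alloc(void)
    {
        return kzalloc(sizeof(struct demo_state), GFP_KERNEL);
    }

    /* Atomic context (e.g. an interrupt handler): GFP_ATOMIC never sleeps. */
    static struct demo_state *demo_alloc_atomic(void)
    {
        return kzalloc(sizeof(struct demo_state), GFP_ATOMIC);
    }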

D | bus-virt-phys-mapping.rst |
    2: How to access I/O mapped memory from within device drivers
   22: (because all bus master devices see the physical memory mappings directly).
   25: at memory addresses, and in this case we actually want the third, the
   28: Essentially, the three ways of addressing memory are (this is "real memory",
   32: 0 is what the CPU sees when it drives zeroes on the memory bus.
   38: - bus address. This is the address of memory as seen by OTHER devices,
   40: addresses, with each device seeing memory in some device-specific way, but
   43: external hardware sees the memory the same way.
   47: because the memory and the devices share the same address space, and that is
   51: CPU sees a memory map something like this (this is from memory)::
  [all …]
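
A sketch of the usual pattern for such access (the base address, size,
and register offset are placeholder values for a hypothetical device)::

    #include <linux/errno.h>
    #include <linux/io.h>
    #include <linux/types.h>

    #define DEMO_MMIO_BASE  0xfb000000UL  /* hypothetical bus address */
    #define DEMO_MMIO_SIZE  0x1000

    static int demo_map_and_poke(void)
    {
        void __iomem *regs;
        u32 status;

        regs = ioremap(DEMO_MMIO_BASE, DEMO_MMIO_SIZE);
        if (!regs)
            return -ENOMEM;

        status = readl(regs + 0x04);   /* never dereference directly */
        writel(status | 1, regs + 0x04);

        iounmap(regs);
        return 0;
    }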

/Documentation/userspace-api/media/v4l/

D | dev-mem2mem.rst |
    6: Video Memory-To-Memory Interface
    9: A V4L2 memory-to-memory device can compress, decompress, transform, or
   10: otherwise convert video data from one format into another format, in memory.
   11: Such memory-to-memory devices set the ``V4L2_CAP_VIDEO_M2M`` or
   12: ``V4L2_CAP_VIDEO_M2M_MPLANE`` capability. Examples of memory-to-memory
   16: A memory-to-memory video node acts just like a normal video node, but it
   17: supports both output (sending frames from memory to the hardware)
   19: memory) stream I/O. An application will have to set up the stream I/O for
   23: Memory-to-memory devices function as a shared resource: you can
   32: One of the most common memory-to-memory devices is the codec. Codecs
  [all …]
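
A userspace sketch of detecting that capability (/dev/video0 is an
assumed device node)::

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        struct v4l2_capability cap;
        int fd = open("/dev/video0", O_RDWR);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        memset(&cap, 0, sizeof(cap));
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0 &&
            (cap.device_caps &
             (V4L2_CAP_VIDEO_M2M | V4L2_CAP_VIDEO_M2M_MPLANE)))
            printf("%s is a mem2mem device\n", (char *)cap.card);

        close(fd);
        return 0;
    }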

/Documentation/vm/

D | memory-model.rst |
    6: Physical Memory Model
    9: Physical memory in a system may be addressed in different ways. The
   10: simplest case is when the physical memory starts at address 0 and
   15: different memory banks are attached to different CPUs.
   17: Linux abstracts this diversity using one of the three memory models:
   19: memory models it supports, what the default memory model is, and
   26: All the memory models track the status of physical page frames using
   29: Regardless of the selected memory model, there exists a one-to-one
   33: Each memory model defines :c:func:`pfn_to_page` and :c:func:`page_to_pfn`
   40: The simplest memory model is FLATMEM. This model is suitable for
  [all …]
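
A sketch of the round trip those helpers provide (demo_pfn is an
arbitrary page frame number, assumed valid after the pfn_valid() check)::

    #include <linux/mm.h>

    static void demo_pfn_roundtrip(unsigned long demo_pfn)
    {
        struct page *page;

        if (!pfn_valid(demo_pfn))
            return;

        page = pfn_to_page(demo_pfn);            /* PFN -> descriptor */
        WARN_ON(page_to_pfn(page) != demo_pfn);  /* descriptor -> PFN */
    }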

D | numa.rst |
   14: or more CPUs, local memory, and/or IO buses. For brevity and to
   28: Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible
   32: Memory access time and effective memory bandwidth vary depending on how far
   33: away the cell containing the CPU or IO bus making the memory access is from the
   34: cell containing the target memory. For example, access to memory by CPUs
   36: bandwidths than accesses to memory on other, remote cells. NUMA platforms
   41: memory bandwidth. However, to achieve scalable memory bandwidth, system and
   42: application software must arrange for a large majority of the memory references
   43: [cache misses] to be to "local" memory--memory on the same cell, if any--or
   44: to the closest cell with memory.
  [all …]

D | hmm.rst |
    4: Heterogeneous Memory Management (HMM)
    7: Provide infrastructure and helpers to integrate non-conventional memory (device
    8: memory like GPU on-board memory) into the regular kernel path, with the cornerstone
    9: of this being a specialized struct page for such memory (see sections 5 to 7 of
   12: HMM also provides optional helpers for SVM (Shared Virtual Memory), i.e.,
   20: related to using device-specific memory allocators. In the second section, I
   24: fifth section deals with how device memory is represented inside the kernel.
   30: Problems of using a device-specific memory allocator
   33: Devices with a large amount of on-board memory (several gigabytes), like GPUs,
   34: have historically managed their memory through dedicated driver-specific APIs.
  [all …]

/Documentation/devicetree/bindings/reserved-memory/

D | reserved-memory.txt |
    1: *** Reserved memory regions ***
    3: Reserved memory is specified as a node under the /reserved-memory node.
    4: The operating system shall exclude reserved memory from normal usage
    6: normal use) memory regions. Such memory regions are usually designed for
    9: Parameters for each memory region can be encoded into the device tree
   12: /reserved-memory node
   19: /reserved-memory/ child nodes
   21: Each child of the reserved-memory node specifies one or more regions of
   22: reserved memory. Each child node may either use a 'reg' property to
   23: specify a specific range of reserved memory, or a 'size' property with
  [all …]
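
On the driver side, a reserved region referenced through a
memory-region phandle can be claimed with the of_reserved_mem helpers.
A sketch, assuming a "shared-dma-pool" compatible region and a made-up
platform driver::

    #include <linux/of_reserved_mem.h>
    #include <linux/platform_device.h>

    static int demo_probe(struct platform_device *pdev)
    {
        int ret;

        /* Attach the region named by the memory-region property. */
        ret = of_reserved_mem_device_init(&pdev->dev);
        if (ret)
            return ret;

        /* DMA allocations on &pdev->dev now come from the reserved pool. */
        return 0;
    }

    static int demo_remove(struct platform_device *pdev)
    {
        of_reserved_mem_device_release(&pdev->dev);
        return 0;
    }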

D | xen,shared-memory.txt |
    1: * Xen hypervisor reserved-memory binding
    3: Expose one or more memory regions as reserved-memory to the guest
    5: to be a shared memory area across multiple virtual machines for
    8: For each of these pre-shared memory regions, a range is exposed under
    9: the /reserved-memory node as a child node. Each range sub-node is named
   13: compatible = "xen,shared-memory-v1"
   16: the base guest physical address and size of the shared memory region
   20: memory region used for the mapping in the borrower VM.
   23: a string that identifies the shared memory region as specified in

D | qcom,rmtfs-mem.txt |
    1: Qualcomm Remote File System Memory binding
    3: This binding describes the Qualcomm remote filesystem memory, which serves the
    4: purpose of describing the shared memory region used for remote processors to
   16: Definition: must specify base address and size of the memory region,
   17: as described in reserved-memory.txt
   22: Definition: must specify a size of the memory region, as described in
   23: reserved-memory.txt
   33: Definition: vmid of the remote processor, to set up memory protection.
   36: The following example shows the remote filesystem memory setup for APQ8016,
   39: reserved-memory {

/Documentation/driver-api/

D | edac.rst |
   16: * Memory devices
   18: The individual DRAM chips on a memory stick. These devices commonly
   20: provide the number of bits that the memory controller expects:
   23: * Memory Stick
   25: A printed circuit board that aggregates multiple memory devices in
   28: called a DIMM (Dual Inline Memory Module).
   30: * Memory Socket
   32: A physical connector on the motherboard that accepts a single memory
   37: A memory controller channel, responsible for communicating with a group of
   43: It is typically the highest hierarchy on a Fully-Buffered DIMM memory
  [all …]

D | ntb.rst |
    6: the separate memory systems of two or more computers to the same PCI-Express
    8: registers and memory translation windows, as well as non-common features like
   15: Memory windows allow translated read and write access to the peer memory.
   38: The primary purpose of NTB is to share a piece of memory between at least two
   40: mainly used to perform the proper memory window initialization. Typically
   41: there are two types of memory window interfaces supported by the NTB API:
   48: Memory: Local NTB Port: Peer NTB Port: Peer MMIO:
   51: | memory | _v____________ | ______________
   52: | (addr) |<======| MW xlat addr |<====| MW base addr |<== memory-mapped IO
   55: A typical scenario for the first type of memory window initialization looks like:
  [all …]
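
A hedged sketch of the first type of interface (peer index 0 and memory
window 0 are assumed to exist; the function name and buffer are made
up), where the local side programs the inbound translation::

    #include <linux/errno.h>
    #include <linux/ntb.h>

    static int demo_setup_inbound_mw(struct ntb_dev *ntb, dma_addr_t buf,
                                     resource_size_t size)
    {
        resource_size_t addr_align, size_align, size_max;
        int ret;

        /* Ask the hardware for the window's alignment constraints. */
        ret = ntb_mw_get_align(ntb, 0, 0, &addr_align, &size_align,
                               &size_max);
        if (ret)
            return ret;

        if (size > size_max)
            return -EINVAL;

        /* Point peer port 0's window 0 at our local DMA buffer. */
        return ntb_mw_set_trans(ntb, 0, 0, buf, size);
    }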

/Documentation/powerpc/

D | firmware-assisted-dump.rst |
   14: - Fadump uses the same firmware interfaces and memory reservation model
   16: - Unlike phyp dump, FADump exports the memory dump through /proc/vmcore
   21: - Unlike phyp dump, FADump allows the user to release all the memory reserved
   35: - Once the dump is copied out, the memory that held the dump
   44: - The first kernel registers the sections of memory with the
   46: These registered sections of memory are reserved by the first
   50: low memory regions (boot memory) from the source to the destination area.
   54: The term 'boot memory' means the size of the low memory chunk
   56: booted with restricted memory. By default, the boot memory
   58: Alternatively, the user can also specify the boot memory size
  [all …]

/Documentation/x86/

D | amd-memory-encryption.rst |
    4: AMD Memory Encryption
    7: Secure Memory Encryption (SME) and Secure Encrypted Virtualization (SEV) are
   10: SME provides the ability to mark individual pages of memory as encrypted using
   19: memory. Private memory is encrypted with the guest-specific key, while shared
   20: memory may be encrypted with the hypervisor key. When SME is enabled, the hypervisor
   38: memory. Since the memory encryption bit is controlled by the guest OS when it
   40: forces the memory encryption bit to 1.
   49: Bits[5:0] pagetable bit number used to activate memory
   52: memory encryption is enabled (this only affects
   57: determine if SME is enabled and/or to enable memory encryption::
  [all …]
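
A userspace sketch that probes the CPUID leaf this document describes
(leaf 0x8000001f; the bit positions follow the excerpt above)::

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(0x8000001f, &eax, &ebx, &ecx, &edx)) {
            puts("CPUID leaf 0x8000001f not available");
            return 1;
        }

        printf("SME supported:  %s\n", (eax & 1) ? "yes" : "no");
        printf("SEV supported:  %s\n", (eax & 2) ? "yes" : "no");
        printf("C-bit position: %u\n", ebx & 0x3f);
        return 0;
    }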

/Documentation/devicetree/bindings/memory-controllers/

D | nvidia,tegra210-emc.yaml |
    4: $id: http://devicetree.org/schemas/memory-controllers/nvidia,tegra210-emc.yaml#
    7: title: NVIDIA Tegra210 SoC External Memory Controller
   15: sent from the memory controller.
   26: - description: external memory clock
   36: memory-region:
   39: phandle to a reserved memory region describing the table of EMC
   42: nvidia,memory-controller:
   45: phandle of the memory controller node
   52: - nvidia,memory-controller
   61: reserved-memory {
  [all …]

/Documentation/dev-tools/

D | kmemleak.rst |
    1: Kernel Memory Leak Detector
    4: Kmemleak provides a way of detecting possible kernel memory leaks in a
    9: Valgrind tool (``memcheck --leak-check``) to detect the memory leaks in
   16: thread scans the memory every 10 minutes (by default) and prints the
   22: To display the details of all the possible scanned memory leaks::
   26: To trigger an intermediate memory scan::
   30: To clear the list of all current possible memory leaks::
   41: Memory scanning parameters can be modified at run-time by writing to the
   51: start the automatic memory scanning thread (default)
   53: stop the automatic memory scanning thread
  [all …]
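
For kernel code, kmemleak also offers annotations so that objects whose
only references live outside scanned memory are not reported. A sketch
(the helper is real, the surrounding function is made up)::

    #include <linux/kmemleak.h>
    #include <linux/slab.h>

    static void *demo_make_buffer(void)
    {
        void *buf = kmalloc(4096, GFP_KERNEL);

        if (!buf)
            return NULL;

        /*
         * The only reference will be stashed in device registers,
         * where the scanner cannot see it; suppress the false positive.
         */
        kmemleak_not_leak(buf);
        return buf;
    }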

/Documentation/devicetree/bindings/soc/qcom/

D | qcom,smem.txt |
    1: Qualcomm Shared Memory Manager binding
    3: This binding describes the Qualcomm Shared Memory Manager, used to share data
   12: - memory-region:
   15: Definition: handle to memory reservation for main SMEM memory region.
   20: Definition: handle to RPM message memory resource
   26: the shared memory
   32: reserved-memory {
   46: memory-region = <&smem_region>;
   53: rpm_msg_ram: memory@fc428000 {

/Documentation/devicetree/bindings/pmem/

D | pmem-region.txt |
    1: Device-tree bindings for persistent memory regions
    4: Persistent memory refers to a class of memory devices that are:
    6: a) Usable as main system memory (i.e. cacheable), and
    9: Given b), it is best to think of persistent memory as a kind of memory-mapped
   11: persistent regions separately from the normal memory pool. To aid with that, this
   13: memory regions exist inside the physical address space.
   24: range should be mappable as normal system memory would be
   36: backed by non-persistent memory. This lets the OS know that it
   41: is backed by non-volatile memory.
   48: * 0x5000 to 0x5fff that is backed by non-volatile memory.
  [all …]

/Documentation/devicetree/bindings/memory-controllers/fsl/

D | ddr.txt |
    1: Freescale DDR memory controller
    5: - compatible : Should include "fsl,chip-memory-controller", where
    7: "fsl,qoriq-memory-controller".
   15: memory-controller@2000 {
   16: compatible = "fsl,bsc9132-memory-controller";
   24: ddr1: memory-controller@8000 {
   25: compatible = "fsl,qoriq-memory-controller-v4.7",
   26: "fsl,qoriq-memory-controller";