Searched +full:page +full:- +full:level (Results 1 – 25 of 197) sorted by relevance

/Documentation/mm/
page_tables.rst
    1    .. SPDX-License-Identifier: GPL-2.0
    4    Page Tables
    10   feature of all Unix-like systems as time went by. In 1985 the feature was
    13   Page tables map virtual addresses as seen by the CPU into physical addresses
    16   Linux defines page tables as a hierarchy which is currently five levels in
    21   by the underlying physical page frame. The **page frame number** or **pfn**
    22   is the physical address of the page (as seen on the external memory bus)
    26   the last page of physical memory the external address bus of the CPU can
    29   With a page granularity of 4KB and an address range of 32 bits, pfn 0 is at
    34   As you can see, with 4KB pages the page base address uses bits 12-31 of the
    [all …]
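The pfn arithmetic this excerpt describes can be checked with a few shifts
and masks. A minimal standalone sketch, assuming the excerpt's 4KB pages
(12 offset bits); the variable names are illustrative, not the kernel's::

    #include <stdio.h>

    int main(void)
    {
        unsigned long long paddr = 0x12345678;    /* example physical address */
        unsigned long long pfn   = paddr >> 12;   /* pfn: drop the 12 offset bits */
        unsigned long long base  = pfn << 12;     /* page base address (bits 12-31) */
        unsigned long long off   = paddr & 0xfff; /* offset within the 4KB page */

        printf("paddr=%#llx pfn=%#llx base=%#llx offset=%#llx\n",
               paddr, pfn, base, off);
        return 0;
    }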
vmemmap_dedup.rst
    2    .. SPDX-License-Identifier: GPL-2.0
    13   The ``struct page`` structures are used to describe a physical page frame. By
    14   default, there is a one-to-one mapping from a page frame to its corresponding
    15   ``struct page``.
    17   HugeTLB pages consist of multiple base page size pages and are supported by many
    18   architectures. See Documentation/admin-guide/mm/hugetlbpage.rst for more
    19   details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB are
    20   currently supported. Since the base page size on x86 is 4KB, a 2MB HugeTLB page
    21   consists of 512 base pages and a 1GB HugeTLB page consists of 262144 base pages.
    22   For each base page, there is a corresponding ``struct page``.
    [all …]
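The 512 and 262144 figures follow directly from the page sizes quoted above;
a quick standalone check::

    #include <stdio.h>

    int main(void)
    {
        unsigned long base = 4UL << 10;   /* 4KB base page */
        unsigned long pmd  = 2UL << 20;   /* 2MB HugeTLB page */
        unsigned long pud  = 1UL << 30;   /* 1GB HugeTLB page */

        /* one struct page per base page, as the excerpt states */
        printf("2MB HugeTLB page: %lu base pages\n", pmd / base);  /* 512    */
        printf("1GB HugeTLB page: %lu base pages\n", pud / base);  /* 262144 */
        return 0;
    }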
split_page_table_lock.rst
    2    Split page table lock
    5    Originally, mm->page_table_lock spinlock protected all page tables of the
    6    mm_struct. But this approach leads to poor page fault scalability of
    7    multi-threaded applications due to high contention on the lock. To improve
    8    scalability, split page table lock was introduced.
    10   With split page table lock we have separate per-table lock to serialize
    12   tables. Access to higher level tables is protected by mm->page_table_lock.
    16   - pte_offset_map_lock()
    19   - pte_offset_map_ro_nolock()
    22   - pte_offset_map_rw_nolock()
    [all …]
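The listed helpers are used in a take/use/release pattern. A hedged,
kernel-context sketch (not compilable outside a kernel tree; the function
name and the retry policy are illustrative, not from the document)::

    /* Sketch only: assumes <linux/mm.h> context and a located pmd. */
    static int touch_one_pte(struct mm_struct *mm, pmd_t *pmd,
                             unsigned long addr)
    {
            spinlock_t *ptl;
            pte_t *pte;

            /* Takes the split lock that covers just this PTE table. */
            pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
            if (!pte)
                    return -EAGAIN;   /* table went away; caller retries */

            if (pte_present(ptep_get(pte))) {
                    /* inspect or modify the entry here */
            }

            pte_unmap_unlock(pte, ptl);  /* drop lock and temporary mapping */
            return 0;
    }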
page_migration.rst
    2    Page migration
    5    Page migration allows moving the physical location of pages between
    13   The main intent of page migration is to reduce the latency of memory accesses
    17   Page migration allows a process to manually relocate the node on which its
    23   Page migration functions are provided by the numactl package by Andi Kleen
    26   which provides an interface similar to other NUMA functionality for page
    29   proc(5) man page.
    35   manual page migration support. Automatic page migration may be implemented
    38   For example, a NUMA profiler may obtain a log showing frequent off-node
    52   Page migration allows the preservation of the relative location of pages
    [all …]
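For manual migration from userspace, the numactl package wraps the
move_pages(2) system call. A minimal hedged example; the target node 0 and
the 4096-byte page size are assumptions of this demo::

    /* Sketch: migrate one page of the calling process to NUMA node 0.
     * Build: gcc demo.c -lnuma (uses libnuma's move_pages(2) wrapper). */
    #include <numaif.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
            void *buf;
            int node = 0, status = -1;

            if (posix_memalign(&buf, 4096, 4096))
                    return 1;
            ((char *)buf)[0] = 1;             /* fault the page in first */

            void *pages[1] = { buf };
            if (move_pages(0 /* self */, 1, pages, &node, &status,
                           MPOL_MF_MOVE))
                    perror("move_pages");
            else
                    printf("page now on node %d\n", status);
            free(buf);
            return 0;
    }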
page_frags.rst
    2    Page fragments
    5    A page fragment is an arbitrary-length arbitrary-offset area of memory
    6    which resides within a 0 or higher order compound page. Multiple
    7    fragments within that page are individually refcounted, in the page's
    11   simple allocation framework for page fragments. This is used by the
    13   memory for use as either an sk_buff->head, or to be used in the "frags"
    16   In order to make use of the page fragment APIs a backing page fragment
    18   and tracks allows multiple calls to make use of a cached page. The
    22   either a per-cpu limitation, or a per-cpu limitation and forcing interrupts
    34   Many network device drivers use a similar methodology for allocating page
    [all …]
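A hedged kernel-context sketch of the allocation framework the excerpt
introduces, using the page_frag_cache API (struct page_frag_cache,
page_frag_alloc(), page_frag_free()); grab_frag()/drop_frag() are
illustrative wrappers, not part of the API, and the header locations have
moved between kernel versions::

    /* Sketch only: kernel context assumed. */
    static struct page_frag_cache frag_cache;  /* typically per-CPU in real users */

    static void *grab_frag(unsigned int len)
    {
            /* Carves a len-byte fragment out of the cached compound page and
             * refills the cache from the page allocator when it is exhausted. */
            return page_frag_alloc(&frag_cache, len, GFP_ATOMIC);
    }

    static void drop_frag(void *data)
    {
            page_frag_free(data);       /* releases this fragment's reference */
    }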
hwpoison.rst
    9    (``MCA recovery``). This requires the OS to declare a page "poisoned",
    16   High level machine check handler. Handles pages reported by the
    27   Handles page cache pages in various states. The tricky part
    28   here is that we can access any page asynchronously to other VM
    41   The code consists of the high level handler in mm/memory-failure.c,
    42   a new page poison bit and various checks in the VM to handle poisoned
    46   of applications. KVM support requires a recent qemu-kvm release.
    70   Send SIGBUS when the application runs into the corrupted page.
    109  * madvise(MADV_HWPOISON, ....) (as root) - Poison a page in the
    112  * hwpoison-inject module through debugfs ``/sys/kernel/debug/hwpoison/``
    [all …]
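The madvise-based injector mentioned above can be driven by a small
userspace program. A hedged sketch; it must run as root, and it assumes a
kernel built with memory-failure handling::

    /* Sketch: poison one of our own pages, then touch it to draw SIGBUS. */
    #include <sys/mman.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            long psz = sysconf(_SC_PAGESIZE);
            char *p = mmap(NULL, psz, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED)
                    return 1;
            p[0] = 1;                            /* back it with a real page */
            if (madvise(p, psz, MADV_HWPOISON))  /* as root, per the doc */
                    perror("madvise(MADV_HWPOISON)");
            else
                    p[0] = 2;                    /* expected to raise SIGBUS */
            return 0;
    }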
hugetlbfs_reserv.rst
    8    Huge pages as described at Documentation/admin-guide/mm/hugetlbpage.rst are
    10   in a task's address space at page fault time if the VMA indicates huge pages
    11   are to be used. If no huge page exists at page fault time, the task is sent
    12   a SIGBUS and often dies an unhappy death. Shortly after huge page support
    20   available for page faults in that mapping. The description below attempts to
    21   describe how huge page reserve processing is done in the v4.10 kernel.
    34   This is a global (per-hstate) count of reserved huge pages. Reserved
    37   as (``free_huge_pages - resv_huge_pages``).
    50   There is one reserve map for each huge page mapping in the system.
    60   The 'from' and 'to' fields of the file region structure are huge page
    [all …]

/Documentation/virt/kvm/x86/
mmu.rst
    1    .. SPDX-License-Identifier: GPL-2.0
    13   - correctness:
    18   - security:
    21   - performance:
    23   - scaling:
    25   - hardware:
    27   - integration:
    29   so that swapping, page migration, page merging, transparent
    31   - dirty tracking:
    33   and framebuffer-based displays
    [all …]

/Documentation/admin-guide/mm/
concepts.rst
    7    systems from MMU-less microcontrollers to supercomputers. The memory
    41   The physical system memory is divided into page frames, or pages. The
    42   size of each page is architecture specific. Some architectures allow
    43   selection of the page size from several supported values; this
    47   Each physical memory page can be mapped as one or more virtual
    48   pages. These mappings are described by page tables that allow
    50   memory address. The page tables are organized hierarchically.
    52   The tables at the lowest level of the hierarchy contain physical
    55   levels. The pointer to the top level page table resides in a
    57   register to access the top level page table. The high bits of the
    [all …]
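The scheme described here (high bits of the virtual address indexing
successive table levels) can be illustrated by decomposing an address. A
sketch assuming the common x86-64 4-level layout of 9 index bits per level
plus 12 offset bits; that layout is an example, not something the excerpt
specifies::

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t va = 0x00007f1234567abcULL;    /* example virtual address */

        unsigned off = va & 0xfff;              /* bits 0-11:  byte within page */
        unsigned l1  = (va >> 12) & 0x1ff;      /* bits 12-20: lowest table     */
        unsigned l2  = (va >> 21) & 0x1ff;      /* bits 21-29                   */
        unsigned l3  = (va >> 30) & 0x1ff;      /* bits 30-38                   */
        unsigned l4  = (va >> 39) & 0x1ff;      /* bits 39-47: top-level table  */

        printf("top=%u l3=%u l2=%u l1=%u offset=%#x\n", l4, l3, l2, l1, off);
        return 0;
    }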
transhuge.rst
    12   that supports the automatic promotion and demotion of page sizes and
    19   in the examples below we presume that the basic page size is 4K and
    20   the huge page size is 2M, although the actual numbers may vary
    26   requiring larger clear-page copy-page in page faults which is a
    28   single page fault for each 2M virtual region touched by userland (so
    48   Modern kernels support "multi-size THP" (mTHP), which introduces the
    49   ability to allocate memory in blocks that are bigger than a base page
    50   but smaller than traditional PMD-size (as described above), in
    51   increments of a power-of-2 number of pages. mTHP can back anonymous
    53   PTE-mapped, but in many cases can still provide similar benefits to
    [all …]
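The "single page fault for each 2M virtual region" point is just the ratio
of the two page sizes; a quick check under the excerpt's 4K/2M presumption::

    #include <stdio.h>

    int main(void)
    {
        unsigned long base = 4UL << 10;   /* 4K base page  */
        unsigned long huge = 2UL << 20;   /* 2M huge page  */
        unsigned long span = 1UL << 30;   /* 1G of virtual memory touched */

        printf("faults with 4K pages: %lu\n", span / base);  /* 262144 */
        printf("faults with 2M THP:   %lu\n", span / huge);  /* 512    */
        printf("ratio: %lux fewer\n", huge / base);          /* 512x   */
        return 0;
    }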

/Documentation/arch/powerpc/
vmemmap_dedup.rst
    1    .. SPDX-License-Identifier: GPL-2.0
    7    The device-dax interface uses the tail deduplication technique explained in
    11   with a 64K page size, only the devdax namespace with 1G alignment uses vmemmap
    14   With 2M PMD level mapping, we require 32 struct pages and a single 64K vmemmap
    15   page can contain 1024 struct pages (64K/sizeof(struct page)). Hence there is no
    18   With 1G PUD level mapping, we require 16384 struct pages and a single 64K
    19   vmemmap page can contain 1024 struct pages (64K/sizeof(struct page)). Hence we
    20   require 16 64K pages in vmemmap to map the struct page for 1G PUD level mapping.
    22   Here's how things look on device-dax after the sections are populated::
    23   +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+
    [all …]
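The 32, 16384, and 16 figures reproduce directly from the sizes quoted,
assuming sizeof(struct page) == 64 (implied by the doc's 64K/1024 ratio)::

    #include <stdio.h>

    int main(void)
    {
        unsigned long base_64k = 64UL << 10;             /* 64K base page */
        unsigned long per_vmemmap_page = base_64k / 64;  /* 1024 struct pages */

        unsigned long pmd_2m = (2UL << 20) / base_64k;   /* 32 struct pages    */
        unsigned long pud_1g = (1UL << 30) / base_64k;   /* 16384 struct pages */

        printf("struct pages per 64K vmemmap page: %lu\n", per_vmemmap_page);
        printf("2M PMD needs %lu struct pages\n", pmd_2m);
        printf("1G PUD needs %lu struct pages -> %lu vmemmap pages\n",
               pud_1g, pud_1g / per_vmemmap_page);       /* 16 */
        return 0;
    }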

/Documentation/devicetree/bindings/perf/
marvell-cn10k-tad.yaml
    1    # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
    3    ---
    4    $id: http://devicetree.org/schemas/perf/marvell-cn10k-tad.yaml#
    5    $schema: http://devicetree.org/meta-schemas/core.yaml#
    7    title: Marvell CN10K LLC-TAD performance monitor
    10   - Bhaskara Budiredla <bbudiredla@marvell.com>
    13   The Tag-and-Data units (TADs) maintain coherence and contain CN10K
    14   shared on-chip last level cache (LLC). The tad pmu measures the
    15   performance of last-level cache. Each tad pmu supports up to eight
    23   const: marvell,cn10k-tad-pmu
    [all …]

/Documentation/arch/arm64/
hugetlbpage.rst
    8    address translations. The benefit depends on both -
    10   - the size of hugepages
    11   - size of entries supported by the TLBs
    15   1) Block mappings at the pud/pmd level
    16   --------------------------------------
    18   These are regular hugepages where a pmd or a pud page table entry points to a
    20   mappings reduce the depth of page table walk needed to translate hugepage
    24   ---------------------------
    31   pte (last) level. The number of supported contiguous entries varies by page size
    32   and level of the page table.
    [all …]

/Documentation/arch/x86/
pti.rst
    1    .. SPDX-License-Identifier: GPL-2.0
    4    Page Table Isolation (PTI)
    10   Page Table Isolation (pti, previously known as KAISER [1]_) is a
    15   page tables for use only when running userspace applications. When
    17   page tables are switched to the full "kernel" copy. When the system
    20   The userspace page tables contain only a minimal amount of kernel
    27   This approach helps to ensure that side-channel attacks leveraging
    30   time. Once enabled at compile-time, it can be disabled at boot with
    31   the 'nopti' or 'pti=' kernel parameters (see kernel-parameters.txt).
    33   Page Table Management
    [all …]
intel_txt.rst
    6    Technology (Intel(R) TXT), defines platform-level enhancements that
    13   - Provides dynamic root of trust for measurement (DRTM)
    14   - Data protection in case of improper shutdown
    15   - Measurement and verification of launched environment
    18   non-vPro systems. It is currently available on desktop systems
    30   - LinuxTAG 2008:
    31   http://www.linuxtag.org/2008/en/conf/events/vp-donnerstag.html
    33   - TRUST2008:
    34   http://www.trust-conference.eu/downloads/Keynote-Speakers/
    35   3_David-Grawrock_The-Front-Door-of-Trusted-Computing.pdf
    [all …]

/Documentation/virt/hyperv/
overview.rst
    1    .. SPDX-License-Identifier: GPL-2.0
    6    enlightened guest on Microsoft's Hyper-V hypervisor. Hyper-V
    7    consists primarily of a bare-metal hypervisor plus a virtual machine
    10   partitions. In this documentation, references to Hyper-V usually
    15   Hyper-V runs on x86/x64 and arm64 architectures, and Linux guests
    16   are supported on both. The functionality and behavior of Hyper-V is
    19   Linux Guest Communication with Hyper-V
    20   --------------------------------------
    21   Linux guests communicate with Hyper-V in four different ways:
    24   some guest actions trap to Hyper-V. Hyper-V emulates the action and
    [all …]

/Documentation/arch/x86/x86_64/
5level-paging.rst
    1    .. SPDX-License-Identifier: GPL-2.0
    4    5-level paging
    9    Original x86-64 was limited by 4-level paging to 256 TiB of virtual address
    14   5-level paging. It is a straightforward extension of the current page
    20   QEMU 2.9 and later support 5-level paging.
    22   Virtual memory layout for 5-level paging is described in
    26   Enabling 5-level paging
    30   A kernel with CONFIG_X86_5LEVEL=y is still able to boot on 4-level hardware.
    31   In this case an additional page table level -- p4d -- will be folded at
    34   User-space and large virtual address space
    [all …]
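The 256 TiB limit and the gain from the fifth level follow from the bit
widths: 12 offset bits plus 9 index bits per page table level. A quick
check::

    #include <stdio.h>

    int main(void)
    {
        unsigned bits4 = 12 + 4 * 9;   /* 4 levels: 48-bit virtual addresses */
        unsigned bits5 = 12 + 5 * 9;   /* 5 levels: 57-bit virtual addresses */

        printf("4-level: %u-bit VA = %llu TiB\n", bits4,
               (1ULL << bits4) >> 40);            /* 256 TiB */
        printf("5-level: %u-bit VA = %llu PiB\n", bits5,
               (1ULL << bits5) >> 50);            /* 128 PiB */
        return 0;
    }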

/Documentation/ABI/testing/
sysfs-bus-event_source-devices-iommu
    5    Description: Read-only. Attribute group to describe the magic bits
    9    ABI/testing/sysfs-bus-event_source-devices-format).
    14   are listed below (See the VT-d Spec 4.0 for possible
    17   event = "config:0-27" - event ID
    18   event_group = "config:28-31" - event group ID
    20   filter_requester_en = "config1:0" - Enable Requester ID filter
    21   filter_domain_en = "config1:1" - Enable Domain ID filter
    22   filter_pasid_en = "config1:2" - Enable PASID filter
    23   filter_ats_en = "config1:3" - Enable Address Type filter
    24   filter_page_table_en = "config1:4" - Enable Page Table Level filter
    [all …]
sysfs-bus-pci-devices-cciss
    5    Description: Displays the SCSI INQUIRY page 0 model for logical drive
    12   Description: Displays the SCSI INQUIRY page 0 revision for logical
    19   Description: Displays the SCSI INQUIRY page 83 serial number for logical
    26   Description: Displays the SCSI INQUIRY page 0 vendor for logical drive
    46   Description: Displays the 8-byte LUN ID used to address logical
    53   Description: Displays the RAID level of logical drive Y of

/Documentation/gpu/rfc/
i915_vm_bind.rst
    18   User has to opt-in for VM_BIND mode of binding for an address space (VM)
    34   ------------------------
    42   -------------------------------
    52   "dma-buf: Add an API for exporting sync files"
    68   be using the i915_vma active reference tracking. It will instead use dma-resv
    78   -------------------
    79   By default, BOs can be mapped on multiple VMs and can also be dma-buf
    82   dma-resv fence list of all shared BOs mapped on the VM.
    87   the VM they are private to and can't be dma-buf exported.
    88   All private BOs of a VM share the dma-resv object. Hence during each execbuf
    [all …]

/Documentation/gpu/amdgpu/display/
display-contributing.rst
    4    AMDGPU - Display Contributions
    10   This page summarizes some of the issues you can help with; keep in mind that
    11   this is a static page, and it is always a good idea to try to reach developers
    12   in the amdgfx or some of the maintainers. Finally, this page follows the DRM
    21   - https://gitlab.freedesktop.org/drm/amd
    27   Level: diverse
    37   issue; it is necessary to analyze case-by-case.
    39   Level: diverse
    41   .. _IGT: https://gitlab.freedesktop.org/drm/igt-gpu-tools
    47   ------------------------
    [all …]

/Documentation/netlabel/
draft-ietf-cipso-ipsecurity-01.txt
    12   This Internet Draft provides the high level specification for a Commercial
    27   Please check the I-D abstract listing contained in each Internet Draft
    46   mandatory access controls and multi-level security. These systems are
    62   Internet Draft, Expires 15 Jan 93 [PAGE 1]
    88   once in a datagram. All multi-octet fields in the option are defined to be
    91   +----------+----------+------//------+-----------//---------+
    93   +----------+----------+------//------+-----------//---------+
    124  corresponding ASCII representations. Non-related groups of systems may
    128  Internet Draft, Expires 15 Jan 93 [PAGE 2]
    138  number 1 to represent that same security level. The DOI identifier is used
    [all …]

/Documentation/admin-guide/blockdev/
zram.rst
    2    zram: Compressed RAM-based block devices
    8    The zram module creates RAM-based block devices named /dev/zram<id>
    20   There are several ways to configure and manage zram device(-s):
    23   b) using zramctl utility, provided by util-linux (util-linux@vger.kernel.org).
    28   In order to get a better idea about zramctl please consult util-linux
    29   documentation, zramctl man-page or `zramctl --help`. Please be informed
    30   that zram maintainers do not develop/maintain util-linux or zramctl, should
    31   you have any questions please contact util-linux@vger.kernel.org
    45   -EBUSY an attempt to modify an attribute that cannot be changed once
    47   -ENOMEM zram was not able to allocate enough memory to fulfil your
    [all …]

/Documentation/arch/sparc/oradax/
oracle-dax.rst
    10   and data formats. A user space library provides high level services
    11   and translates these into low level commands which are then passed
    25   the accompanying document, dax-hv-api.txt, which is a plain text
    27   Specification" version 3.0.20+15, dated 2017-09-25.
    30   High Level Overview
    51   done at the user level, which results in almost zero latency between
    60   architecture, as there is an additional level of memory virtualization
    61   present. This intermediate level is called "real" memory, and the
    86   made accessible via mmap(), and are read-only for the application.
    109  equal to the number of bytes given in the call. Otherwise -1 is
    [all …]

/Documentation/admin-guide/cgroup-v1/
memory.rst
    18   we call it "memory cgroup". When you see git-log and source code, you'll
    30   Memory-hungry applications can be isolated and limited to a smaller
    42   Current Status: linux-2.6.34-mmotm (development version of 2010/April)
    46   - accounting anonymous pages, file caches, swap caches usage and limiting them.
    47   - pages are linked to per-memcg LRU exclusively, and there is no global LRU.
    48   - optionally, memory+swap usage can be accounted and limited.
    49   - hierarchical accounting
    50   - soft limit
    51   - moving (recharging) account at moving a task is selectable.
    52   - usage threshold notifier
    [all …]