
Searched refs:PTE (Results 1 – 25 of 34) sorted by relevance

/kernel/linux/linux-5.10/Documentation/vm/
arch_pgtable_helpers.rst
17 PTE Page Table Helpers
21 | pte_same | Tests whether both PTE entries are the same |
23 | pte_bad | Tests a non-table mapped PTE |
25 | pte_present | Tests a valid mapped PTE |
27 | pte_young | Tests a young PTE |
29 | pte_dirty | Tests a dirty PTE |
31 | pte_write | Tests a writable PTE |
33 | pte_special | Tests a special PTE |
35 | pte_protnone | Tests a PROT_NONE PTE |
37 | pte_devmap | Tests a ZONE_DEVICE mapped PTE |
[all …]
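The helpers tabulated above are boolean predicates on a pte_t value. A minimal sketch of how they are typically combined (in-tree kernel code only, not stand-alone; the wrapper function name is purely illustrative):

```c
#include <linux/pgtable.h>

/* Illustrative only: each helper just tests flag bits in the PTE. */
static bool pte_is_dirty_writable(pte_t pte)
{
	if (!pte_present(pte))		/* valid, mapped PTE? */
		return false;
	if (pte_protnone(pte))		/* PROT_NONE: present but inaccessible */
		return false;
	return pte_write(pte) && pte_dirty(pte);
}
```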
split_page_table_lock.rst
13 access to the table. At the moment we use split lock for PTE and PMD
19 maps pte and takes PTE table lock, returns pointer to the taken
22 unlocks and unmaps PTE table;
24 allocates PTE table if needed and take the lock, returns pointer
27 returns pointer to PTE table lock;
33 Split page table lock for PTE tables is enabled compile-time if
37 Split page table lock for PMD tables is enabled, if it's enabled for PTE
57 There's no need in special enabling of PTE split page table lock: everything
59 must be called on PTE table allocation / freeing.
97 The spinlock_t allocated in pgtable_pte_page_ctor() for PTE table and in
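split_page_table_lock.rst is describing the pte_offset_map_lock()/pte_unmap_unlock() pair (plus pte_alloc_map_lock() and pte_lockptr()). A hedged sketch of the usual call pattern, with error handling omitted; this builds only against in-tree kernel headers:

```c
#include <linux/mm.h>

/* Sketch: take the (possibly split) PTE table lock around one PTE. */
static void touch_one_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
{
	spinlock_t *ptl;
	pte_t *pte;

	/* Maps the PTE page and takes the PTE table lock. */
	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	if (pte_present(*pte)) {
		/* ... read or update *pte while the table is locked ... */
	}
	/* Unlocks and unmaps the PTE table again. */
	pte_unmap_unlock(pte, ptl);
}
```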
remap_file_pages.rst
18 PTE for this purpose. PTE flags are scarce resource especially on some CPU
transhuge.rst
125 - map/unmap of the pages with PTE entry increment/decrement ->_mapcount
148 File pages get PG_double_map set on the first map of the page with PTE and
highmem.rst
141 advantage is that PAE has more PTE bits and can provide advanced features
hmm.rst
344 of copying a page of zeros. Valid PTE entries to system memory or
347 the LRU), unmapped from the process, and a special migration PTE is
348 inserted in place of the original PTE.
zswap.rst
89 During a page fault on a PTE that is a swap entry, frontswap calls the zswap
/kernel/linux/linux-5.10/arch/arc/mm/
tlbex.S
201 ; OUT: r0 = PTE faulted on, r1 = ptr to PTE, r2 = Faulting V-address
222 bnz.d 2f ; YES: PGD == PMD has THP PTE: stop pgd walk
228 ; Get the PTE entry: The idea is
242 ld.aw r0, [r1, r0] ; r0: PTE (lower word only for PAE40)
243 ; r1: PTE ptr
250 ; Convert Linux PTE entry into TLB entry
251 ; A one-word PTE entry is programmed as two-word TLB Entry [PD0:PD1] in mmu
252 ; (for PAE40, two-words PTE, while three-word TLB Entry [PD0:PD1:PD1HI])
253 ; IN: r0 = PTE, r1 = ptr to PTE
261 and r3, r0, PTE_BITS_NON_RWX_IN_PD1 ; Extract PFN+cache bits from PTE
[all …]
/kernel/linux/linux-5.10/arch/sparc/include/asm/
pgalloc_64.h
68 #define pmd_populate_kernel(MM, PMD, PTE) pmd_set(MM, PMD, PTE) argument
69 #define pmd_populate(MM, PMD, PTE) pmd_set(MM, PMD, PTE) argument
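These sparc definitions alias the generic populate helpers straight to pmd_set(). For context, a hedged sketch of the generic call-site pattern (modeled loosely on __pte_alloc_kernel() in mm/memory.c; in-tree kernel code, with the barriers of the real code omitted):

```c
#include <linux/mm.h>
#include <asm/pgalloc.h>

/* Allocate a kernel PTE table and hook it into the given PMD entry. */
static int hook_kernel_pte_table(struct mm_struct *mm, pmd_t *pmd)
{
	pte_t *new = pte_alloc_one_kernel(mm);

	if (!new)
		return -ENOMEM;

	spin_lock(&mm->page_table_lock);
	if (likely(pmd_none(*pmd)))
		pmd_populate_kernel(mm, pmd, new);	/* sparc: pmd_set() */
	else
		pte_free_kernel(mm, new);		/* someone else won the race */
	spin_unlock(&mm->page_table_lock);
	return 0;
}
```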
/kernel/linux/linux-5.10/Documentation/admin-guide/mm/
soft-dirty.rst
7 The soft-dirty is a bit on a PTE which helps to track which pages a task
20 64-bit qword is the soft-dirty one. If set, the respective PTE was
27 the soft-dirty bit on the respective PTE.
33 bits on the PTE.
38 the same place. When unmap is called, the kernel internally clears PTE values
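The soft-dirty bit is exported as bit 55 of each 64-bit /proc/<pid>/pagemap entry, and writing "4" to /proc/<pid>/clear_refs resets it for the whole task. A small user-space sketch (the helper name below is made up) that checks the bit for one address:

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Returns 1 if the PTE mapping addr is soft-dirty, 0 if not, -1 on error. */
static int pte_soft_dirty(const void *addr)
{
	uint64_t entry;
	long page = sysconf(_SC_PAGESIZE);
	int fd = open("/proc/self/pagemap", O_RDONLY);
	ssize_t n;

	if (fd < 0)
		return -1;
	n = pread(fd, &entry, sizeof(entry),
		  (uintptr_t)addr / page * sizeof(entry));
	close(fd);
	if (n != sizeof(entry))
		return -1;
	return (int)((entry >> 55) & 1);	/* bit 55: PTE is soft-dirty */
}

int main(void)
{
	char *buf = malloc(4096);

	buf[0] = 1;	/* the write marks the backing PTE soft-dirty */
	printf("soft-dirty: %d\n", pte_soft_dirty(buf));
	free(buf);
	return 0;
}
```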
idle_page_tracking.rst
111 more page flag is introduced, the Young flag. When the PTE Accessed bit is
113 is set on the page. The reclaimer treats the Young flag as an extra PTE
pagemap.rst
130 a PTE. To make sure the flag is up-to-date one has to read
/kernel/linux/linux-5.10/Documentation/translations/zh_CN/arm64/
hugetlbpage.rst
40 - CONT PTE PMD CONT PMD PUD
/kernel/linux/linux-5.10/Documentation/virt/kvm/
locking.rst
182 page (via kvm_mmu_notifier_clear_flush_young), it marks the PTE as not-present
183 by clearing the RWX bits in the PTE and storing the original R & X bits in
185 PTE (using the ignored bit 62). When the VM tries to access the page later on,
187 to atomically restore the PTE to a Present state. The W bit is not saved when
188 the PTE is marked for access tracking and during restoration to the Present
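locking.rst is describing KVM's access-tracked shadow PTEs. A purely conceptual, self-contained sketch of the bit trick; the masks and shift below are placeholders, not KVM's real SPTE constants (those live in arch/x86/kvm/mmu/):

```c
#include <stdint.h>

#define SPTE_R			0x1ULL	/* placeholder permission bits */
#define SPTE_W			0x2ULL
#define SPTE_X			0x4ULL
#define SPTE_SAVED_SHIFT	54	/* placeholder "ignored bits" area */

/* Clear RWX so the PTE is not-present; stash R and X (W is not saved). */
static inline uint64_t mark_for_access_track(uint64_t spte)
{
	uint64_t saved = (spte & (SPTE_R | SPTE_X)) << SPTE_SAVED_SHIFT;

	return (spte & ~(SPTE_R | SPTE_W | SPTE_X)) | saved;
}

/* Restore on the next access (done with an atomic cmpxchg in the real code). */
static inline uint64_t restore_access_tracked(uint64_t spte)
{
	uint64_t saved = (spte >> SPTE_SAVED_SHIFT) & (SPTE_R | SPTE_X);

	return (spte & ~((SPTE_R | SPTE_X) << SPTE_SAVED_SHIFT)) | saved;
}
```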
hypercalls.rst
62 :Purpose: Support MMU operations such as writing to PTE,
/kernel/linux/linux-5.10/arch/microblaze/include/asm/
mmu.h
36 } PTE; typedef
/kernel/linux/linux-5.10/Documentation/admin-guide/hw-vuln/
l1tf.rst
47 table entry (PTE) has the Present bit cleared or other reserved bits set,
48 then speculative execution ignores the invalid PTE and loads the referenced
50 by the address bits in the PTE was still present and accessible.
72 PTE which is marked non present. This allows a malicious user space
75 encoded in the address bits of the PTE, thus making attacks more
78 The Linux kernel contains a mitigation for this attack vector, PTE
92 PTE inversion mitigation for L1TF, to attack physical host memory.
132 'Mitigation: PTE Inversion' The host protection is active
136 information is appended to the 'Mitigation: PTE Inversion' part:
582 - PTE inversion to protect against malicious user space. This is done
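"PTE inversion" here means that when a PTE is made non-present but still carries physical-address bits (swap entries, PROT_NONE pages), those bits are stored inverted so a speculative L1TF load hits unpopulated memory. A conceptual, self-contained sketch; the mask is a placeholder, and the real x86 helpers are the __pte_needs_invert()/protnone_mask() machinery:

```c
#include <stdint.h>

#define PFN_FIELD_MASK	0x000ffffffffff000ULL	/* placeholder PFN bit field */

/* Store the address bits of a non-present PTE inverted. */
static inline uint64_t encode_nonpresent_pfn(uint64_t pte, uint64_t pfn_bits)
{
	return (pte & ~PFN_FIELD_MASK) | (~pfn_bits & PFN_FIELD_MASK);
}

/* Invert them back when reading the PFN out again. */
static inline uint64_t decode_nonpresent_pfn(uint64_t pte)
{
	return ~pte & PFN_FIELD_MASK;
}
```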
/kernel/liteos_m/arch/risc-v/nuclei/gcc/nmsis/Core/Include/
riscv_encoding.h
298 #define PTE_TABLE(PTE) (((PTE) & (PTE_V | PTE_R | PTE_W | PTE_X)) == PTE_V) argument
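A self-contained illustration of the PTE_TABLE() macro above: a RISC-V PTE points to a next-level table when V is set and R/W/X are all clear (bit values per the RISC-V privileged spec, matching the header's PTE_* definitions):

```c
#include <stdint.h>
#include <stdio.h>

#define PTE_V 0x001UL	/* valid */
#define PTE_R 0x002UL	/* readable */
#define PTE_W 0x004UL	/* writable */
#define PTE_X 0x008UL	/* executable */
#define PTE_TABLE(PTE) (((PTE) & (PTE_V | PTE_R | PTE_W | PTE_X)) == PTE_V)

int main(void)
{
	uint64_t leaf  = PTE_V | PTE_R | PTE_X;	/* read+execute leaf page */
	uint64_t table = PTE_V;			/* valid, non-leaf: next level */

	printf("leaf: %d, table: %d\n", PTE_TABLE(leaf), PTE_TABLE(table));
	return 0;
}
```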
/kernel/linux/linux-5.10/Documentation/x86/
intel-iommu.rst
106 DMAR:[fault reason 05] PTE Write access is not set
108 DMAR:[fault reason 05] PTE Write access is not set
/kernel/linux/linux-5.10/arch/xtensa/
Kconfig.debug
8 This check can spot missing TLB invalidation/wrong PTE permissions/
/kernel/linux/linux-5.10/Documentation/arm64/
hugetlbpage.rst
38 - CONT PTE PMD CONT PMD PUD
/kernel/linux/linux-5.10/arch/arm/mm/
proc-macros.S
111 #error PTE shared bit mismatch
116 #error Invalid Linux PTE bit settings
/kernel/linux/linux-5.10/arch/nds32/kernel/
ex-entry.S
93 .long do_page_fault !PTE not present
/kernel/linux/linux-5.10/arch/sparc/kernel/
sun4v_tlb_miss.S
82 mov %g3, %o2 ! PTE
125 mov %g3, %o2 ! PTE
/kernel/linux/linux-5.10/Documentation/powerpc/
papr_hcalls.rst
178 an active PTE entry to the SCM block being bound.
188 HCALL can fail if the Guest has an active PTE entry to the SCM block being
