
Lines Matching +full:user +full:- +full:level

20 * Signed-off-by: Richard Fellner <richard.fellner@student.tugraz.at>
21 * Signed-off-by: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
22 * Signed-off-by: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
23 * Signed-off-by: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
52 #define pr_fmt(fmt) "Kernel/User page tables isolation: " fmt
60 * Define the page-table levels we clone for user-space on 32
141 * Top-level entries added to init_mm's usermode pgd after boot in __pti_set_user_pgtbl()
148 * The user page tables get the full PGD, accessible from in __pti_set_user_pgtbl()
151 kernel_to_user_pgdp(pgdp)->pgd = pgd.pgd; in __pti_set_user_pgtbl()
154 * If this is normal user memory, make it NX in the kernel in __pti_set_user_pgtbl()
157 * instead of allowing user code to execute with the wrong CR3. in __pti_set_user_pgtbl()
160 * - _PAGE_USER is not set. This could be an executable in __pti_set_user_pgtbl()
163 * - we don't have NX support in __pti_set_user_pgtbl()
164 * - we're clearing the PGD (i.e. the new pgd is not present). in __pti_set_user_pgtbl()
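The conditions above describe when `__pti_set_user_pgtbl()` refrains from setting NX on the kernel copy of an entry. A minimal sketch of that decision, using illustrative flag values (the real `_PAGE_USER`, `_PAGE_PRESENT`, and `_PAGE_NX` bits live in the kernel's `pgtable_types.h`; `pti_kernel_pgd()` is a hypothetical name, not kernel API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative bit positions only; not the kernel's definitions. */
#define PAGE_PRESENT (1ULL << 0)
#define PAGE_USER    (1ULL << 2)
#define PAGE_NX      (1ULL << 63)

/*
 * Make a PGD entry non-executable in the kernel copy of the page
 * tables, but only if it maps present, user-accessible memory and
 * the CPU supports NX -- otherwise leave the entry untouched.
 */
static uint64_t pti_kernel_pgd(uint64_t pgd, bool have_nx)
{
	if ((pgd & PAGE_USER) && (pgd & PAGE_PRESENT) && have_nx)
		pgd |= PAGE_NX;	/* fault rather than execute with the wrong CR3 */
	return pgd;
}
```

The effect is that a stray jump into user memory while running on the kernel CR3 takes a page fault instead of executing user-controlled code.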
175 * Walk the user copy of the page tables (optionally) trying to allocate
186 WARN_ONCE(1, "attempt to walk user address\n"); in pti_user_pagetable_walk_p4d()
203 * Walk the user copy of the page tables (optionally) trying to allocate
228 /* The user page tables do not use large mappings: */ in pti_user_pagetable_walk_pmd()
249 * user/shadow page tables. It is never used for userspace data.
279 WARN_ONCE(1, "attempt to walk to user pte\n"); in pti_user_pagetable_walk_pte()
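The `pti_user_pagetable_walk_*()` helpers above all share one shape: descend the user copy of the tables and allocate any missing intermediate level on the way down. A toy two-level walk-and-allocate under that assumption (all names here are illustrative, not kernel API):

```c
#include <stdlib.h>

#define ENTRIES 512	/* entries per table on x86-64 */

struct table {
	struct table *next[ENTRIES];
};

/*
 * Descend one level from 'top' at index 'idx', allocating the
 * next-level table if it is not present yet. Returns NULL on
 * allocation failure, mirroring how the kernel walkers bail out.
 */
static struct table *walk_alloc(struct table *top, unsigned idx)
{
	if (!top->next[idx]) {
		top->next[idx] = calloc(1, sizeof(struct table));
		if (!top->next[idx])
			return NULL;
	}
	return top->next[idx];
}
```

A second walk to the same index finds the already-allocated table and returns it unchanged, which is what lets the kernel call these walkers repeatedly while cloning.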
289 unsigned int level; in pti_setup_vsyscall() local
291 pte = lookup_address(VSYSCALL_ADDR, &level); in pti_setup_vsyscall()
292 if (!pte || WARN_ON(level != PG_LEVEL_4K) || pte_none(*pte)) in pti_setup_vsyscall()
313 enum pti_clone_level level) in pti_clone_pgtable() argument
353 if (pmd_large(*pmd) || level == PTI_CLONE_PMD) { in pti_clone_pgtable()
361 * called on well-known addresses anyway, so a non- in pti_clone_pgtable()
369 * the user and kernel page tables. It is effectively in pti_clone_pgtable()
381 * tables will share the last-level page tables of this in pti_clone_pgtable()
388 } else if (level == PTI_CLONE_PTE) { in pti_clone_pgtable()
390 /* Walk the page-table down to the pte level */ in pti_clone_pgtable()
401 /* Allocate PTE in the user page-table */ in pti_clone_pgtable()
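The fragments from `pti_clone_pgtable()` above show the two granularities it clones at: whole 2 MiB PMD entries, or individual 4 KiB PTEs when `level == PTI_CLONE_PTE`. A sketch of the address loop under standard x86-64 page sizes (this only counts the entries a clone would touch; the real function copies `pmd_t`/`pte_t` entries into the user tables):

```c
#include <stdint.h>

#define PMD_SIZE  (2UL << 20)	/* 2 MiB, one PMD entry */
#define PAGE_SIZE (4UL << 10)	/* 4 KiB, one PTE */

enum clone_level { CLONE_PMD, CLONE_PTE };	/* illustrative names */

/* Walk [start, end) in steps of the chosen granularity. */
static unsigned long clone_entries(uint64_t start, uint64_t end,
				   enum clone_level level)
{
	uint64_t step = (level == CLONE_PMD) ? PMD_SIZE : PAGE_SIZE;
	unsigned long n = 0;

	for (uint64_t addr = start; addr < end; addr += step)
		n++;	/* the kernel would copy one entry here */
	return n;
}
```

Cloning at the PMD level is cheaper and shares more structure, but, as the CPU_ENTRY_AREA comment later in the file notes, it can pull neighbouring kernel mappings into the user tables; PTE-level cloning trades cost for precision.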
423 * Clone a single p4d (i.e. a top-level entry on 4-level systems and a
424 * next-level entry on 5-level systems).
441 * Clone the CPU_ENTRY_AREA into the user space visible page table.
453 * address space into the user page-tables, making PTI useless. So clone
454 * the page-table on the PMD level to prevent that.
468 * Clone the ESPFIX P4D into the user space visible page table
491 * to Meltdown-style attacks which make it trivial to find gadgets or
524 * data structures. Keep the kernel image non-global in in pti_kernel_image_global_ok()
535 * This is the only user for these and it is not arch-generic
542 * For some configurations, map all of kernel text into the user page
543 * tables. This reduces TLB misses, especially on non-PCID systems.
559 pr_debug("mapping partial kernel image into user address space\n"); in pti_clone_kernel_text()
571 * the last level for areas that are not huge-page-aligned. in pti_clone_kernel_text()
574 /* Set the global bit for normal non-__init kernel text: */ in pti_clone_kernel_text()
575 set_memory_global(start, (end_global - start) >> PAGE_SHIFT); in pti_clone_kernel_text()
594 set_memory_nonglobal(start, (end - start) >> PAGE_SHIFT); in pti_set_kernel_image_nonglobal()
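Both `set_memory_global()` and `set_memory_nonglobal()` calls above take a page count derived from a byte range; for page-aligned bounds that is just the range size shifted down by `PAGE_SHIFT`. A one-line sketch of that computation (x86 uses `PAGE_SHIFT` = 12, i.e. 4 KiB pages):

```c
#include <stdint.h>

#define PAGE_SHIFT 12	/* 4 KiB pages on x86 */

/* Number of pages covered by a page-aligned [start, end) range. */
static unsigned long pages_in_range(uint64_t start, uint64_t end)
{
	return (end - start) >> PAGE_SHIFT;
}
```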
609 * We check for X86_FEATURE_PCID here. But the init-code will in pti_init()
620 printk(KERN_WARNING "** You are using 32-bit PTI on a 64-bit PCID-capable CPU. **\n"); in pti_init()
622 printk(KERN_WARNING "** switch to a 64-bit kernel! **\n"); in pti_init()
640 * Finalize the kernel mappings in the userspace page-table. Some of the
644 * userspace page-table.