
Lines Matching full:large

46 * Serialize cpa() (for !DEBUG_PAGEALLOC which uses large identity mappings)
47 * using cpa_lock. So that we don't allow any other cpu, with stale large tlb
49 * splitting a large page entry along with changing the attribute.
183 * large page flushing. in __cpa_flush_all()
330 * kernel text mappings for the large page aligned text, rodata sections in static_protections()
334 * This will preserve the large page mappings for kernel text/data in static_protections()
345 * No need to work hard to preserve large page mappings in this in static_protections()
418 * Note: We return pud and pmd either when the entry is marked large
592 * Calculate the number of pages, which fit into this large in try_preserve_large_page()
613 * req_prot is in format of 4k pages. It must be converted to large in try_preserve_large_page()
623 * old_pfn points to the large page base pfn. So we need in try_preserve_large_page()
656 * change the large page in one go. We request a split, when in try_preserve_large_page()
658 * smaller than the number of pages in the large page. Note in try_preserve_large_page()
660 * the number of pages in the large page. in try_preserve_large_page()
763 * a large TLB mixed with 4K TLBs while instruction fetches are in __split_large_page()
1273 * Check, whether we can keep the large page intact in __change_page_attr()
1278 * When the range fits into the existing large page, in __change_page_attr()
1286 * We have to split the large page: in __change_page_attr()
1291 * Do a global flush tlb after splitting the large page in __change_page_attr()
1295 * "The TLBs may contain both ordinary and large-page in __change_page_attr()
1306 * just split large page entry. in __change_page_attr()
1385 * Store the remaining nr of pages for the large page in __change_page_attr_set_clr()
1389 /* for array changes, we can't use large page */ in __change_page_attr_set_clr()
1409 * CPA operation. Either a large page has been in __change_page_attr_set_clr()
2042 * we may need to break large pages for 64-bit kernel text in __set_pages_p()
2061 * we may need to break large pages for 64-bit kernel text in __set_pages_np()
2079 * Large pages for identity mappings are not used at boot time in __kernel_map_pages()
2080 * and hence no memory allocations during large page split. in __kernel_map_pages()