
Searched refs:huge (Results 1 – 25 of 98) sorted by relevance


/kernel/linux/linux-5.10/Documentation/admin-guide/mm/
hugetlbpage.rst
21 Users can use the huge page support in Linux kernel by either using the mmap
30 persistent hugetlb pages in the kernel's huge page pool. It also displays
31 default huge page size and information about the number of free, reserved
32 and surplus huge pages in the pool of huge pages of default size.
33 The huge page size is needed for generating the proper alignment and
34 size of the arguments to system calls that map huge page regions.
48 is the size of the pool of huge pages.
50 is the number of huge pages in the pool that are not yet
53 is short for "reserved," and is the number of huge pages for
55 but no allocation has yet been made. Reserved huge pages
[all …]
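The hugetlbpage.rst excerpt above notes that applications reach the huge page pool either through hugetlbfs or directly through the mmap system call. Purely as a hedged illustration (not taken from the kernel tree; the 256 MB size is an arbitrary choice, and the call fails with ENOMEM unless huge pages have been reserved, e.g. via /proc/sys/vm/nr_hugepages), a minimal MAP_HUGETLB mapping in C could look like this:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    #define LENGTH (256UL * 1024 * 1024)   /* arbitrary example size */

    int main(void)
    {
            /* Ask for an anonymous mapping backed by huge pages. */
            void *addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
            if (addr == MAP_FAILED) {
                    perror("mmap");
                    return EXIT_FAILURE;
            }

            /* Touch the region so the huge pages are actually faulted in. */
            ((char *)addr)[0] = 1;

            munmap(addr, LENGTH);
            return 0;
    }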
transhuge.rst
13 using huge pages for the backing of virtual memory with huge pages
22 the huge page size is 2M, although the actual numbers may vary
53 collapses sequences of basic pages into huge pages.
151 By default kernel tries to use huge zero page on read page fault to
152 anonymous mapping. It's possible to disable huge zero page by writing 0
214 swap when collapsing a group of pages into a transparent huge page::
242 ``huge=``. It can have following values:
245 Attempt to allocate huge pages every time we need a new page;
248 Do not allocate huge pages;
251 Only allocate huge page if it will be fully within i_size.
[all …]
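The admin-guide transhuge.rst lines above mention the tmpfs ``huge=`` mount option. As a hedged sketch of how such a mount could be requested programmatically (the /mnt/thp mount point is an assumption for the example, the caller needs CAP_SYS_ADMIN, and "huge=always" mirrors the policy value quoted at line 245):

    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
            /* Mount a tmpfs instance that attempts huge page allocation
             * for every new page; equivalent to
             * "mount -t tmpfs -o huge=always tmpfs /mnt/thp". */
            if (mount("tmpfs", "/mnt/thp", "tmpfs", 0, "huge=always") != 0) {
                    perror("mount");
                    return 1;
            }
            return 0;
    }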
concepts.rst
81 `huge`. Usage of huge pages significantly reduces pressure on TLB,
85 memory with the huge pages. The first one is `HugeTLB filesystem`, or
88 the memory and mapped using huge pages. The hugetlbfs is described at
91 Another, more recent, mechanism that enables use of the huge pages is
94 the system memory should and can be mapped by the huge pages, THP
204 buffer for DMA, or when THP allocates a huge page. Memory `compaction`
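concepts.rst contrasts the two mechanisms listed above: hugetlbfs, with its explicitly managed pool, and THP, which the kernel applies to eligible anonymous memory on its own. A minimal, hedged sketch of nudging a mapping toward THP with madvise() (the 4 MB size is arbitrary, and the kernel may ignore the hint depending on the policy in /sys/kernel/mm/transparent_hugepage/enabled):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    #define LEN (4UL * 1024 * 1024)   /* arbitrary example size */

    int main(void)
    {
            void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) {
                    perror("mmap");
                    return EXIT_FAILURE;
            }

            /* Mark the range as a good candidate for transparent huge pages. */
            if (madvise(p, LEN, MADV_HUGEPAGE) != 0)
                    perror("madvise");

            munmap(p, LEN);
            return 0;
    }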
/kernel/linux/linux-5.10/tools/testing/selftests/vm/
charge_reserved_hugetlb.sh
45 if [[ -e /mnt/huge ]]; then
46 rm -rf /mnt/huge/*
47 umount /mnt/huge || echo error
48 rmdir /mnt/huge
253 if [[ -e /mnt/huge ]]; then
254 rm -rf /mnt/huge/*
255 umount /mnt/huge
256 rmdir /mnt/huge
283 mkdir -p /mnt/huge
284 mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
[all …]
run_vmtests
8 mnt=./huge
/kernel/linux/linux-5.10/Documentation/vm/
hugetlbfs_reserv.rst
11 preallocated for application use. These huge pages are instantiated in a
12 task's address space at page fault time if the VMA indicates huge pages are
13 to be used. If no huge page exists at page fault time, the task is sent
14 a SIGBUS and often dies an unhappy death. Shortly after huge page support
16 of huge pages at mmap() time. The idea is that if there were not enough
17 huge pages to cover the mapping, the mmap() would fail. This was first
19 were enough free huge pages to cover the mapping. Like most things in the
21 'reserve' huge pages at mmap() time to ensure that huge pages would be
23 describe how huge page reserve processing is done in the v4.10 kernel.
36 This is a global (per-hstate) count of reserved huge pages. Reserved
[all …]
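The reservation counts that hugetlbfs_reserv.rst describes are exported to userspace through /proc/meminfo (HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp), as the hugetlbpage.rst entry earlier in these results explains. A small illustrative program that only dumps those counters:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char line[256];
            FILE *f = fopen("/proc/meminfo", "r");

            if (!f) {
                    perror("fopen");
                    return 1;
            }
            /* Print the HugePages_* lines, including HugePages_Rsvd. */
            while (fgets(line, sizeof(line), f)) {
                    if (strncmp(line, "HugePages_", 10) == 0)
                            fputs(line, stdout);
            }
            fclose(f);
            return 0;
    }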
transhuge.rst
15 knowledge fall back to breaking huge pmd mapping into table of ptes and,
43 is complete, so they won't ever notice the fact the page is huge. But
64 Code walking pagetables but unaware about huge pmds can simply call
99 To make pagetable walks huge pmd aware, all you need to do is to call
101 mmap_lock in read (or write) mode to be sure a huge pmd cannot be
107 page table lock will prevent the huge pmd being converted into a
111 before. Otherwise, you can proceed to process the huge pmd and the
114 Refcounts and transparent huge pages
129 (stored in first tail page). For file huge pages, we also increment
156 requests to split pinned huge pages: it expects page count to be equal to
arch_pgtable_helpers.rst
139 | pmd_set_huge | Creates a PMD huge mapping |
141 | pmd_clear_huge | Clears a PMD huge mapping |
195 | pud_set_huge | Creates a PUD huge mapping |
197 | pud_clear_huge | Clears a PUD huge mapping |
/kernel/linux/linux-5.10/arch/powerpc/include/asm/nohash/32/
pgtable.h
234 static int number_of_cells_per_pte(pmd_t *pmd, pte_basic_t val, int huge) in number_of_cells_per_pte() argument
236 if (!huge) in number_of_cells_per_pte()
247 unsigned long clr, unsigned long set, int huge) in pte_update() argument
255 num = number_of_cells_per_pte(pmd, new, huge); in pte_update()
276 unsigned long clr, unsigned long set, int huge) in pte_update() argument
328 int huge = psize > mmu_virtual_psize ? 1 : 0; in __ptep_set_access_flags() local
330 pte_update(vma->vm_mm, address, ptep, clr, set, huge); in __ptep_set_access_flags()
/kernel/linux/linux-5.10/drivers/gpu/drm/ttm/
ttm_page_alloc.c
221 static struct ttm_page_pool *ttm_get_pool(int flags, bool huge, in ttm_get_pool() argument
235 if (huge) in ttm_get_pool()
239 } else if (huge) { in ttm_get_pool()
713 struct ttm_page_pool *huge = ttm_get_pool(flags, true, cstate); in ttm_put_pages() local
759 if (huge) { in ttm_put_pages()
762 spin_lock_irqsave(&huge->lock, irq_flags); in ttm_put_pages()
777 list_add_tail(&pages[i]->lru, &huge->list); in ttm_put_pages()
781 huge->npages++; in ttm_put_pages()
787 if (huge->npages > max_size) in ttm_put_pages()
788 n2free = huge->npages - max_size; in ttm_put_pages()
[all …]
/kernel/linux/linux-5.10/arch/powerpc/include/asm/book3s/64/
hash.h
147 pte_t *ptep, unsigned long pte, int huge);
154 int huge) in hash__pte_update() argument
172 if (!huge) in hash__pte_update()
177 hpte_need_flush(mm, addr, ptep, old, huge); in hash__pte_update()
radix.h
170 int huge) in radix__pte_update() argument
175 if (!huge) in radix__pte_update()
/kernel/linux/linux-5.10/Documentation/core-api/
pin_user_pages.rst
58 For huge pages (and in fact, any compound page of more than 2 pages), the
65 huge pages, because each tail page adds a refcount to the head page. And in
67 page overflows were seen in some huge page stress tests.
69 This also means that huge pages and compound pages (of order > 1) do not suffer
241 acquired since the system was powered on. For huge pages, the head page is
242 pinned once for each page (head page and each tail page) within the huge page.
243 This follows the same sort of behavior that get_user_pages() uses for huge
244 pages: the head page is refcounted once for each tail or head page in the huge
245 page, when get_user_pages() is applied to a huge page.
249 PAGE_SIZE granularity, even if the original pin was applied to a huge page.
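pin_user_pages.rst, quoted above, explains that FOLL_PIN pins on a huge page are all accounted against the compound head page. The fragment below is only a hedged, driver-style sketch of the pin/unpin pairing that document assumes; example_pin_user_buffer, buf and nr are hypothetical names, and real code would sit inside a proper driver with its own locking and error handling:

    #include <linux/mm.h>
    #include <linux/slab.h>

    /* Illustrative only: pin nr user pages starting at buf, then release
     * them.  Each page pinned here, including every base page of a huge
     * page, contributes its pin count to the compound head page. */
    static int example_pin_user_buffer(unsigned long buf, int nr)
    {
            struct page **pages;
            int pinned;

            pages = kcalloc(nr, sizeof(*pages), GFP_KERNEL);
            if (!pages)
                    return -ENOMEM;

            pinned = pin_user_pages_fast(buf, nr, FOLL_WRITE | FOLL_LONGTERM, pages);
            if (pinned < 0) {
                    kfree(pages);
                    return pinned;
            }

            /* ... hand the pinned pages to hardware here ... */

            unpin_user_pages(pages, pinned);
            kfree(pages);
            return 0;
    }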
/kernel/linux/linux-5.10/drivers/misc/lkdtm/
bugs.c
180 volatile unsigned int huge = INT_MAX - 2; variable
187 value = huge; in lkdtm_OVERFLOW_SIGNED()
202 value = huge; in lkdtm_OVERFLOW_UNSIGNED()
/kernel/linux/linux-5.10/Documentation/admin-guide/hw-vuln/
multihit.rst
81 * - KVM: Mitigation: Split huge pages
111 In order to mitigate the vulnerability, KVM initially marks all huge pages
125 The KVM hypervisor mitigation mechanism for marking huge pages as
134 non-executable huge pages in Linux kernel KVM module. All huge
/kernel/linux/linux-5.10/arch/alpha/lib/
ev6-clear_user.S
86 subq $1, 16, $4 # .. .. .. E : If < 16, we can not use the huge loop
87 and $16, 0x3f, $2 # .. .. E .. : Forward work for huge loop
88 subq $2, 0x40, $3 # .. E .. .. : bias counter (huge loop)
/kernel/linux/linux-5.10/mm/
shmem.c
118 int huge; member
499 static const char *shmem_format_huge(int huge) in shmem_format_huge() argument
501 switch (huge) { in shmem_format_huge()
663 (shmem_huge == SHMEM_HUGE_FORCE || sbinfo->huge) && in is_huge_enabled()
1578 pgoff_t index, bool huge) in shmem_alloc_and_acct_page() argument
1586 huge = false; in shmem_alloc_and_acct_page()
1587 nr = huge ? HPAGE_PMD_NR : 1; in shmem_alloc_and_acct_page()
1592 if (huge) in shmem_alloc_and_acct_page()
1872 switch (sbinfo->huge) { in shmem_getpage_gfp()
2183 if (SHMEM_SB(sb)->huge == SHMEM_HUGE_NEVER) in shmem_get_unmapped_area()
[all …]
memory-failure.c
1813 bool huge = PageHuge(page); in __soft_offline_page() local
1858 bool release = !huge; in __soft_offline_page()
1860 if (!page_handle_poison(page, huge, release)) in __soft_offline_page()
1867 pfn, msg_page[huge], ret, page->flags, &page->flags); in __soft_offline_page()
1873 pfn, msg_page[huge], page_count(page), page->flags, &page->flags); in __soft_offline_page()
/kernel/linux/linux-5.10/arch/powerpc/mm/book3s64/
hash_tlb.c
41 pte_t *ptep, unsigned long pte, int huge) in hpte_need_flush() argument
61 if (huge) { in hpte_need_flush()
/kernel/linux/linux-5.10/Documentation/features/vm/huge-vmap/
arch-support.txt
2 # Feature name: huge-vmap
/kernel/linux/linux-5.10/arch/parisc/mm/
init.c
407 bool huge = false; in map_pages() local
417 huge = true; in map_pages()
422 huge = true; in map_pages()
428 if (huge) in map_pages()
/kernel/linux/linux-5.10/arch/powerpc/include/asm/nohash/64/
pgtable.h
193 int huge) in pte_update() argument
199 if (!huge) in pte_update()
/kernel/linux/linux-5.10/include/linux/
shmem_fs.h
36 unsigned char huge; /* Whether to try for hugepages */ member
/kernel/linux/linux-5.10/Documentation/filesystems/ext4/
bigalloc.rst
9 exceeds the page size. However, for a filesystem of mostly huge files,
/kernel/linux/linux-5.10/Documentation/x86/x86_64/
mm.rst
35 …0000800000000000 | +128 TB | ffff7fffffffffff | ~16M TB | ... huge, almost 64 bits wide hole of…
94 …0100000000000000 | +64 PB | feffffffffffffff | ~16K PB | ... huge, still almost 64 bits wide h…
