= Transparent Hugepage Support =

== Objective ==

Performance critical computing applications dealing with large memory
working sets are already running on top of libhugetlbfs and in turn
hugetlbfs. Transparent Hugepage Support is an alternative means of
using huge pages for the backing of virtual memory, one that supports
the automatic promotion and demotion of page sizes and that avoids
the shortcomings of hugetlbfs.

Currently it only works for anonymous memory mappings but in the
future it can expand over the pagecache layer starting with tmpfs.

Applications run faster because of two factors. The first factor is
almost completely irrelevant and not of significant interest because
it comes with the downside of requiring larger clear-page and
copy-page operations in page faults, which is a potentially negative
effect. The first factor consists in taking a single page fault for
each 2M virtual region touched by userland (so reducing the
enter/exit kernel frequency by a factor of 512). This only matters
the first time the memory is accessed for the lifetime of a memory
mapping. The second, long lasting and much more important factor
affects all subsequent accesses to the memory for the whole runtime
of the application. The second factor consists of two components:
1) the TLB miss runs faster (especially with virtualization using
nested pagetables but almost always also on bare metal without
virtualization) and 2) a single TLB entry maps a much larger amount
of virtual memory, in turn reducing the number of TLB misses. With
virtualization and nested pagetables the larger TLB entries can be
used only if both KVM and the Linux guest are using hugepages, but a
significant speedup already happens if only one of the two is using
hugepages, just because the TLB miss runs faster.

== Design ==

- "graceful fallback": mm components which don't have transparent
  hugepage knowledge fall back to breaking a transparent hugepage and
  working on the regular pages and their respective regular pmd/pte
  mappings

- if a hugepage allocation fails because of memory fragmentation,
  regular pages should be gracefully allocated instead and mixed in
  the same vma without any failure or significant delay and without
  userland noticing

- if some task quits and more hugepages become available (either
  immediately in the buddy or through the VM), guest physical memory
  backed by regular pages should be relocated to hugepages
  automatically (with khugepaged)

- it doesn't require memory reservation and in turn it uses hugepages
  whenever possible (the only possible reservation here is kernelcore=
  to prevent unmovable pages from fragmenting all the memory, but such
  a tweak is not specific to transparent hugepage support and it's a
  generic feature that applies to all dynamic high order allocations
  in the kernel)

- this initial support only offers the feature in the anonymous memory
  regions but it'd be ideal to move it to tmpfs and the pagecache
  later

Transparent Hugepage Support maximizes the usefulness of free memory
compared to the reservation approach of hugetlbfs by allowing all
unused memory to be used as cache or other movable (or even unmovable)
entities. It doesn't require reservation to prevent hugepage
allocation failures from being noticeable from userland. It allows
paging and all other advanced VM features to be available on the
hugepages. It requires no modifications for applications to take
advantage of it.

Applications however can be further optimized to take advantage of
this feature, like for example they've been optimized before to avoid
a flood of mmap system calls for every malloc(4k). Optimizing userland
is by far not mandatory and khugepaged can already take care of long
lived page allocations even for hugepage unaware applications that
deal with large amounts of memory.

In certain cases when hugepages are enabled system wide, applications
may end up allocating more memory resources. An application may mmap a
large region but only touch 1 byte of it; in that case a 2M page might
be allocated instead of a 4k page for no good reason. This is why it's
possible to disable hugepages system-wide and to only have them inside
MADV_HUGEPAGE madvise regions.

Embedded systems should enable hugepages only inside madvise regions
to eliminate any risk of wasting any precious byte of memory and to
only run faster.

Applications that get a lot of benefit from hugepages and that don't
risk losing memory by using hugepages should use
madvise(MADV_HUGEPAGE) on their critical mmapped regions.
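A minimal userland sketch of the madvise usage described above (only
illustrative: the 256M size is an arbitrary choice and the error
handling is reduced to the bare minimum):

#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>

#define LEN (256UL * 1024 * 1024)

int main(void)
{
	/* plain anonymous mapping, no hugetlbfs and no reservation */
	void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* ask for hugepage backing of this critical region: this is
	 * enough even when transparent_hugepage/enabled is set to
	 * "madvise" instead of "always" */
	if (madvise(p, LEN, MADV_HUGEPAGE))
		perror("madvise");
	/* ... use the memory as usual ... */
	return 0;
}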
== sysfs ==

Transparent Hugepage Support can be entirely disabled (mostly for
debugging purposes), enabled only inside MADV_HUGEPAGE regions (to
avoid the risk of consuming more memory resources) or enabled system
wide. This can be achieved with one of:

echo always >/sys/kernel/mm/transparent_hugepage/enabled
echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
echo never >/sys/kernel/mm/transparent_hugepage/enabled

It's also possible to limit the defrag efforts in the VM to generate
hugepages, in case they're not immediately free, to madvise regions
only, or to never try to defrag memory and simply fall back to regular
pages unless hugepages are immediately available. Clearly if we spend
CPU time to defrag memory, we would expect to gain even more by the
fact we use hugepages later instead of regular pages. This isn't
always guaranteed, but it may be more likely in case the allocation is
for a MADV_HUGEPAGE region.

echo always >/sys/kernel/mm/transparent_hugepage/defrag
echo madvise >/sys/kernel/mm/transparent_hugepage/defrag
echo never >/sys/kernel/mm/transparent_hugepage/defrag

khugepaged will be automatically started when
transparent_hugepage/enabled is set to "always" or "madvise", and it'll
be automatically shut down if it's set to "never".

khugepaged usually runs at low frequency, so while one may not want to
invoke defrag algorithms synchronously during the page faults, it
should be worth invoking defrag at least in khugepaged. However it's
also possible to disable defrag in khugepaged by writing 0, or to
enable defrag in khugepaged by writing 1:

echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo 1 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag

You can also control how many pages khugepaged should scan at each
pass:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan

and how many milliseconds to wait in khugepaged between each pass (you
can set this to 0 to run khugepaged at 100% utilization of one core):

/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs

and how many milliseconds to wait in khugepaged after a hugepage
allocation failure, to throttle the next allocation attempt:

/sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs

The khugepaged progress can be seen in the number of pages collapsed:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed

and in the number of full scans performed:

/sys/kernel/mm/transparent_hugepage/khugepaged/full_scans
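For example, khugepaged can be made to scan more aggressively with
something like the following (the values are purely illustrative, not
tuning recommendations):

echo 4096 >/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan
echo 100 >/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs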
== Boot parameter ==

You can change the sysfs boot time defaults of Transparent Hugepage
Support by passing the parameter "transparent_hugepage=always" or
"transparent_hugepage=madvise" or "transparent_hugepage=never"
(without "") to the kernel command line.

== Need of application restart ==

The transparent_hugepage/enabled values only affect future
behavior. So to make them effective you need to restart any
application that could have been using hugepages. This also applies to
the regions registered in khugepaged.

== get_user_pages and follow_page ==

get_user_pages and follow_page, if run on a hugepage, will return the
head or tail pages as usual (exactly as they would do on
hugetlbfs). Most gup users will only care about the actual physical
address of the page and its temporary pinning to release after the I/O
is complete, so they won't ever notice the fact the page is huge. But
if any driver is going to mangle the page structure of a tail
page (like checking page->mapping or other bits that are relevant for
the head page and not the tail page), it should be updated to check
the head page instead (while serializing properly against
split_huge_page() to prevent the head and tail pages from disappearing
from under it; see the futex code for an example of that, hugetlbfs
also needed special handling in the futex code for similar reasons).

NOTE: these aren't new constraints to the GUP API, and they match the
same constraints that apply to hugetlbfs too, so any driver capable
of handling GUP on hugetlbfs will also work fine on transparent
hugepage backed mappings.

In case you can't handle compound pages if they're returned by
follow_page, the FOLL_SPLIT bit can be specified as a parameter to
follow_page, so that it will split the hugepages before returning
them. Migration for example passes FOLL_SPLIT as a parameter to
follow_page because it's not hugepage aware and in fact it can't work
at all on hugetlbfs (but it instead works fine on transparent
hugepages thanks to FOLL_SPLIT). Migration simply can't deal with
hugepages being returned (as it's not only checking the pfn of the
page and pinning it during the copy, but it pretends to migrate the
memory in regular page sizes and with regular pte/pmd mappings).
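A minimal sketch of the FOLL_SPLIT usage just described (illustrative
only, not code from the kernel tree; get_base_page is a hypothetical
helper and the caller is assumed to hold the mmap_sem):

#include <linux/mm.h>
#include <linux/err.h>

/*
 * Pin the regular 4k page backing addr, splitting any transparent
 * hugepage found there, for callers that can't handle compound pages.
 */
static struct page *get_base_page(struct vm_area_struct *vma,
				  unsigned long addr)
{
	struct page *page;

	/* FOLL_GET takes a pin on the page; FOLL_SPLIT makes sure any
	 * transparent hugepage is split before the page is returned */
	page = follow_page(vma, addr, FOLL_GET | FOLL_SPLIT);
	if (IS_ERR_OR_NULL(page))
		return NULL;
	return page;	/* the caller does put_page() when done */
}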
== Optimizing the applications ==

To be guaranteed that the kernel will map a 2M page immediately in any
memory region, the mmapped region has to be hugepage naturally
aligned (i.e. 2M aligned). posix_memalign() can provide that
guarantee.

== Hugetlbfs ==

You can use hugetlbfs on a kernel that has transparent hugepage
support enabled just fine as always. No difference can be noted in
hugetlbfs other than there will be less overall fragmentation. All
usual features belonging to hugetlbfs are preserved and
unaffected. libhugetlbfs will also work fine as usual.

== Graceful fallback ==

Code walking pagetables but unaware about huge pmds can simply call
split_huge_page_pmd(mm, pmd) where the pmd is the one returned by
pmd_offset. It's trivial to make the code transparent hugepage aware
by just grepping for "pmd_offset" and adding split_huge_page_pmd where
missing after pmd_offset returns the pmd. Thanks to the graceful
fallback design, with a one liner change, you can avoid writing
hundreds if not thousands of lines of complex code to make your code
hugepage aware.

If you're not walking pagetables but you run into a physical hugepage
that you can't handle natively in your code, you can split it by
calling split_huge_page(page). This is what the Linux VM does before
it tries to swap out the hugepage for example.

Example to make mremap.c transparent hugepage aware with a one liner
change:

diff --git a/mm/mremap.c b/mm/mremap.c
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
 		return NULL;
 
 	pmd = pmd_offset(pud, addr);
+	split_huge_page_pmd(mm, pmd);
 	if (pmd_none_or_clear_bad(pmd))
 		return NULL;

== Locking in hugepage aware code ==

We want as much code as possible hugepage aware, as calling
split_huge_page() or split_huge_page_pmd() has a cost.

To make pagetable walks huge pmd aware, all you need to do is to call
pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
mmap_sem in read (or write) mode to be sure a huge pmd cannot be
created from under you by khugepaged (khugepaged collapse_huge_page
takes the mmap_sem in write mode in addition to the anon_vma lock). If
pmd_trans_huge returns false, you just fall back to the old code
paths. If instead pmd_trans_huge returns true, you have to take the
mm->page_table_lock and re-run pmd_trans_huge. Taking the
page_table_lock will prevent the huge pmd from being converted into a
regular pmd from under you (split_huge_page can run in parallel to the
pagetable walk). If the second pmd_trans_huge returns false, you
should just drop the page_table_lock and fall back to the old code as
before. Otherwise you should run pmd_trans_splitting on the pmd. In
case pmd_trans_splitting returns true, it means split_huge_page is
already in the middle of splitting the page. So if pmd_trans_splitting
returns true it's enough to drop the page_table_lock, call
wait_split_huge_page and then fall back to the old code paths. You are
guaranteed that by the time wait_split_huge_page returns, the pmd
isn't huge anymore. If pmd_trans_splitting returns false, you can
proceed to process the huge pmd and the hugepage natively. Once
finished you can drop the page_table_lock.
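The locking protocol above, condensed in a sketch (illustrative only,
not code from the kernel tree; walk_one_pmd is a hypothetical walker
and its caller is assumed to hold the mmap_sem in read or write mode):

#include <linux/mm.h>
#include <linux/huge_mm.h>

static void walk_one_pmd(struct vm_area_struct *vma, pmd_t *pmd)
{
	struct mm_struct *mm = vma->vm_mm;

	if (pmd_trans_huge(*pmd)) {
		spin_lock(&mm->page_table_lock);
		if (likely(pmd_trans_huge(*pmd))) {
			if (unlikely(pmd_trans_splitting(*pmd))) {
				/* split_huge_page is splitting it:
				 * wait, then use the regular paths */
				spin_unlock(&mm->page_table_lock);
				wait_split_huge_page(vma->anon_vma, pmd);
			} else {
				/* stable huge pmd: process the
				 * hugepage natively here ... */
				spin_unlock(&mm->page_table_lock);
				return;
			}
		} else {
			/* the pmd was split before we could take
			 * the lock: use the regular paths */
			spin_unlock(&mm->page_table_lock);
		}
	}
	/* ... regular pte code paths ... */
}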
== compound_lock, get_user_pages and put_page ==

split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the
page structures. It can do that easily for refcounts taken by huge pmd
mappings. But the GUP API as created by hugetlbfs (that returns head
and tail pages if running get_user_pages on an address backed by any
hugepage) requires the refcount to be accounted on the tail pages and
not only on the head pages, if we want to be able to run
split_huge_page while there are gup pins established on any tail
page. Failure to be able to run split_huge_page if there's any gup pin
on any tail page would mean having to split all hugepages upfront in
get_user_pages, which is unacceptable as too many gup users are
performance critical and they must work natively on hugepages like
they work natively on hugetlbfs already (hugetlbfs is simpler because
hugetlbfs pages cannot be split, so there's no requirement to account
the pins on the tail pages for hugetlbfs). If we didn't account the
gup refcounts on the tail pages during gup, we wouldn't know while we
run split_huge_page which tail page is pinned by gup and which is
not. But we still have to add the gup pin to the head page too, to
know when we can free the compound page in case it's never split
during its lifetime. That requires changing not just get_page, but
put_page as well, so that when put_page runs on a tail page (and only
on a tail page) it will find its respective head page, and then it
will decrease the head page refcount in addition to the tail page
refcount. To obtain a head page reliably and to decrease its refcount
without race conditions, put_page has to serialize against
__split_huge_page_refcount using a special per-page lock called
compound_lock.
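A heavily simplified conceptual sketch of the put_page side of this
scheme (this is not the kernel's actual implementation; the real code
has to handle further races omitted here, e.g. the head page being
freed concurrently):

#include <linux/mm.h>

static void put_tail_page_sketch(struct page *page)
{
	struct page *page_head = page->first_page;
	unsigned long flags;

	/* the compound_lock serializes us against
	 * __split_huge_page_refcount: while we hold it, page cannot
	 * stop being a tail of page_head */
	flags = compound_lock_irqsave(page_head);
	if (likely(PageTail(page))) {
		/* release the gup pin on the tail page itself ... */
		atomic_dec(&page->_count);
		compound_unlock_irqrestore(page_head, flags);
		/* ... and the matching gup pin on the head page */
		put_page(page_head);
	} else {
		/* split_huge_page ran first: it's a regular page now */
		compound_unlock_irqrestore(page_head, flags);
		put_page(page);
	}
}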