
Lines in jemalloc's ChangeLog matching "huge":

20 - Implement transparent huge page support for internal metadata. (@interwq)
21 - Add opt.thp to allow enabling / disabling transparent huge pages for all
157 configured huge page size (--with-lg-hugepage). (@jasone)
246 interact better with huge pages (not yet explicitly supported). (@jasone)
247 - Fold large and huge size classes together; only small and large size classes
335 + stats.arenas.<i>.huge.{allocated,nmalloc,ndalloc,nrequests}
355 transparent huge page integration. (@jasone)
371 - Fix huge-aligned allocation. This regression was first released in 4.4.0.
373 - When transparent huge page integration is enabled, detect what state pages
375 arena chunks to non-huge during purging if that is not their initial state.
396 - Mark partially purged arena chunks as non-huge-page. This improves
397 interaction with Linux's transparent huge page functionality. (@jasone)
469 - Fix opt_zero-triggered in-place huge reallocation zeroing. (@jasone)
562 - Attempt mmap-based in-place huge reallocation. This can dramatically speed
563 up incremental huge reallocation. (@jasone)
608 - Fix xallocx(..., MALLOCX_ZERO) to zero trailing bytes of huge allocations
653 - Fix chunk purge hook calls for in-place huge shrinking reallocation to
718 - Refactor huge allocation to be managed by arenas, so that arenas now
722 + The "stats.arenas.<i>.huge.allocated", "stats.arenas.<i>.huge.nmalloc",
723 "stats.arenas.<i>.huge.ndalloc", and "stats.arenas.<i>.huge.nrequests"
724 mallctls provide high level per arena huge allocation statistics.
767 reduces the cost of repeated huge allocation/deallocation, because it
782 - Implement in-place huge allocation growing and shrinking.
785 levels. This resolves what was a concurrency bottleneck for per arena huge
787 which arenas own which huge allocations.
815 - Remove the "stats.huge.allocated", "stats.huge.nmalloc", and
816 "stats.huge.ndalloc" mallctls.
856 - Use dss allocation precedence for huge allocations as well as small/large
872 - Fix junk filling for mremap(2)-based huge reallocation. This is only
922 - Fix huge deallocation to junk fill when munmap is disabled.
960 typically triggered by multiple threads concurrently deallocating huge
1231 - Fix aligned huge reallocation (affected allocm()).
1242 - Use Linux's mremap(2) for huge object reallocation when possible.
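The opt.thp option referenced at ChangeLog line 21 is set, like other jemalloc options, through the MALLOC_CONF environment variable (or a compiled-in malloc_conf string). A minimal sketch, assuming a jemalloc-5-era build where opt.thp accepts "always", "never", or "default"; `./myprog` is a placeholder for any jemalloc-linked binary:

```shell
# Disable transparent huge pages for all jemalloc mappings
# (overrides the system-wide THP policy for this process only).
MALLOC_CONF="thp:never" ./myprog

# To confirm the option was parsed, also enable the exit-time
# statistics dump, which reports resolved option values on stderr.
MALLOC_CONF="thp:never,stats_print:true" ./myprog
```

Note that several entries above predate or postdate this interface: the `stats.huge.*` mallctls were removed (line 815), their per-arena `stats.arenas.<i>.huge.*` replacements were added (line 722), and the huge size class was later folded into large (line 247), so the available names depend on the jemalloc version in use.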