jemalloc ChangeLog entries matching "huge":
- Mark partially purged arena chunks as non-huge-page. This improves
  interaction with Linux's transparent huge page functionality. (@jasone)
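
The entry above does not show the mechanism. A minimal sketch of how an
allocator can opt a partially purged chunk out of transparent huge pages on
Linux, via madvise(2) with MADV_NOHUGEPAGE; the function and parameter names
are illustrative, not jemalloc's:

    #define _DEFAULT_SOURCE
    #include <sys/mman.h>
    #include <stddef.h>

    /* Illustrative sketch: exclude a partially purged chunk from THP so the
     * kernel does not re-inflate its purged 4 KiB pages into a huge page. */
    static int
    chunk_mark_nohugepage(void *chunk_addr, size_t chunk_size)
    {
    #ifdef MADV_NOHUGEPAGE
        return madvise(chunk_addr, chunk_size, MADV_NOHUGEPAGE);
    #else
        (void)chunk_addr; (void)chunk_size;
        return 0; /* Kernel lacks THP madvise flags; nothing to do. */
    #endif
    }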
- Fix opt_zero-triggered in-place huge reallocation zeroing. (@jasone)
- Attempt mmap-based in-place huge reallocation. This can dramatically speed
  up incremental huge reallocation. (@jasone)
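
A minimal sketch of the general technique, with illustrative names: request
new pages at the address immediately past the existing mapping, and keep them
only if the kernel honored the non-binding placement hint.

    #define _DEFAULT_SOURCE
    #include <sys/mman.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative sketch: try to extend an anonymous mapping in place by
     * mapping new pages directly after it; without MAP_FIXED the hint is
     * non-binding, so verify where the kernel actually placed the pages. */
    static bool
    try_grow_in_place(void *base, size_t oldsize, size_t newsize)
    {
        void *hint = (void *)((uintptr_t)base + oldsize);
        void *p = mmap(hint, newsize - oldsize, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return false;
        if (p != hint) {
            munmap(p, newsize - oldsize); /* Placed elsewhere; give up. */
            return false;
        }
        return true; /* [base, base+newsize) is now one contiguous region. */
    }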
- Fix xallocx(..., MALLOCX_ZERO) to zero trailing bytes of huge allocations
  [...]
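
For context: xallocx() resizes an allocation in place (it never moves the
pointer) and returns the resulting real size, and MALLOCX_ZERO requires any
bytes beyond the old size to come back zeroed, which is the behavior this fix
restores for huge objects. A usage sketch; the sizes are arbitrary and merely
intended to land in the huge size classes:

    #include <jemalloc/jemalloc.h>
    #include <assert.h>
    #include <stddef.h>

    int
    main(void)
    {
        size_t oldsz = 4 * 1024 * 1024;
        char *p = mallocx(oldsz, MALLOCX_ZERO);
        assert(p != NULL);
        /* Attempt in-place growth; xallocx() never moves p. Trailing bytes
         * of any extension must be zero because of MALLOCX_ZERO. */
        size_t newsz = xallocx(p, 8 * 1024 * 1024, 0, MALLOCX_ZERO);
        if (newsz > oldsz)
            assert(p[newsz - 1] == 0);
        dallocx(p, 0);
        return 0;
    }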
- Fix chunk purge hook calls for in-place huge shrinking reallocation to
  [...]
- Refactor huge allocation to be managed by arenas, so that arenas now
  function as general purpose independent allocators. [...]
  + The "stats.arenas.<i>.huge.allocated", "stats.arenas.<i>.huge.nmalloc",
    "stats.arenas.<i>.huge.ndalloc", and "stats.arenas.<i>.huge.nrequests"
    mallctls provide high level per arena huge allocation statistics.
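
These counters are read through mallctl(). A sketch that assumes arena index
0 and refreshes the statistics snapshot via the "epoch" mallctl first:

    #include <jemalloc/jemalloc.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint64_t epoch = 1;
        size_t sz = sizeof(epoch);
        /* Statistics are cached; bump the epoch to refresh the snapshot. */
        mallctl("epoch", &epoch, &sz, &epoch, sz);

        size_t huge_allocated;
        sz = sizeof(huge_allocated);
        /* Arena 0 is assumed; substitute any arena index for <i>. */
        if (mallctl("stats.arenas.0.huge.allocated", &huge_allocated, &sz,
            NULL, 0) == 0)
            printf("arena 0 huge allocated: %zu bytes\n", huge_allocated);
        return 0;
    }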
- [...] reduces the cost of repeated huge allocation/deallocation, because it
  [...]
- Implement in-place huge allocation growing and shrinking.
- [...] levels. This resolves what was a concurrency bottleneck for per arena
  huge allocation, because a global data structure is critical for determining
  which arenas own which huge allocations.
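
The entry only names the structure. A toy sketch of the idea: a radix tree
keyed on chunk-aligned address bits that maps each chunk to its owning arena.
The constants and two-level layout are illustrative, and the real rtree is
lock-free, which this sketch is not:

    #include <stdint.h>
    #include <stddef.h>

    #define CHUNK_SHIFT 21                           /* assume 2 MiB chunks */
    #define L1_BITS     11
    #define L2_BITS     (48 - CHUNK_SHIFT - L1_BITS) /* 48-bit VA assumed */

    typedef struct arena_s arena_t;
    typedef struct { arena_t *slots[1u << L2_BITS]; } rtree_leaf_t;
    typedef struct { rtree_leaf_t *subtrees[1u << L1_BITS]; } rtree_t;

    /* Two table lookups resolve a chunk address to its owning arena. */
    static arena_t *
    rtree_arena_lookup(const rtree_t *rt, const void *chunk)
    {
        uintptr_t key = (uintptr_t)chunk >> CHUNK_SHIFT;
        rtree_leaf_t *leaf = rt->subtrees[key >> L2_BITS];
        if (leaf == NULL)
            return NULL;  /* No huge allocation registered here. */
        return leaf->slots[key & ((1u << L2_BITS) - 1)];
    }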
- Remove the "stats.huge.allocated", "stats.huge.nmalloc", and
  "stats.huge.ndalloc" mallctls.
- Use dss allocation precedence for huge allocations as well as small/large
  allocations.
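
dss (sbrk(2)) precedence is controlled per arena through the "arena.<i>.dss"
mallctl, which accepts "disabled", "primary", or "secondary". A sketch
assuming arena 0:

    #include <jemalloc/jemalloc.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* Prefer dss-backed memory over mmap for arena 0. */
        const char *dss = "primary";
        if (mallctl("arena.0.dss", NULL, NULL, &dss, sizeof(dss)) != 0)
            fprintf(stderr, "dss precedence not supported here\n");
        return 0;
    }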
- Fix junk filling for mremap(2)-based huge reallocation. This is only
  relevant if jemalloc is configured with the --enable-mremap option.
- Fix huge deallocation to junk fill when munmap is disabled.
- [...] typically triggered by multiple threads concurrently deallocating
  huge [...]
- Fix aligned huge reallocation (affected allocm()).
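
allocm() belonged to the experimental *allocm() API that was later removed in
4.0.0. A sketch of an aligned huge reallocation through that API, assuming a
3.x jemalloc built with the experimental API; the signatures here are given
from memory of that interface:

    #include <jemalloc/jemalloc.h>
    #include <assert.h>
    #include <stddef.h>

    int
    main(void)
    {
        void *p;
        size_t rsize;
        /* 2 MiB-aligned huge allocation via the old experimental API. */
        int r = allocm(&p, &rsize, 4 * 1024 * 1024, ALLOCM_ALIGN(1 << 21));
        assert(r == ALLOCM_SUCCESS);
        /* Aligned huge reallocation, the path the fix above addresses. */
        r = rallocm(&p, &rsize, 8 * 1024 * 1024, 0, ALLOCM_ALIGN(1 << 21));
        assert(r == ALLOCM_SUCCESS);
        dallocm(p, 0);
        return 0;
    }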
- Use Linux's mremap(2) for huge object reallocation when possible.
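
A minimal sketch of the technique, assuming Linux: mremap(2) lets the kernel
move page mappings instead of copying bytes, so reallocating a multi-megabyte
object costs page-table updates rather than a memcpy.

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <stddef.h>

    /* Illustrative sketch: resize a page-backed huge object via the kernel;
     * MREMAP_MAYMOVE permits relocation when in-place growth is impossible. */
    static void *
    huge_remap(void *base, size_t oldsize, size_t newsize)
    {
        void *p = mremap(base, oldsize, newsize, MREMAP_MAYMOVE);
        return (p == MAP_FAILED) ? NULL : p;
    }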