Lines Matching refs:huge
53-54 - Attempt mmap-based in-place huge reallocation. This can dramatically speed up incremental huge reallocation. (@jasone)
99 - Fix xallocx(..., MALLOCX_ZERO) to zero trailing bytes of huge allocations
144 - Fix chunk purge hook calls for in-place huge shrinking reallocation to
209 - Refactor huge allocation to be managed by arenas, so that arenas now
213-215 + The "stats.arenas.<i>.huge.allocated", "stats.arenas.<i>.huge.nmalloc", "stats.arenas.<i>.huge.ndalloc", and "stats.arenas.<i>.huge.nrequests" mallctls provide high-level per-arena huge allocation statistics.
258 reduces the cost of repeated huge allocation/deallocation, because it
273 - Implement in-place huge allocation growing and shrinking.
276 levels. This resolves what was a concurrency bottleneck for per arena huge
278 which arenas own which huge allocations.
306-307 - Remove the "stats.huge.allocated", "stats.huge.nmalloc", and "stats.huge.ndalloc" mallctls.
347 - Use dss allocation precedence for huge allocations as well as small/large
363 - Fix junk filling for mremap(2)-based huge reallocation. This is only
413 - Fix huge deallocation to junk fill when munmap is disabled.
451 typically triggered by multiple threads concurrently deallocating huge
722 - Fix aligned huge reallocation (affected allocm()).
733 - Use Linux's mremap(2) for huge object reallocation when possible.