1 /*
2 This is a version (aka dlmalloc) of malloc/free/realloc written by
3 Doug Lea and released to the public domain, as explained at
4 http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
5 comments, complaints, performance data, etc to dl@cs.oswego.edu
6
7 * Version 2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea
8 Note: There may be an updated version of this malloc obtainable at
9 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
10 Check before installing!
11
12 * Quickstart
13
14 This library is all in one file to simplify the most common usage:
15 ftp it, compile it (-O3), and link it into another program. All of
16 the compile-time options default to reasonable values for use on
17 most platforms. You might later want to step through various
18 compile-time and dynamic tuning options.
19
20 For convenience, an include file for code using this malloc is at:
21 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.6.h
22 You don't really need this .h file unless you call functions not
23 defined in your system include files. The .h file contains only the
24 excerpts from this file needed for using this malloc on ANSI C/C++
25 systems, so long as you haven't changed compile-time options about
26 naming and tuning parameters. If you do, then you can create your
27 own malloc.h that does include all settings by cutting at the point
indicated below. Note that by default you may already be using a C
library containing a malloc that is based on some version of this
malloc (for example, in Linux). You might still want to use the one
31 in this file to customize settings or to avoid overheads associated
32 with library versions.
33
34 * Vital statistics:
35
36 Supported pointer/size_t representation: 4 or 8 bytes
37 size_t MUST be an unsigned type of the same width as
38 pointers. (If you are using an ancient system that declares
39 size_t as a signed type, or need it to be a different width
40 than pointers, you can use a previous release of this malloc
41 (e.g. 2.7.2) supporting these.)
42
43 Alignment: 8 bytes (minimum)
44 This suffices for nearly all current machines and C compilers.
45 However, you can define MALLOC_ALIGNMENT to be wider than this
if necessary (up to 128 bytes), at the expense of using more space.
47
48 Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
49 8 or 16 bytes (if 8byte sizes)
50 Each malloced chunk has a hidden word of overhead holding size
51 and status information, and additional cross-check word
52 if FOOTERS is defined.
53
54 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
55 8-byte ptrs: 32 bytes (including overhead)
56
57 Even a request for zero bytes (i.e., malloc(0)) returns a
58 pointer to something of the minimum allocatable size.
The maximum overhead wastage (i.e., the number of extra bytes
allocated beyond those requested in malloc) is less than or equal
to the minimum size, except for requests >= mmap_threshold that
62 are serviced via mmap(), where the worst case wastage is about
63 32 bytes plus the remainder from a system page (the minimal
64 mmap unit); typically 4096 or 8192 bytes.
65
66 Security: static-safe; optionally more or less
67 The "security" of malloc refers to the ability of malicious
68 code to accentuate the effects of errors (for example, freeing
69 space that is not currently malloc'ed or overwriting past the
70 ends of chunks) in code that calls malloc. This malloc
71 guarantees not to modify any memory locations below the base of
72 heap, i.e., static variables, even in the presence of usage
73 errors. The routines additionally detect most improper frees
74 and reallocs. All this holds as long as the static bookkeeping
75 for malloc itself is not corrupted by some other means. This
76 is only one aspect of security -- these checks do not, and
77 cannot, detect all possible programming errors.
78
79 If FOOTERS is defined nonzero, then each allocated chunk
80 carries an additional check word to verify that it was malloced
81 from its space. These check words are the same within each
82 execution of a program using malloc, but differ across
83 executions, so externally crafted fake chunks cannot be
84 freed. This improves security by rejecting frees/reallocs that
85 could corrupt heap memory, in addition to the checks preventing
86 writes to statics that are always on. This may further improve
87 security at the expense of time and space overhead. (Note that
88 FOOTERS may also be worth using with MSPACES.)
89
90 By default detected errors cause the program to abort (calling
91 "abort()"). You can override this to instead proceed past
92 errors by defining PROCEED_ON_ERROR. In this case, a bad free
93 has no effect, and a malloc that encounters a bad address
94 caused by user overwrites will ignore the bad address by
95 dropping pointers and indices to all known memory. This may
96 be appropriate for programs that should continue if at all
97 possible in the face of programming errors, although they may
98 run out of memory because dropped memory is never reclaimed.
99
100 If you don't like either of these options, you can define
101 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
else. And if you are sure that your program using malloc has
103 no errors or vulnerabilities, you can define INSECURE to 1,
104 which might (or might not) provide a small performance improvement.
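As a sketch only (the macro arguments are those used by the internal
definitions later in this file; my_log_corruption and my_log_bad_free
are hypothetical functions you would supply), an override might look like:
  #define CORRUPTION_ERROR_ACTION(m)  my_log_corruption()
  #define USAGE_ERROR_ACTION(m, p)    my_log_bad_free(p)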
105
106 It is also possible to limit the maximum total allocatable
107 space, using malloc_set_footprint_limit. This is not
108 designed as a security feature in itself (calls to set limits
109 are not screened or privileged), but may be useful as one
110 aspect of a secure implementation.
111
112 Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
113 When USE_LOCKS is defined, each public call to malloc, free,
114 etc is surrounded with a lock. By default, this uses a plain
pthread mutex, win32 critical section, or a spin-lock if
116 available for the platform and not disabled by setting
117 USE_SPIN_LOCKS=0. However, if USE_RECURSIVE_LOCKS is defined,
118 recursive versions are used instead (which are not required for
119 base functionality but may be needed in layered extensions).
120 Using a global lock is not especially fast, and can be a major
121 bottleneck. It is designed only to provide minimal protection
122 in concurrent environments, and to provide a basis for
123 extensions. If you are using malloc in a concurrent program,
124 consider instead using nedmalloc
125 (http://www.nedprod.com/programs/portable/nedmalloc/) or
126 ptmalloc (See http://www.malloc.de), which are derived from
127 versions of this malloc.
128
129 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
130 This malloc can use unix sbrk or any emulation (invoked using
131 the CALL_MORECORE macro) and/or mmap/munmap or any emulation
132 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
133 memory. On most unix systems, it tends to work best if both
134 MORECORE and MMAP are enabled. On Win32, it uses emulations
135 based on VirtualAlloc. It also uses common C library functions
136 like memset.
137
138 Compliance: I believe it is compliant with the Single Unix Specification
139 (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
140 others as well.
141
142 * Overview of algorithms
143
144 This is not the fastest, most space-conserving, most portable, or
145 most tunable malloc ever written. However it is among the fastest
146 while also being among the most space-conserving, portable and
147 tunable. Consistent balance across these factors results in a good
148 general-purpose allocator for malloc-intensive programs.
149
150 In most ways, this malloc is a best-fit allocator. Generally, it
151 chooses the best-fitting existing chunk for a request, with ties
152 broken in approximately least-recently-used order. (This strategy
153 normally maintains low fragmentation.) However, for requests less
than 256 bytes, it deviates from best-fit when there is not an
155 exactly fitting available chunk by preferring to use space adjacent
156 to that used for the previous small request, as well as by breaking
157 ties in approximately most-recently-used order. (These enhance
158 locality of series of small allocations.) And for very large requests
159 (>= 256Kb by default), it relies on system memory mapping
160 facilities, if supported. (This helps avoid carrying around and
161 possibly fragmenting memory used only for large chunks.)
162
163 All operations (except malloc_stats and mallinfo) have execution
164 times that are bounded by a constant factor of the number of bits in
165 a size_t, not counting any clearing in calloc or copying in realloc,
166 or actions surrounding MORECORE and MMAP that have times
167 proportional to the number of non-contiguous regions returned by
168 system allocation routines, which is often just 1. In real-time
169 applications, you can optionally suppress segment traversals using
170 NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
171 system allocators return non-contiguous spaces, at the typical
172 expense of carrying around more memory and increased fragmentation.
173
174 The implementation is not very modular and seriously overuses
175 macros. Perhaps someday all C compilers will do as good a job
176 inlining modular code as can now be done by brute-force expansion,
but for now, enough of them seem not to.
178
179 Some compilers issue a lot of warnings about code that is
180 dead/unreachable only on some platforms, and also about intentional
181 uses of negation on unsigned types. All known cases of each can be
182 ignored.
183
184 For a longer but out of date high-level description, see
185 http://gee.cs.oswego.edu/dl/html/malloc.html
186
187 * MSPACES
188 If MSPACES is defined, then in addition to malloc, free, etc.,
189 this file also defines mspace_malloc, mspace_free, etc. These
190 are versions of malloc routines that take an "mspace" argument
191 obtained using create_mspace, to control all internal bookkeeping.
192 If ONLY_MSPACES is defined, only these versions are compiled.
193 So if you would like to use this allocator for only some allocations,
194 and your system malloc for others, you can compile with
195 ONLY_MSPACES and then do something like...
196 static mspace mymspace = create_mspace(0,0); // for example
197 #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
198
199 (Note: If you only need one instance of an mspace, you can instead
200 use "USE_DL_PREFIX" to relabel the global malloc.)
201
202 You can similarly create thread-local allocators by storing
203 mspaces as thread-locals. For example:
204 static __thread mspace tlms = 0;
205 void* tlmalloc(size_t bytes) {
206 if (tlms == 0) tlms = create_mspace(0, 0);
207 return mspace_malloc(tlms, bytes);
208 }
209 void tlfree(void* mem) { mspace_free(tlms, mem); }
210
211 Unless FOOTERS is defined, each mspace is completely independent.
212 You cannot allocate from one and free to another (although
213 conformance is only weakly checked, so usage errors are not always
214 caught). If FOOTERS is defined, then each chunk carries around a tag
215 indicating its originating mspace, and frees are directed to their
216 originating spaces. Normally, this requires use of locks.
217
218 ------------------------- Compile-time options ---------------------------
219
220 Be careful in setting #define values for numerical constants of type
221 size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast. You can also
223 use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
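For example, it is safer to define a size_t-valued constant as
  #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
(mirroring the defaults later in this file) than as a bare literal such
as 256*1024, which may not be widened to size_t on all systems.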
224
225 WIN32 default: defined if _WIN32 defined
226 Defining WIN32 sets up defaults for MS environment and compilers.
227 Otherwise defaults are for unix. Beware that there seem to be some
228 cases where this malloc might not be a pure drop-in replacement for
Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
230 SetDIBits()) may be due to bugs in some video driver implementations
231 when pixel buffers are malloc()ed, and the region spans more than
232 one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
233 default granularity, pixel buffers may straddle virtual allocation
234 regions more often than when using the Microsoft allocator. You can
235 avoid this by using VirtualAlloc() and VirtualFree() for all pixel
236 buffers rather than using malloc(). If this is not possible,
237 recompile this malloc with a larger DEFAULT_GRANULARITY. Note:
238 in cases where MSC and gcc (cygwin) are known to differ on WIN32,
239 conditions use _MSC_VER to distinguish them.
240
241 DLMALLOC_EXPORT default: extern
242 Defines how public APIs are declared. If you want to export via a
243 Windows DLL, you might define this as
244 #define DLMALLOC_EXPORT extern __declspec(dllexport)
245 If you want a POSIX ELF shared object, you might use
246 #define DLMALLOC_EXPORT extern __attribute__((visibility("default")))
247
248 MALLOC_ALIGNMENT default: (size_t)(2 * sizeof(void *))
249 Controls the minimum alignment for malloc'ed chunks. It must be a
250 power of two and at least 8, even on machines for which smaller
251 alignments would suffice. It may be defined as larger than this
252 though. Note however that code and data structures are optimized for
253 the case of 8-byte alignment.
254
255 MSPACES default: 0 (false)
256 If true, compile in support for independent allocation spaces.
257 This is only supported if HAVE_MMAP is true.
258
259 ONLY_MSPACES default: 0 (false)
260 If true, only compile in mspace versions, not regular versions.
261
262 USE_LOCKS default: 0 (false)
263 Causes each call to each public routine to be surrounded with
264 pthread or WIN32 mutex lock/unlock. (If set true, this can be
265 overridden on a per-mspace basis for mspace versions.) If set to a
266 non-zero value other than 1, locks are used, but their
267 implementation is left out, so lock functions must be supplied manually,
268 as described below.
269
270 USE_SPIN_LOCKS default: 1 iff USE_LOCKS and spin locks available
271 If true, uses custom spin locks for locking. This is currently
supported only for gcc >= 4.1, older gccs on x86 platforms, and recent
273 MS compilers. Otherwise, posix locks or win32 critical sections are
274 used.
275
276 USE_RECURSIVE_LOCKS default: not defined
277 If defined nonzero, uses recursive (aka reentrant) locks, otherwise
278 uses plain mutexes. This is not required for malloc proper, but may
279 be needed for layered allocators such as nedmalloc.
280
281 LOCK_AT_FORK default: not defined
282 If defined nonzero, performs pthread_atfork upon initialization
283 to initialize child lock while holding parent lock. The implementation
284 assumes that pthread locks (not custom locks) are being used. In other
285 cases, you may need to customize the implementation.
286
287 FOOTERS default: 0
288 If true, provide extra checking and dispatching by placing
289 information in the footers of allocated chunks. This adds
290 space and time overhead.
291
292 INSECURE default: 0
293 If true, omit checks for usage errors and heap space overwrites.
294
295 USE_DL_PREFIX default: NOT defined
296 Causes compiler to prefix all public routines with the string 'dl'.
297 This can be useful when you only want to use this malloc in one part
298 of a program, using your regular system malloc elsewhere.
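For example (a minimal sketch), compiling with -DUSE_DL_PREFIX lets this
allocator coexist with the system malloc:
  void* p = dlmalloc(100);   // served by this allocator
  void* q = malloc(100);     // served by the system allocator
  dlfree(p);
  free(q);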
299
300 MALLOC_INSPECT_ALL default: NOT defined
301 If defined, compiles malloc_inspect_all and mspace_inspect_all, that
302 perform traversal of all heap space. Unless access to these
303 functions is otherwise restricted, you probably do not want to
304 include them in secure implementations.
305
306 ABORT default: defined as abort()
307 Defines how to abort on failed checks. On most systems, a failed
308 check cannot die with an "assert" or even print an informative
309 message, because the underlying print routines in turn call malloc,
310 which will fail again. Generally, the best policy is to simply call
311 abort(). It's not very useful to do more than this because many
312 errors due to overwriting will show up as address faults (null, odd
313 addresses etc) rather than malloc-triggered checks, so will also
314 abort. Also, most compilers know that abort() does not return, so
315 can better optimize code conditionally calling it.
316
317 PROCEED_ON_ERROR default: defined as 0 (false)
Controls whether detected bad addresses cause them to be bypassed
319 rather than aborting. If set, detected bad arguments to free and
320 realloc are ignored. And all bookkeeping information is zeroed out
321 upon a detected overwrite of freed heap space, thus losing the
322 ability to ever return it from malloc again, but enabling the
323 application to proceed. If PROCEED_ON_ERROR is defined, the
324 static variable malloc_corruption_error_count is compiled in
325 and can be examined to see if errors have occurred. This option
326 generates slower code than the default abort policy.
327
328 DEBUG default: NOT defined
329 The DEBUG setting is mainly intended for people trying to modify
330 this code or diagnose problems when porting to new platforms.
331 However, it may also be able to better isolate user errors than just
332 using runtime checks. The assertions in the check routines spell
333 out in more detail the assumptions and invariants underlying the
334 algorithms. The checking is fairly extensive, and will slow down
335 execution noticeably. Calling malloc_stats or mallinfo with DEBUG
336 set will attempt to check every non-mmapped allocated and free chunk
337 in the course of computing the summaries.
338
339 ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
340 Debugging assertion failures can be nearly impossible if your
341 version of the assert macro causes malloc to be called, which will
342 lead to a cascade of further failures, blowing the runtime stack.
ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
344 which will usually make debugging easier.
345
346 MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
The action to take before "return 0" when malloc fails because no
memory is available.
349
350 HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
351 True if this system supports sbrk or an emulation of it.
352
353 MORECORE default: sbrk
354 The name of the sbrk-style system routine to call to obtain more
355 memory. See below for guidance on writing custom MORECORE
356 functions. The type of the argument to sbrk/MORECORE varies across
357 systems. It cannot be size_t, because it supports negative
358 arguments, so it is normally the signed type of the same width as
359 size_t (sometimes declared as "intptr_t"). It doesn't much matter
360 though. Internally, we only call it with arguments less than half
361 the max value of a size_t, which should work across all reasonable
362 possibilities, although sometimes generating compiler warnings.
363
364 MORECORE_CONTIGUOUS default: 1 (true) if HAVE_MORECORE
365 If true, take advantage of fact that consecutive calls to MORECORE
366 with positive arguments always return contiguous increasing
367 addresses. This is true of unix sbrk. It does not hurt too much to
368 set it true anyway, since malloc copes with non-contiguities.
Setting it false when MORECORE is definitely non-contiguous saves the
time and possibly wasted space it would otherwise take to discover this.
371
372 MORECORE_CANNOT_TRIM default: NOT defined
373 True if MORECORE cannot release space back to the system when given
374 negative arguments. This is generally necessary only if you are
375 using a hand-crafted MORECORE function that cannot handle negative
376 arguments.
377
378 NO_SEGMENT_TRAVERSAL default: 0
379 If non-zero, suppresses traversals of memory segments
380 returned by either MORECORE or CALL_MMAP. This disables
381 merging of segments that are contiguous, and selectively
382 releasing them to the OS if unused, but bounds execution times.
383
384 HAVE_MMAP default: 1 (true)
385 True if this system supports mmap or an emulation of it. If so, and
386 HAVE_MORECORE is not true, MMAP is used for all system
387 allocation. If set and HAVE_MORECORE is true as well, MMAP is
388 primarily used to directly allocate very large blocks. It is also
389 used as a backup strategy in cases where MORECORE fails to provide
390 space from system. Note: A single call to MUNMAP is assumed to be
able to unmap memory that may have been allocated using multiple calls
392 to MMAP, so long as they are adjacent.
393
394 HAVE_MREMAP default: 1 on linux, else 0
395 If true realloc() uses mremap() to re-allocate large blocks and
396 extend or shrink allocation spaces.
397
398 MMAP_CLEARS default: 1 except on WINCE.
399 True if mmap clears memory so calloc doesn't need to. This is true
400 for standard unix mmap using /dev/zero and on WIN32 except for WINCE.
401
402 USE_BUILTIN_FFS default: 0 (i.e., not used)
403 Causes malloc to use the builtin ffs() function to compute indices.
404 Some compilers may recognize and intrinsify ffs to be faster than the
405 supplied C version. Also, the case of x86 using gcc is special-cased
406 to an asm instruction, so is already as fast as it can be, and so
407 this setting has no effect. Similarly for Win32 under recent MS compilers.
408 (On most x86s, the asm version is only slightly faster than the C version.)
409
410 malloc_getpagesize default: derive from system includes, or 4096.
411 The system page size. To the extent possible, this malloc manages
412 memory from the system in page-size units. This may be (and
413 usually is) a function rather than a constant. This is ignored
if WIN32, where page size is determined using GetSystemInfo during
415 initialization.
416
417 USE_DEV_RANDOM default: 0 (i.e., not used)
418 Causes malloc to use /dev/random to initialize secure magic seed for
419 stamping footers. Otherwise, the current time is used.
420
421 NO_MALLINFO default: 0
422 If defined, don't compile "mallinfo". This can be a simple way
423 of dealing with mismatches between system declarations and
424 those in this file.
425
426 MALLINFO_FIELD_TYPE default: size_t
427 The type of the fields in the mallinfo struct. This was originally
428 defined as "int" in SVID etc, but is more usefully defined as
size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
430
431 NO_MALLOC_STATS default: 0
432 If defined, don't compile "malloc_stats". This avoids calls to
433 fprintf and bringing in stdio dependencies you might not want.
434
435 REALLOC_ZERO_BYTES_FREES default: not defined
436 This should be set if a call to realloc with zero bytes should
437 be the same as a call to free. Some people think it should. Otherwise,
since this malloc returns a unique pointer for malloc(0),
realloc(p, 0) also returns a unique pointer.
440
441 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
442 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
443 LACKS_STDLIB_H LACKS_SCHED_H LACKS_TIME_H default: NOT defined unless on WIN32
444 Define these if your system does not have these header files.
445 You might need to manually insert some of the declarations they provide.
446
447 DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
448 system_info.dwAllocationGranularity in WIN32,
449 otherwise 64K.
450 Also settable using mallopt(M_GRANULARITY, x)
451 The unit for allocating and deallocating memory from the system. On
452 most systems with contiguous MORECORE, there is no reason to
453 make this more than a page. However, systems with MMAP tend to
454 either require or encourage larger granularities. You can increase
this value to prevent system allocation functions from being called so
456 often, especially if they are slow. The value must be at least one
457 page and must be a power of two. Setting to 0 causes initialization
458 to either page size or win32 region size. (Note: In previous
459 versions of malloc, the equivalent of this option was called
460 "TOP_PAD")
461
462 DEFAULT_TRIM_THRESHOLD default: 2MB
463 Also settable using mallopt(M_TRIM_THRESHOLD, x)
464 The maximum amount of unused top-most memory to keep before
465 releasing via malloc_trim in free(). Automatic trimming is mainly
466 useful in long-lived programs using contiguous MORECORE. Because
467 trimming via sbrk can be slow on some systems, and can sometimes be
468 wasteful (in cases where programs immediately afterward allocate
469 more large chunks) the value should be high enough so that your
470 overall system performance would improve by releasing this much
471 memory. As a rough guide, you might set to a value close to the
472 average size of a process (program) running on your system.
473 Releasing this much memory would allow such a process to run in
474 memory. Generally, it is worth tuning trim thresholds when a
475 program undergoes phases where several large chunks are allocated
476 and released in ways that can reuse each other's storage, perhaps
477 mixed with phases where there are no such chunks at all. The trim
478 value must be greater than page size to have any useful effect. To
479 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
480 some people use of mallocing a huge space and then freeing it at
481 program startup, in an attempt to reserve system memory, doesn't
482 have the intended effect under automatic trimming, since that memory
483 will immediately be returned to the system.
484
485 DEFAULT_MMAP_THRESHOLD default: 256K
486 Also settable using mallopt(M_MMAP_THRESHOLD, x)
487 The request size threshold for using MMAP to directly service a
488 request. Requests of at least this size that cannot be allocated
489 using already-existing space will be serviced via mmap. (If enough
490 normal freed space already exists it is used instead.) Using mmap
491 segregates relatively large chunks of memory so that they can be
492 individually obtained and released from the host system. A request
493 serviced through mmap is never reused by any other request (at least
494 not directly; the system may just so happen to remap successive
495 requests to the same locations). Segregating space in this way has
496 the benefits that: Mmapped space can always be individually released
497 back to the system, which helps keep the system level memory demands
498 of a long-lived program low. Also, mapped memory doesn't become
499 `locked' between other chunks, as can happen with normally allocated
500 chunks, which means that even trimming via malloc_trim would not
501 release them. However, it has the disadvantage that the space
502 cannot be reclaimed, consolidated, and then used to service later
503 requests, as happens with normal chunks. The advantages of mmap
504 nearly always outweigh disadvantages for "large" chunks, but the
505 value of "large" may vary across systems. The default is an
506 empirically derived value that works well in most systems. You can
507 disable mmap by setting to MAX_SIZE_T.
508
509 MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
510 The number of consolidated frees between checks to release
511 unused segments when freeing. When using non-contiguous segments,
512 especially with multiple mspaces, checking only for topmost space
513 doesn't always suffice to trigger trimming. To compensate for this,
514 free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
515 current number of segments, if greater) try to release unused
516 segments to the OS when freeing chunks that result in
517 consolidation. The best value for this parameter is a compromise
518 between slowing down frees with relatively costly checks that
519 rarely trigger versus holding on to unused memory. To effectively
520 disable, set to MAX_SIZE_T. This may lead to a very slight speed
521 improvement at the expense of carrying around more memory.
522 */
523
524 /* Version identifier to allow people to support multiple versions */
525 #ifndef DLMALLOC_VERSION
526 #define DLMALLOC_VERSION 20806
527 #endif /* DLMALLOC_VERSION */
528
529 #ifndef DLMALLOC_EXPORT
530 #define DLMALLOC_EXPORT extern __attribute__((__weak__))
531 #endif
532
533 #include <lk/compiler.h>
534
535 #ifndef WIN32
536 #ifdef _WIN32
537 #define WIN32 1
538 #endif /* _WIN32 */
539 #ifdef _WIN32_WCE
540 #define LACKS_FCNTL_H
541 #define WIN32 1
542 #endif /* _WIN32_WCE */
543 #endif /* WIN32 */
544 #ifdef WIN32
545 #define WIN32_LEAN_AND_MEAN
546 #include <windows.h>
547 #include <tchar.h>
548 #define HAVE_MMAP 1
549 #define HAVE_MORECORE 0
550 #define LACKS_UNISTD_H
551 #define LACKS_SYS_PARAM_H
552 #define LACKS_SYS_MMAN_H
553 #define LACKS_STRING_H
554 #define LACKS_STRINGS_H
555 #define LACKS_SYS_TYPES_H
556 #define LACKS_ERRNO_H
557 #define LACKS_SCHED_H
558 #ifndef MALLOC_FAILURE_ACTION
559 #define MALLOC_FAILURE_ACTION
560 #endif /* MALLOC_FAILURE_ACTION */
561 #ifndef MMAP_CLEARS
562 #ifdef _WIN32_WCE /* WINCE reportedly does not clear */
563 #define MMAP_CLEARS 0
564 #else
565 #define MMAP_CLEARS 1
566 #endif /* _WIN32_WCE */
567 #endif /*MMAP_CLEARS */
568 #endif /* WIN32 */
569
570 #if defined(DARWIN) || defined(_DARWIN)
571 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
572 #ifndef HAVE_MORECORE
573 #define HAVE_MORECORE 0
574 #define HAVE_MMAP 1
575 /* OSX allocators provide 16 byte alignment */
576 #ifndef MALLOC_ALIGNMENT
577 #define MALLOC_ALIGNMENT ((size_t)16U)
578 #endif
579 #endif /* HAVE_MORECORE */
580 #endif /* DARWIN */
581
582 #ifndef LACKS_SYS_TYPES_H
583 #include <sys/types.h> /* For size_t */
584 #endif /* LACKS_SYS_TYPES_H */
585
586 /* The maximum possible size_t value has all bits set */
587 #define MAX_SIZE_T (~(size_t)0)
588
589 #ifndef USE_LOCKS /* ensure true if spin or recursive locks set */
590 #if ((defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0) || \
591 (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0))
592 #define USE_LOCKS 1
593 #else
594 #define USE_LOCKS 0
595 #endif
596 #endif /* USE_LOCKS */
597
598 #if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */
599 #if ((defined(__GNUC__) && \
600 ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) || \
601 defined(__i386__) || defined(__x86_64__))) || \
602 (defined(_MSC_VER) && _MSC_VER>=1310))
603 #ifndef USE_SPIN_LOCKS
604 #define USE_SPIN_LOCKS 1
605 #endif /* USE_SPIN_LOCKS */
606 #elif USE_SPIN_LOCKS
607 #error "USE_SPIN_LOCKS defined without implementation"
608 #endif /* ... locks available... */
609 #elif !defined(USE_SPIN_LOCKS)
610 #define USE_SPIN_LOCKS 0
611 #endif /* USE_LOCKS */
612
613 #ifndef ONLY_MSPACES
614 #define ONLY_MSPACES 0
615 #endif /* ONLY_MSPACES */
616 #ifndef MSPACES
617 #if ONLY_MSPACES
618 #define MSPACES 1
619 #else /* ONLY_MSPACES */
620 #define MSPACES 0
621 #endif /* ONLY_MSPACES */
622 #endif /* MSPACES */
623 #ifndef MALLOC_ALIGNMENT
624 #define MALLOC_ALIGNMENT ((size_t)(2 * sizeof(void *)))
625 #endif /* MALLOC_ALIGNMENT */
626 #ifndef FOOTERS
627 #define FOOTERS 0
628 #endif /* FOOTERS */
629 #ifndef ABORT
630 #define ABORT abort()
631 #endif /* ABORT */
632 #ifndef ABORT_ON_ASSERT_FAILURE
633 #define ABORT_ON_ASSERT_FAILURE 1
634 #endif /* ABORT_ON_ASSERT_FAILURE */
635 #ifndef PROCEED_ON_ERROR
636 #define PROCEED_ON_ERROR 0
637 #endif /* PROCEED_ON_ERROR */
638
639 #ifndef INSECURE
640 #define INSECURE 0
641 #endif /* INSECURE */
642 #ifndef MALLOC_INSPECT_ALL
643 #define MALLOC_INSPECT_ALL 0
644 #endif /* MALLOC_INSPECT_ALL */
645 #ifndef HAVE_MMAP
646 #define HAVE_MMAP 1
647 #endif /* HAVE_MMAP */
648 #ifndef MMAP_CLEARS
649 #define MMAP_CLEARS 1
650 #endif /* MMAP_CLEARS */
651 #ifndef HAVE_MREMAP
652 #ifdef linux
653 #define HAVE_MREMAP 1
654 #define _GNU_SOURCE /* Turns on mremap() definition */
655 #else /* linux */
656 #define HAVE_MREMAP 0
657 #endif /* linux */
658 #endif /* HAVE_MREMAP */
659 #ifndef MALLOC_FAILURE_ACTION
660 #define MALLOC_FAILURE_ACTION errno = ENOMEM;
661 #endif /* MALLOC_FAILURE_ACTION */
662 #ifndef HAVE_MORECORE
663 #if ONLY_MSPACES
664 #define HAVE_MORECORE 0
665 #else /* ONLY_MSPACES */
666 #define HAVE_MORECORE 1
667 #endif /* ONLY_MSPACES */
668 #endif /* HAVE_MORECORE */
669 #if !HAVE_MORECORE
670 #define MORECORE_CONTIGUOUS 0
671 #else /* !HAVE_MORECORE */
672 #define MORECORE_DEFAULT sbrk
673 #ifndef MORECORE_CONTIGUOUS
674 #define MORECORE_CONTIGUOUS 1
675 #endif /* MORECORE_CONTIGUOUS */
676 #endif /* HAVE_MORECORE */
677 #ifndef DEFAULT_GRANULARITY
678 #if (MORECORE_CONTIGUOUS || defined(WIN32))
679 #define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
680 #else /* MORECORE_CONTIGUOUS */
681 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
682 #endif /* MORECORE_CONTIGUOUS */
683 #endif /* DEFAULT_GRANULARITY */
684 #ifndef DEFAULT_TRIM_THRESHOLD
685 #ifndef MORECORE_CANNOT_TRIM
686 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
687 #else /* MORECORE_CANNOT_TRIM */
688 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
689 #endif /* MORECORE_CANNOT_TRIM */
690 #endif /* DEFAULT_TRIM_THRESHOLD */
691 #ifndef DEFAULT_MMAP_THRESHOLD
692 #if HAVE_MMAP
693 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
694 #else /* HAVE_MMAP */
695 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
696 #endif /* HAVE_MMAP */
697 #endif /* DEFAULT_MMAP_THRESHOLD */
698 #ifndef MAX_RELEASE_CHECK_RATE
699 #if HAVE_MMAP
700 #define MAX_RELEASE_CHECK_RATE 4095
701 #else
702 #define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
703 #endif /* HAVE_MMAP */
704 #endif /* MAX_RELEASE_CHECK_RATE */
705 #ifndef USE_BUILTIN_FFS
706 #define USE_BUILTIN_FFS 0
707 #endif /* USE_BUILTIN_FFS */
708 #ifndef USE_DEV_RANDOM
709 #define USE_DEV_RANDOM 0
710 #endif /* USE_DEV_RANDOM */
711 #ifndef NO_MALLINFO
712 #define NO_MALLINFO 0
713 #endif /* NO_MALLINFO */
714 #ifndef MALLINFO_FIELD_TYPE
715 #define MALLINFO_FIELD_TYPE size_t
716 #endif /* MALLINFO_FIELD_TYPE */
717 #ifndef NO_MALLOC_STATS
718 #define NO_MALLOC_STATS 0
719 #endif /* NO_MALLOC_STATS */
720 #ifndef NO_SEGMENT_TRAVERSAL
721 #define NO_SEGMENT_TRAVERSAL 0
722 #endif /* NO_SEGMENT_TRAVERSAL */
723
724 /*
725 mallopt tuning options. SVID/XPG defines four standard parameter
726 numbers for mallopt, normally defined in malloc.h. None of these
727 are used in this malloc, so setting them has no effect. But this
728 malloc does support the following options.
729 */
730
731 #define M_TRIM_THRESHOLD (-1)
732 #define M_GRANULARITY (-2)
733 #define M_MMAP_THRESHOLD (-3)
734
735 /* ------------------------ Mallinfo declarations ------------------------ */
736
737 #if !NO_MALLINFO
738 /*
739 This version of malloc supports the standard SVID/XPG mallinfo
740 routine that returns a struct containing usage properties and
741 statistics. It should work on any system that has a
742 /usr/include/malloc.h defining struct mallinfo. The main
743 declaration needed is the mallinfo struct that is returned (by-copy)
by mallinfo(). The mallinfo struct contains a bunch of fields that
are not even meaningful in this version of malloc. These fields
are instead filled by mallinfo() with other numbers that might be of
747 interest.
748
749 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
750 /usr/include/malloc.h file that includes a declaration of struct
751 mallinfo. If so, it is included; else a compliant version is
752 declared below. These must be precisely the same for mallinfo() to
753 work. The original SVID version of this struct, defined on most
754 systems with mallinfo, declares all fields as ints. But some others
755 define as unsigned long. If your system defines the fields using a
756 type of different width than listed here, you MUST #include your
757 system version and #define HAVE_USR_INCLUDE_MALLOC_H.
758 */
759
760 /* #define HAVE_USR_INCLUDE_MALLOC_H */
761
762 #ifdef HAVE_USR_INCLUDE_MALLOC_H
763 #include "/usr/include/malloc.h"
764 #else /* HAVE_USR_INCLUDE_MALLOC_H */
765 #ifndef STRUCT_MALLINFO_DECLARED
766 /* HP-UX (and others?) redefines mallinfo unless _STRUCT_MALLINFO is defined */
767 #define _STRUCT_MALLINFO
768 #define STRUCT_MALLINFO_DECLARED 1
769 struct mallinfo {
770 MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
771 MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
772 MALLINFO_FIELD_TYPE smblks; /* always 0 */
773 MALLINFO_FIELD_TYPE hblks; /* always 0 */
774 MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
775 MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
776 MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
777 MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
778 MALLINFO_FIELD_TYPE fordblks; /* total free space */
779 MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
780 };
781 #endif /* STRUCT_MALLINFO_DECLARED */
782 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
783 #endif /* NO_MALLINFO */
784
785 /*
786 Try to persuade compilers to inline. The most critical functions for
787 inlining are defined as macros, so these aren't used for them.
788 */
789
790 #ifndef FORCEINLINE
791 #if defined(__GNUC__)
792 #define FORCEINLINE __inline __attribute__ ((always_inline))
793 #elif defined(_MSC_VER)
794 #define FORCEINLINE __forceinline
795 #endif
796 #endif
797 #ifndef NOINLINE
798 #if defined(__GNUC__)
799 #define NOINLINE __attribute__ ((noinline))
800 #elif defined(_MSC_VER)
801 #define NOINLINE __declspec(noinline)
802 #else
803 #define NOINLINE
804 #endif
805 #endif
806
807 #ifdef __cplusplus
808 extern "C" {
809 #ifndef FORCEINLINE
810 #define FORCEINLINE inline
811 #endif
812 #endif /* __cplusplus */
813 #ifndef FORCEINLINE
814 #define FORCEINLINE
815 #endif
816
817 #if !ONLY_MSPACES
818
819 /* ------------------- Declarations of public routines ------------------- */
820
821 #ifndef USE_DL_PREFIX
822 #define dlcalloc calloc
823 #define dlfree free
824 #define dlmalloc malloc
825 #define dlmemalign memalign
826 #define dlposix_memalign posix_memalign
827 #define dlrealloc realloc
828 #define dlrealloc_in_place realloc_in_place
829 #define dlvalloc valloc
830 #define dlpvalloc pvalloc
831 #define dlmallinfo mallinfo
832 #define dlmallopt mallopt
833 #define dlmalloc_trim malloc_trim
834 #define dlmalloc_stats malloc_stats
835 #define dlmalloc_usable_size malloc_usable_size
836 #define dlmalloc_footprint malloc_footprint
837 #define dlmalloc_max_footprint malloc_max_footprint
838 #define dlmalloc_footprint_limit malloc_footprint_limit
839 #define dlmalloc_set_footprint_limit malloc_set_footprint_limit
840 #define dlmalloc_inspect_all malloc_inspect_all
841 #define dlindependent_calloc independent_calloc
842 #define dlindependent_comalloc independent_comalloc
843 #define dlbulk_free bulk_free
844 #endif /* USE_DL_PREFIX */
845
846 /*
847 malloc(size_t n)
848 Returns a pointer to a newly allocated chunk of at least n bytes, or
849 null if no space is available, in which case errno is set to ENOMEM
850 on ANSI C systems.
851
852 If n is zero, malloc returns a minimum-sized chunk. (The minimum
853 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
854 systems.) Note that size_t is an unsigned type, so calls with
855 arguments that would be negative if signed are interpreted as
856 requests for huge amounts of space, which will often fail. The
857 maximum supported value of n differs across systems, but is in all
858 cases less than the maximum representable value of a size_t.
859 */
860 DLMALLOC_EXPORT void* dlmalloc(size_t);
861
862 /*
863 free(void* p)
864 Releases the chunk of memory pointed to by p, that had been previously
865 allocated using malloc or a related routine such as realloc.
866 It has no effect if p is null. If p was not malloced or already
867 freed, free(p) will by default cause the current program to abort.
868 */
869 DLMALLOC_EXPORT void dlfree(void*);
870
871 /*
872 calloc(size_t n_elements, size_t element_size);
873 Returns a pointer to n_elements * element_size bytes, with all locations
874 set to zero.
875 */
876 DLMALLOC_EXPORT void* dlcalloc(size_t, size_t);
877
878 /*
879 realloc(void* p, size_t n)
880 Returns a pointer to a chunk of size n that contains the same data
881 as does chunk p up to the minimum of (n, p's size) bytes, or null
882 if no space is available.
883
884 The returned pointer may or may not be the same as p. The algorithm
885 prefers extending p in most cases when possible, otherwise it
886 employs the equivalent of a malloc-copy-free sequence.
887
888 If p is null, realloc is equivalent to malloc.
889
890 If space is not available, realloc returns null, errno is set (if on
891 ANSI) and p is NOT freed.
892
If n is for fewer bytes than already held by p, the newly unused
894 space is lopped off and freed if possible. realloc with a size
895 argument of zero (re)allocates a minimum-sized chunk.
896
897 The old unix realloc convention of allowing the last-free'd chunk
898 to be used as an argument to realloc is not supported.
899 */
900 DLMALLOC_EXPORT void* dlrealloc(void*, size_t);
901
902 /*
903 realloc_in_place(void* p, size_t n)
904 Resizes the space allocated for p to size n, only if this can be
905 done without moving p (i.e., only if there is adjacent space
906 available if n is greater than p's current allocated size, or n is
907 less than or equal to p's size). This may be used instead of plain
908 realloc if an alternative allocation strategy is needed upon failure
909 to expand space; for example, reallocation of a buffer that must be
910 memory-aligned or cleared. You can use realloc_in_place to trigger
911 these alternatives only when needed.
912
913 Returns p if successful; otherwise null.
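A sketch of this pattern (buf, new_size, old_size, and the 64-byte
alignment requirement are hypothetical):
  void* p = realloc_in_place(buf, new_size);
  if (p == 0) {                      // could not resize without moving
    p = memalign(64, new_size);      // fall back to an aligned allocation
    if (p != 0) {
      memcpy(p, buf, old_size);
      free(buf);
    }
  }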
914 */
915 DLMALLOC_EXPORT void* dlrealloc_in_place(void*, size_t);
916
917 /*
918 memalign(size_t alignment, size_t n);
919 Returns a pointer to a newly allocated chunk of n bytes, aligned
920 in accord with the alignment argument.
921
922 The alignment argument should be a power of two. If the argument is
923 not a power of two, the nearest greater power is used.
924 8-byte alignment is guaranteed by normal malloc calls, so don't
925 bother calling memalign with an argument of 8 or less.
926
927 Overreliance on memalign is a sure way to fragment space.
928 */
929 DLMALLOC_EXPORT void* dlmemalign(size_t, size_t);
930
931 /*
932 int posix_memalign(void** pp, size_t alignment, size_t n);
933 Allocates a chunk of n bytes, aligned in accord with the alignment
934 argument. Differs from memalign only in that it (1) assigns the
935 allocated memory to *pp rather than returning it, (2) fails and
936 returns EINVAL if the alignment is not a power of two (3) fails and
937 returns ENOMEM if memory cannot be allocated.
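For example (handle_error is a hypothetical function):
  void* p = 0;
  int rc = posix_memalign(&p, 64, 1000);
  if (rc != 0)
    handle_error(rc);   // rc is EINVAL or ENOMEM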
938 */
939 DLMALLOC_EXPORT int dlposix_memalign(void**, size_t, size_t);
940
941 /*
942 valloc(size_t n);
943 Equivalent to memalign(pagesize, n), where pagesize is the page
944 size of the system. If the pagesize is unknown, 4096 is used.
945 */
946 DLMALLOC_EXPORT void* dlvalloc(size_t);
947
948 /*
949 mallopt(int parameter_number, int parameter_value)
Sets tunable parameters. The format is to provide a
951 (parameter-number, parameter-value) pair. mallopt then sets the
952 corresponding parameter to the argument value if it can (i.e., so
953 long as the value is meaningful), and returns 1 if successful else
0. To work around the fact that mallopt is specified to use int,
955 not size_t parameters, the value -1 is specially treated as the
956 maximum unsigned size_t value.
957
958 SVID/XPG/ANSI defines four standard param numbers for mallopt,
normally defined in malloc.h. None of these are used in this malloc,
960 so setting them has no effect. But this malloc also supports other
961 options in mallopt. See below for details. Briefly, supported
962 parameters are as follows (listed defaults are for "typical"
963 configurations).
964
965 Symbol param # default allowed param values
966 M_TRIM_THRESHOLD -1 2*1024*1024 any (-1 disables)
967 M_GRANULARITY -2 page size any power of 2 >= page size
968 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
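For example (the values are purely illustrative):
  mallopt(M_MMAP_THRESHOLD, 1024*1024);  // mmap requests of at least 1MB
  mallopt(M_TRIM_THRESHOLD, -1);         // -1 is treated as MAX_SIZE_T,
                                         // effectively disabling trimming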
969 */
970 DLMALLOC_EXPORT int dlmallopt(int, int);
971
972 /*
973 malloc_footprint();
974 Returns the number of bytes obtained from the system. The total
975 number of bytes allocated by malloc, realloc etc., is less than this
976 value. Unlike mallinfo, this function returns only a precomputed
977 result, so can be called frequently to monitor memory consumption.
978 Even if locks are otherwise defined, this function does not use them,
979 so results might not be up to date.
980 */
981 DLMALLOC_EXPORT size_t dlmalloc_footprint(void);
982
983 /*
984 malloc_max_footprint();
985 Returns the maximum number of bytes obtained from the system. This
986 value will be greater than current footprint if deallocated space
987 has been reclaimed by the system. The peak number of bytes allocated
988 by malloc, realloc etc., is less than this value. Unlike mallinfo,
989 this function returns only a precomputed result, so can be called
990 frequently to monitor memory consumption. Even if locks are
991 otherwise defined, this function does not use them, so results might
992 not be up to date.
993 */
994 DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void);
995
996 /*
997 malloc_footprint_limit();
998 Returns the number of bytes that the heap is allowed to obtain from
999 the system, returning the last value returned by
1000 malloc_set_footprint_limit, or the maximum size_t value if
1001 never set. The returned value reflects a permission. There is no
1002 guarantee that this number of bytes can actually be obtained from
1003 the system.
1004 */
1005 DLMALLOC_EXPORT size_t dlmalloc_footprint_limit(void);
1006
1007 /*
1008 malloc_set_footprint_limit();
1009 Sets the maximum number of bytes to obtain from the system, causing
1010 failure returns from malloc and related functions upon attempts to
1011 exceed this value. The argument value may be subject to page
1012 rounding to an enforceable limit; this actual value is returned.
1013 Using an argument of the maximum possible size_t effectively
1014 disables checks. If the argument is less than or equal to the
1015 current malloc_footprint, then all future allocations that require
1016 additional system memory will fail. However, invocation cannot
1017 retroactively deallocate existing used memory.
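For example, to cap total system memory usage at roughly 16MB:
  size_t actual = malloc_set_footprint_limit((size_t)16 * 1024 * 1024);
  // 'actual' is the possibly page-rounded limit that will be enforced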
1018 */
1019 DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes);
1020
1021 #if MALLOC_INSPECT_ALL
1022 /*
1023 malloc_inspect_all(void(*handler)(void *start,
1024 void *end,
1025 size_t used_bytes,
1026 void* callback_arg),
1027 void* arg);
1028 Traverses the heap and calls the given handler for each managed
1029 region, skipping all bytes that are (or may be) used for bookkeeping
purposes. Traversal does not include chunks that have been
1031 directly memory mapped. Each reported region begins at the start
1032 address, and continues up to but not including the end address. The
1033 first used_bytes of the region contain allocated data. If
1034 used_bytes is zero, the region is unallocated. The handler is
1035 invoked with the given callback argument. If locks are defined, they
1036 are held during the entire traversal. It is a bad idea to invoke
1037 other malloc functions from within the handler.
1038
1039 For example, to count the number of in-use chunks with size greater
1040 than 1000, you could write:
1041 static int count = 0;
1042 void count_chunks(void* start, void* end, size_t used, void* arg) {
if (used > 1000) ++count;
1044 }
1045 then:
1046 malloc_inspect_all(count_chunks, NULL);
1047
1048 malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined.
1049 */
1050 DLMALLOC_EXPORT void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*),
1051 void* arg);
1052
1053 #endif /* MALLOC_INSPECT_ALL */
1054
1055 #if !NO_MALLINFO
1056 /*
1057 mallinfo()
1058 Returns (by copy) a struct containing various summary statistics:
1059
1060 arena: current total non-mmapped bytes allocated from system
1061 ordblks: the number of free chunks
1062 smblks: always zero.
1063 hblks: current number of mmapped regions
1064 hblkhd: total bytes held in mmapped regions
1065 usmblks: the maximum total allocated space. This will be greater
1066 than current total if trimming has occurred.
1067 fsmblks: always zero
1068 uordblks: current total allocated space (normal or mmapped)
1069 fordblks: total free space
1070 keepcost: the maximum number of bytes that could ideally be released
1071 back to system via malloc_trim. ("ideally" means that
1072 it ignores page restrictions etc.)
1073
1074 Because these fields are ints, but internal bookkeeping may
1075 be kept as longs, the reported values may wrap around zero and
1076 thus be inaccurate.
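For example, a minimal usage report (printf requires <stdio.h>):
  struct mallinfo mi = mallinfo();
  printf("allocated: %lu  free: %lu\n",
         (unsigned long)mi.uordblks, (unsigned long)mi.fordblks);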
1077 */
1078 DLMALLOC_EXPORT struct mallinfo dlmallinfo(void);
1079 #endif /* NO_MALLINFO */
1080
1081 /*
1082 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
1083
1084 independent_calloc is similar to calloc, but instead of returning a
1085 single cleared space, it returns an array of pointers to n_elements
1086 independent elements that can hold contents of size elem_size, each
1087 of which starts out cleared, and can be independently freed,
1088 realloc'ed etc. The elements are guaranteed to be adjacently
1089 allocated (this is not guaranteed to occur with multiple callocs or
1090 mallocs), which may also improve cache locality in some
1091 applications.
1092
1093 The "chunks" argument is optional (i.e., may be null, which is
1094 probably the most typical usage). If it is null, the returned array
1095 is itself dynamically allocated and should also be freed when it is
1096 no longer needed. Otherwise, the chunks array must be of at least
1097 n_elements in length. It is filled in with the pointers to the
1098 chunks.
1099
1100 In either case, independent_calloc returns this pointer array, or
1101 null if the allocation failed. If n_elements is zero and "chunks"
1102 is null, it returns a chunk representing an array with zero elements
1103 (which should be freed if not wanted).
1104
1105 Each element must be freed when it is no longer needed. This can be
1106 done all at once using bulk_free.
1107
1108 independent_calloc simplifies and speeds up implementations of many
1109 kinds of pools. It may also be useful when constructing large data
1110 structures that initially have a fixed number of fixed-sized nodes,
1111 but the number is not known at compile time, and some of the nodes
1112 may later need to be freed. For example:
1113
1114 struct Node { int item; struct Node* next; };
1115
1116 struct Node* build_list() {
1117 struct Node** pool;
1118 int n = read_number_of_nodes_needed();
1119 if (n <= 0) return 0;
pool = (struct Node**)independent_calloc(n, sizeof(struct Node), 0);
1121 if (pool == 0) die();
1122 // organize into a linked list...
1123 struct Node* first = pool[0];
for (int i = 0; i < n-1; ++i)
1125 pool[i]->next = pool[i+1];
1126 free(pool); // Can now free the array (or not, if it is needed later)
1127 return first;
1128 }
1129 */
1130 DLMALLOC_EXPORT void** dlindependent_calloc(size_t, size_t, void**);
1131
1132 /*
1133 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
1134
1135 independent_comalloc allocates, all at once, a set of n_elements
1136 chunks with sizes indicated in the "sizes" array. It returns
1137 an array of pointers to these elements, each of which can be
1138 independently freed, realloc'ed etc. The elements are guaranteed to
1139 be adjacently allocated (this is not guaranteed to occur with
1140 multiple callocs or mallocs), which may also improve cache locality
1141 in some applications.
1142
1143 The "chunks" argument is optional (i.e., may be null). If it is null
1144 the returned array is itself dynamically allocated and should also
1145 be freed when it is no longer needed. Otherwise, the chunks array
1146 must be of at least n_elements in length. It is filled in with the
1147 pointers to the chunks.
1148
1149 In either case, independent_comalloc returns this pointer array, or
1150 null if the allocation failed. If n_elements is zero and chunks is
1151 null, it returns a chunk representing an array with zero elements
1152 (which should be freed if not wanted).
1153
1154 Each element must be freed when it is no longer needed. This can be
1155 done all at once using bulk_free.
1156
independent_comalloc differs from independent_calloc in that each
1158 element may have a different size, and also that it does not
1159 automatically clear elements.
1160
1161 independent_comalloc can be used to speed up allocation in cases
1162 where several structs or objects must always be allocated at the
1163 same time. For example:
1164
struct Head { ... };
struct Foot { ... };
1167
1168 void send_message(char* msg) {
1169 int msglen = strlen(msg);
1170 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
1171 void* chunks[3];
1172 if (independent_comalloc(3, sizes, chunks) == 0)
1173 die();
1174 struct Head* head = (struct Head*)(chunks[0]);
1175 char* body = (char*)(chunks[1]);
1176 struct Foot* foot = (struct Foot*)(chunks[2]);
1177 // ...
1178 }
1179
1180 In general though, independent_comalloc is worth using only for
1181 larger values of n_elements. For small values, you probably won't
1182 detect enough difference from series of malloc calls to bother.
1183
1184 Overuse of independent_comalloc can increase overall memory usage,
1185 since it cannot reuse existing noncontiguous small chunks that
1186 might be available for some of the elements.
1187 */
1188 DLMALLOC_EXPORT void** dlindependent_comalloc(size_t, size_t*, void**);
1189
1190 /*
1191 bulk_free(void* array[], size_t n_elements)
1192 Frees and clears (sets to null) each non-null pointer in the given
1193 array. This is likely to be faster than freeing them one-by-one.
1194 If footers are used, pointers that have been allocated in different
1195 mspaces are not freed or cleared, and the count of all such pointers
1196 is returned. For large arrays of pointers with poor locality, it
1197 may be worthwhile to sort this array before calling bulk_free.
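For example, to release a pool produced by independent_calloc (reusing
the struct Node example above):
  void** pool = independent_calloc(n, sizeof(struct Node), 0);
  // ... use pool[0..n-1] ...
  bulk_free(pool, n);   // frees and clears each element pointer
  free(pool);           // the pointer array itself must still be freed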
1198 */
1199 DLMALLOC_EXPORT size_t dlbulk_free(void**, size_t n_elements);
1200
1201 /*
1202 pvalloc(size_t n);
1203 Equivalent to valloc(minimum-page-that-holds(n)), that is,
n is rounded up to the nearest pagesize.
1205 */
1206 DLMALLOC_EXPORT void* dlpvalloc(size_t);
1207
1208 /*
1209 malloc_trim(size_t pad);
1210
1211 If possible, gives memory back to the system (via negative arguments
1212 to sbrk) if there is unused memory at the `high' end of the malloc
1213 pool or in unused MMAP segments. You can call this after freeing
1214 large blocks of memory to potentially reduce the system-level memory
1215 requirements of a program. However, it cannot guarantee to reduce
1216 memory. Under some allocation patterns, some large free blocks of
1217 memory will be locked between two used chunks, so they cannot be
1218 given back to the system.
1219
1220 The `pad' argument to malloc_trim represents the amount of free
1221 trailing space to leave untrimmed. If this argument is zero, only
1222 the minimum amount of memory to maintain internal data structures
1223 will be left. Non-zero arguments can be supplied to maintain enough
1224 trailing space to service future expected allocations without having
1225 to re-obtain memory from the system.
1226
1227 Malloc_trim returns 1 if it actually released any memory, else 0.
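For example (big_buffer is a hypothetical pointer to a large allocation):
  free(big_buffer);
  int released = malloc_trim(0);   // pad of 0: keep only the minimum slack
  // released is 1 if any memory was returned to the system, else 0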
1228 */
1229 DLMALLOC_EXPORT int dlmalloc_trim(size_t);
1230
1231 /*
1232 malloc_stats();
1233 Prints on stderr the amount of space obtained from the system (both
1234 via sbrk and mmap), the maximum amount (which may be more than
1235 current if malloc_trim and/or munmap got called), and the current
1236 number of bytes allocated via malloc (or realloc, etc) but not yet
1237 freed. Note that this is the number of bytes allocated, not the
1238 number requested. It will be larger than the number requested
1239 because of alignment and bookkeeping overhead. Because it includes
1240 alignment wastage as being in use, this figure may be greater than
1241 zero even when no user-level chunks are allocated.
1242
1243 The reported current and maximum system memory can be inaccurate if
1244 a program makes other calls to system memory allocation functions
1245 (normally sbrk) outside of malloc.
1246
1247 malloc_stats prints only the most commonly interesting statistics.
1248 More information can be obtained by calling mallinfo.
1249 */
1250 DLMALLOC_EXPORT void dlmalloc_stats(void);
1251
1252 /*
1253 malloc_usable_size(void* p);
1254
1255 Returns the number of bytes you can actually use in
1256 an allocated chunk, which may be more than you requested (although
1257 often not) due to alignment and minimum size constraints.
1258 You can use this many bytes without worrying about
1259 overwriting other allocated objects. This is not a particularly great
1260 programming practice. malloc_usable_size can be more useful in
1261 debugging and assertions, for example:
1262
1263 p = malloc(n);
1264 assert(malloc_usable_size(p) >= 256);
1265 */
1266 DLMALLOC_EXPORT size_t dlmalloc_usable_size(void*);
1267
1268 #endif /* ONLY_MSPACES */
1269
1270 #if MSPACES
1271
1272 /*
1273 mspace is an opaque type representing an independent
1274 region of space that supports mspace_malloc, etc.
1275 */
1276 typedef void* mspace;
1277
1278 /*
1279 create_mspace creates and returns a new independent space with the
1280 given initial capacity, or, if 0, the default granularity size. It
1281 returns null if there is no system memory available to create the
1282 space. If argument locked is non-zero, the space uses a separate
1283 lock to control access. The capacity of the space will grow
1284 dynamically as needed to service mspace_malloc requests. You can
1285 control the sizes of incremental increases of this space by
1286 compiling with a different DEFAULT_GRANULARITY or dynamically
1287 setting with mallopt(M_GRANULARITY, value).
1288 */
1289 DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked);
1290
1291 /*
1292 destroy_mspace destroys the given space, and attempts to return all
1293 of its memory back to the system, returning the total number of
1294 bytes freed. After destruction, the results of access to all memory
1295 used by the space become undefined.
1296 */
1297 DLMALLOC_EXPORT size_t destroy_mspace(mspace msp);
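
/*
  Illustrative sketch (not part of the original documentation): a typical
  mspace lifecycle. Capacity 0 requests the default granularity; the
  second argument enables a per-space lock.

    mspace msp = create_mspace(0, 1);
    if (msp != 0) {
      void* p = mspace_malloc(msp, 128);
      // ... use p ...
      mspace_free(msp, p);
      size_t released = destroy_mspace(msp);  // bytes returned to system
      (void)released;
    }
*/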
1298
1299 /*
1300 create_mspace_with_base uses the memory supplied as the initial base
1301 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1302 space is used for bookkeeping, so the capacity must be at least this
1303 large. (Otherwise 0 is returned.) When this initial space is
1304 exhausted, additional memory will be obtained from the system.
1305 Destroying this space will deallocate all additionally allocated
1306 space (if possible) but not the initial base.
1307 */
1308 DLMALLOC_EXPORT mspace create_mspace_with_base(void* base, size_t capacity, int locked);
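
/*
  Illustrative sketch (not part of the original documentation): carving an
  mspace out of a caller-supplied buffer. The buffer size is arbitrary but
  must exceed the bookkeeping overhead noted above.

    static char arena[64 * 1024];

    mspace msp = create_mspace_with_base(arena, sizeof(arena), 0);
    if (msp != 0) {
      void* p = mspace_malloc(msp, 256);
      mspace_free(msp, p);
      destroy_mspace(msp);  // releases any extra segments, not the arena
    }
*/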
1309
1310 /*
1311 mspace_track_large_chunks controls whether requests for large chunks
1312 are allocated in their own untracked mmapped regions, separate from
1313 others in this mspace. By default large chunks are not tracked,
1314 which reduces fragmentation. However, such chunks are not
1315 necessarily released to the system upon destroy_mspace. Enabling
1316 tracking by setting to true may increase fragmentation, but avoids
1317 leakage when relying on destroy_mspace to release all memory
1318 allocated using this space. The function returns the previous
1319 setting.
1320 */
1321 DLMALLOC_EXPORT int mspace_track_large_chunks(mspace msp, int enable);
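
/*
  Illustrative sketch (not part of the original documentation): enabling
  large-chunk tracking so that destroy_mspace also reclaims chunks that
  were serviced by mmap. The 4MB request size is arbitrary.

    mspace msp = create_mspace(0, 0);
    int prev = mspace_track_large_chunks(msp, 1);  // returns old setting
    void* big = mspace_malloc(msp, 4u << 20);      // likely mmap-backed
    (void)prev; (void)big;
    destroy_mspace(msp);  // with tracking on, 'big' is released as well
*/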
1322
1323
1324 /*
1325 mspace_malloc behaves as malloc, but operates within
1326 the given space.
1327 */
1328 DLMALLOC_EXPORT void* mspace_malloc(mspace msp, size_t bytes);
1329
1330 /*
1331 mspace_free behaves as free, but operates within
1332 the given space.
1333
1334 If compiled with FOOTERS==1, mspace_free is not actually needed.
1335 free may be called instead of mspace_free because freed chunks from
1336 any space are handled by their originating spaces.
1337 */
1338 DLMALLOC_EXPORT void mspace_free(mspace msp, void* mem);
1339
1340 /*
1341 mspace_realloc behaves as realloc, but operates within
1342 the given space.
1343
1344 If compiled with FOOTERS==1, mspace_realloc is not actually
1345 needed. realloc may be called instead of mspace_realloc because
1346 realloced chunks from any space are handled by their originating
1347 spaces.
1348 */
1349 DLMALLOC_EXPORT void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1350
1351 /*
1352 mspace_calloc behaves as calloc, but operates within
1353 the given space.
1354 */
1355 DLMALLOC_EXPORT void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1356
1357 /*
1358 mspace_memalign behaves as memalign, but operates within
1359 the given space.
1360 */
1361 DLMALLOC_EXPORT void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1362
1363 /*
1364 mspace_independent_calloc behaves as independent_calloc, but
1365 operates within the given space.
1366 */
1367 DLMALLOC_EXPORT void** mspace_independent_calloc(mspace msp, size_t n_elements,
1368 size_t elem_size, void* chunks[]);
1369
1370 /*
1371 mspace_independent_comalloc behaves as independent_comalloc, but
1372 operates within the given space.
1373 */
1374 DLMALLOC_EXPORT void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1375 size_t sizes[], void* chunks[]);
1376
1377 /*
1378 mspace_footprint() returns the number of bytes obtained from the
1379 system for this space.
1380 */
1381 DLMALLOC_EXPORT size_t mspace_footprint(mspace msp);
1382
1383 /*
1384 mspace_max_footprint() returns the peak number of bytes obtained from the
1385 system for this space.
1386 */
1387 DLMALLOC_EXPORT size_t mspace_max_footprint(mspace msp);
1388
1389
1390 #if !NO_MALLINFO
1391 /*
1392 mspace_mallinfo behaves as mallinfo, but reports properties of
1393 the given space.
1394 */
1395 DLMALLOC_EXPORT struct mallinfo mspace_mallinfo(mspace msp);
1396 #endif /* NO_MALLINFO */
1397
1398 /*
1399 mspace_usable_size(void* p) behaves the same as malloc_usable_size.
1400 */
1401 DLMALLOC_EXPORT size_t mspace_usable_size(const void* mem);
1402
1403 /*
1404 mspace_malloc_stats behaves as malloc_stats, but reports
1405 properties of the given space.
1406 */
1407 DLMALLOC_EXPORT void mspace_malloc_stats(mspace msp);
1408
1409 /*
1410 mspace_trim behaves as malloc_trim, but
1411 operates within the given space.
1412 */
1413 DLMALLOC_EXPORT int mspace_trim(mspace msp, size_t pad);
1414
1415 /*
1416 An alias for mallopt.
1417 */
1418 DLMALLOC_EXPORT int mspace_mallopt(int, int);
1419
1420 #endif /* MSPACES */
1421
1422 #ifdef __cplusplus
1423 } /* end of extern "C" */
1424 #endif /* __cplusplus */
1425
1426 /*
1427 ========================================================================
1428 To make a fully customizable malloc.h header file, cut everything
1429 above this line, put into file malloc.h, edit to suit, and #include it
1430 on the next line, as well as in programs that use this malloc.
1431 ========================================================================
1432 */
1433
1434 /* #include "malloc.h" */
1435
1436 /*------------------------------ internal #includes ---------------------- */
1437
1438 #ifdef _MSC_VER
1439 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1440 #endif /* _MSC_VER */
1441 #if !NO_MALLOC_STATS
1442 #include <stdio.h> /* for printing in malloc_stats */
1443 #endif /* NO_MALLOC_STATS */
1444 #ifndef LACKS_ERRNO_H
1445 #include <errno.h> /* for MALLOC_FAILURE_ACTION */
1446 #endif /* LACKS_ERRNO_H */
1447 #ifdef DEBUG
1448 #if ABORT_ON_ASSERT_FAILURE
1449 #undef assert
1450 #define assert(x) if(!(x)) ABORT
1451 #else /* ABORT_ON_ASSERT_FAILURE */
1452 #include <assert.h>
1453 #endif /* ABORT_ON_ASSERT_FAILURE */
1454 #else /* DEBUG */
1455 #ifndef assert
1456 #define assert(x)
1457 #endif
1458 #define DEBUG 0
1459 #endif /* DEBUG */
1460 #if !defined(WIN32) && !defined(LACKS_TIME_H)
1461 #include <time.h> /* for magic initialization */
1462 #endif /* !WIN32 && !LACKS_TIME_H */
1463 #ifndef LACKS_STDLIB_H
1464 #include <stdlib.h> /* for abort() */
1465 #endif /* LACKS_STDLIB_H */
1466 #ifndef LACKS_STRING_H
1467 #include <string.h> /* for memset etc */
1468 #endif /* LACKS_STRING_H */
1469 #if USE_BUILTIN_FFS
1470 #ifndef LACKS_STRINGS_H
1471 #include <strings.h> /* for ffs */
1472 #endif /* LACKS_STRINGS_H */
1473 #endif /* USE_BUILTIN_FFS */
1474 #if HAVE_MMAP
1475 #ifndef LACKS_SYS_MMAN_H
1476 /* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
1477 #if (defined(linux) && !defined(__USE_GNU))
1478 #define __USE_GNU 1
1479 #include <sys/mman.h> /* for mmap */
1480 #undef __USE_GNU
1481 #else
1482 #include <sys/mman.h> /* for mmap */
1483 #endif /* linux */
1484 #endif /* LACKS_SYS_MMAN_H */
1485 #ifndef LACKS_FCNTL_H
1486 #include <fcntl.h>
1487 #endif /* LACKS_FCNTL_H */
1488 #endif /* HAVE_MMAP */
1489 #ifndef LACKS_UNISTD_H
1490 #include <unistd.h> /* for sbrk, sysconf */
1491 #else /* LACKS_UNISTD_H */
1492 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1493 extern void* sbrk(ptrdiff_t);
1494 #endif /* FreeBSD etc */
1495 #endif /* LACKS_UNISTD_H */
1496 #ifdef HWASAN_ENABLED
1497 #include <lib/hwasan/hwasan_shadow.h>
1498 #endif /* HWASAN_ENABLED */
1499
1500 /* Declarations for locking */
1501 #if USE_LOCKS
1502 #ifndef WIN32
1503 #if defined (__SVR4) && defined (__sun) /* solaris */
1504 #include <thread.h>
1505 #elif !defined(LACKS_SCHED_H)
1506 #include <sched.h>
1507 #endif /* solaris or LACKS_SCHED_H */
1508 #if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS
1509 #include <pthread.h>
1510 #endif /* USE_RECURSIVE_LOCKS ... */
1511 #elif defined(_MSC_VER)
1512 #ifndef _M_AMD64
1513 /* These are already defined on AMD64 builds */
1514 #ifdef __cplusplus
1515 extern "C" {
1516 #endif /* __cplusplus */
1517 LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
1518 LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
1519 #ifdef __cplusplus
1520 }
1521 #endif /* __cplusplus */
1522 #endif /* _M_AMD64 */
1523 #pragma intrinsic (_InterlockedCompareExchange)
1524 #pragma intrinsic (_InterlockedExchange)
1525 #define interlockedcompareexchange _InterlockedCompareExchange
1526 #define interlockedexchange _InterlockedExchange
1527 #elif defined(WIN32) && defined(__GNUC__)
1528 #define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)
1529 #define interlockedexchange __sync_lock_test_and_set
1530 #endif /* Win32 */
1531 #else /* USE_LOCKS */
1532 #endif /* USE_LOCKS */
1533
1534 #ifndef LOCK_AT_FORK
1535 #define LOCK_AT_FORK 0
1536 #endif
1537
1538 /* Declarations for bit scanning on win32 */
1539 #if defined(_MSC_VER) && _MSC_VER>=1300
1540 #ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
1541 #ifdef __cplusplus
1542 extern "C" {
1543 #endif /* __cplusplus */
1544 unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
1545 unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
1546 #ifdef __cplusplus
1547 }
1548 #endif /* __cplusplus */
1549
1550 #define BitScanForward _BitScanForward
1551 #define BitScanReverse _BitScanReverse
1552 #pragma intrinsic(_BitScanForward)
1553 #pragma intrinsic(_BitScanReverse)
1554 #endif /* BitScanForward */
1555 #endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
1556
1557 #ifndef WIN32
1558 #ifndef malloc_getpagesize
1559 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1560 # ifndef _SC_PAGE_SIZE
1561 # define _SC_PAGE_SIZE _SC_PAGESIZE
1562 # endif
1563 # endif
1564 # ifdef _SC_PAGE_SIZE
1565 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1566 # else
1567 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1568 extern size_t getpagesize();
1569 # define malloc_getpagesize getpagesize()
1570 # else
1571 # ifdef WIN32 /* use supplied emulation of getpagesize */
1572 # define malloc_getpagesize getpagesize()
1573 # else
1574 # ifndef LACKS_SYS_PARAM_H
1575 # include <sys/param.h>
1576 # endif
1577 # ifdef EXEC_PAGESIZE
1578 # define malloc_getpagesize EXEC_PAGESIZE
1579 # else
1580 # ifdef NBPG
1581 # ifndef CLSIZE
1582 # define malloc_getpagesize NBPG
1583 # else
1584 # define malloc_getpagesize (NBPG * CLSIZE)
1585 # endif
1586 # else
1587 # ifdef NBPC
1588 # define malloc_getpagesize NBPC
1589 # else
1590 # ifdef PAGESIZE
1591 # define malloc_getpagesize PAGESIZE
1592 # else /* just guess */
1593 # define malloc_getpagesize ((size_t)4096U)
1594 # endif
1595 # endif
1596 # endif
1597 # endif
1598 # endif
1599 # endif
1600 # endif
1601 #endif
1602 #endif
1603
1604 /* ------------------- size_t and alignment properties -------------------- */
1605
1606 /* The byte and bit size of a size_t */
1607 #define SIZE_T_SIZE (sizeof(size_t))
1608 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1609
1610 /* Some constants coerced to size_t */
1611 /* Annoying but necessary to avoid errors on some platforms */
1612 #define SIZE_T_ZERO ((size_t)0)
1613 #define SIZE_T_ONE ((size_t)1)
1614 #define SIZE_T_TWO ((size_t)2)
1615 #define SIZE_T_FOUR ((size_t)4)
1616 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1617 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1618 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1619 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1620
1621 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1622 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1623
1624 /* True if address a has acceptable alignment */
1625 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1626
1627 /* the number of bytes to offset an address to align it */
1628 #define align_offset(A)\
1629 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1630 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
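
/*
  Worked example (illustrative, assuming MALLOC_ALIGNMENT == 8, so
  CHUNK_ALIGN_MASK == 7): for A == 0x1003, (size_t)A & 7 == 3, hence
  align_offset(A) == (8 - 3) & 7 == 5 and A + 5 == 0x1008 is aligned.
  For an already-aligned A == 0x1008, align_offset(A) == 0.
*/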
1631
1632 /* -------------------------- MMAP preliminaries ------------------------- */
1633
1634 /*
1635 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1636 checks to fail so compiler optimizer can delete code rather than
1637 using so many "#if"s.
1638 */
1639
1640
1641 /* MORECORE and MMAP must return MFAIL on failure */
1642 #define MFAIL ((void*)(MAX_SIZE_T))
1643 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1644
1645 #if HAVE_MMAP
1646
1647 #ifndef WIN32
1648 #define MUNMAP_DEFAULT(a, s) munmap((a), (s))
1649 #define MMAP_PROT (PROT_READ|PROT_WRITE)
1650 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1651 #define MAP_ANONYMOUS MAP_ANON
1652 #endif /* MAP_ANON */
1653 #ifdef MAP_ANONYMOUS
1654 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1655 #define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1656 #else /* MAP_ANONYMOUS */
1657 /*
1658 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1659 is unlikely to be needed, but is supplied just in case.
1660 */
1661 #define MMAP_FLAGS (MAP_PRIVATE)
1662 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1663 #define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
1664 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1665 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1666 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1667 #endif /* MAP_ANONYMOUS */
1668
1669 #define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
1670
1671 #else /* WIN32 */
1672
1673 /* Win32 MMAP via VirtualAlloc */
1674 static FORCEINLINE void* win32mmap(size_t size) {
1675 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1676 return (ptr != 0)? ptr: MFAIL;
1677 }
1678
1679 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1680 static FORCEINLINE void* win32direct_mmap(size_t size) {
1681 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1682 PAGE_READWRITE);
1683 return (ptr != 0)? ptr: MFAIL;
1684 }
1685
1686 /* This function supports releasing coalesced segments */
1687 static FORCEINLINE int win32munmap(void* ptr, size_t size) {
1688 MEMORY_BASIC_INFORMATION minfo;
1689 char* cptr = (char*)ptr;
1690 while (size) {
1691 if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1692 return -1;
1693 if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1694 minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1695 return -1;
1696 if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1697 return -1;
1698 cptr += minfo.RegionSize;
1699 size -= minfo.RegionSize;
1700 }
1701 return 0;
1702 }
1703
1704 #define MMAP_DEFAULT(s) win32mmap(s)
1705 #define MUNMAP_DEFAULT(a, s) win32munmap((a), (s))
1706 #define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s)
1707 #endif /* WIN32 */
1708 #endif /* HAVE_MMAP */
1709
1710 #if HAVE_MREMAP
1711 #ifndef WIN32
1712 #define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1713 #endif /* WIN32 */
1714 #endif /* HAVE_MREMAP */
1715
1716 /**
1717 * Define CALL_MORECORE
1718 */
1719 #if HAVE_MORECORE
1720 #ifdef MORECORE
1721 #define CALL_MORECORE(S) MORECORE(S)
1722 #else /* MORECORE */
1723 #define CALL_MORECORE(S) MORECORE_DEFAULT(S)
1724 #endif /* MORECORE */
1725 #else /* HAVE_MORECORE */
1726 #define CALL_MORECORE(S) MFAIL
1727 #endif /* HAVE_MORECORE */
1728
1729 /**
1730 * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
1731 */
1732 #if HAVE_MMAP
1733 #define USE_MMAP_BIT (SIZE_T_ONE)
1734
1735 #ifdef MMAP
1736 #define CALL_MMAP(s) MMAP(s)
1737 #else /* MMAP */
1738 #define CALL_MMAP(s) MMAP_DEFAULT(s)
1739 #endif /* MMAP */
1740 #ifdef MUNMAP
1741 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1742 #else /* MUNMAP */
1743 #define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s))
1744 #endif /* MUNMAP */
1745 #ifdef DIRECT_MMAP
1746 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1747 #else /* DIRECT_MMAP */
1748 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
1749 #endif /* DIRECT_MMAP */
1750 #else /* HAVE_MMAP */
1751 #define USE_MMAP_BIT (SIZE_T_ZERO)
1752
1753 #define MMAP(s) MFAIL
1754 #define MUNMAP(a, s) (-1)
1755 #define DIRECT_MMAP(s) MFAIL
1756 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1757 #define CALL_MMAP(s) MMAP(s)
1758 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1759 #endif /* HAVE_MMAP */
1760
1761 /**
1762 * Define CALL_MREMAP
1763 */
1764 #if HAVE_MMAP && HAVE_MREMAP
1765 #ifdef MREMAP
1766 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
1767 #else /* MREMAP */
1768 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
1769 #endif /* MREMAP */
1770 #else /* HAVE_MMAP && HAVE_MREMAP */
1771 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1772 #endif /* HAVE_MMAP && HAVE_MREMAP */
1773
1774 /* mstate bit set if contiguous morecore disabled or failed */
1775 #define USE_NONCONTIGUOUS_BIT (4U)
1776
1777 /* segment bit set in create_mspace_with_base */
1778 #define EXTERN_BIT (8U)
1779
1780
1781 /* --------------------------- Lock preliminaries ------------------------ */
1782
1783 /*
1784 When locks are defined, there is one global lock, plus
1785 one per-mspace lock.
1786
1787 The global lock ensures that mparams.magic and other unique
1788 mparams values are initialized only once. It also protects
1789 sequences of calls to MORECORE. In many cases sys_alloc requires
1790 two calls that should not be interleaved with calls by other
1791 threads. This does not protect against direct calls to MORECORE
1792 by other threads not using this lock, so there is still code to
1793 cope as best we can with interference.
1794
1795 Per-mspace locks surround calls to malloc, free, etc.
1796 By default, locks are simple non-reentrant mutexes.
1797
1798 Because lock-protected regions generally have bounded times, it is
1799 OK to use the supplied simple spinlocks. Spinlocks are likely to
1800 improve performance for lightly contended applications, but worsen
1801 performance under heavy contention.
1802
1803 If USE_LOCKS is > 1, the definitions of lock routines here are
1804 bypassed, in which case you will need to define the type MLOCK_T,
1805 and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK
1806 and TRY_LOCK. You must also declare a
1807 static MLOCK_T malloc_global_mutex = { initialization values };.
1808
1809 */
1810
1811 #if !USE_LOCKS
1812 #define USE_LOCK_BIT (0U)
1813 #define INITIAL_LOCK(l) (0)
1814 #define DESTROY_LOCK(l) (0)
1815 #define ACQUIRE_MALLOC_GLOBAL_LOCK()
1816 #define RELEASE_MALLOC_GLOBAL_LOCK()
1817
1818 #else
1819 #if USE_LOCKS > 1
1820 /* ----------------------- User-defined locks ------------------------ */
1821 /* Define your own lock implementation here */
1822 /* #define INITIAL_LOCK(lk) ... */
1823 /* #define DESTROY_LOCK(lk) ... */
1824 /* #define ACQUIRE_LOCK(lk) ... */
1825 /* #define RELEASE_LOCK(lk) ... */
1826 /* #define TRY_LOCK(lk) ... */
1827 /* static MLOCK_T malloc_global_mutex = ... */
1828
1829 #elif USE_SPIN_LOCKS
1830
1831 /* First, define CAS_LOCK and CLEAR_LOCK on ints */
1832 /* Note CAS_LOCK defined to return 0 on success */
1833
1834 #if defined(__GNUC__)&& (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
1835 #define CAS_LOCK(sl) __sync_lock_test_and_set(sl, 1)
1836 #define CLEAR_LOCK(sl) __sync_lock_release(sl)
1837
1838 #elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)))
1839 /* Custom spin locks for older gcc on x86 */
1840 static FORCEINLINE int x86_cas_lock(int *sl) {
1841 int ret;
1842 int val = 1;
1843 int cmp = 0;
1844 __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
1845 : "=a" (ret)
1846 : "r" (val), "m" (*(sl)), "0"(cmp)
1847 : "memory", "cc");
1848 return ret;
1849 }
1850
1851 static FORCEINLINE void x86_clear_lock(int* sl) {
1852 assert(*sl != 0);
1853 int prev = 0;
1854 int ret;
1855 __asm__ __volatile__ ("lock; xchgl %0, %1"
1856 : "=r" (ret)
1857 : "m" (*(sl)), "0"(prev)
1858 : "memory");
1859 }
1860
1861 #define CAS_LOCK(sl) x86_cas_lock(sl)
1862 #define CLEAR_LOCK(sl) x86_clear_lock(sl)
1863
1864 #else /* Win32 MSC */
1865 #define CAS_LOCK(sl) interlockedexchange(sl, (LONG)1)
1866 #define CLEAR_LOCK(sl) interlockedexchange (sl, (LONG)0)
1867
1868 #endif /* ... gcc spin locks ... */
1869
1870 /* How to yield for a spin lock */
1871 #define SPINS_PER_YIELD 63
1872 #if defined(_MSC_VER)
1873 #define SLEEP_EX_DURATION 50 /* delay for yield/sleep */
1874 #define SPIN_LOCK_YIELD SleepEx(SLEEP_EX_DURATION, FALSE)
1875 #elif defined (__SVR4) && defined (__sun) /* solaris */
1876 #define SPIN_LOCK_YIELD thr_yield();
1877 #elif !defined(LACKS_SCHED_H)
1878 #define SPIN_LOCK_YIELD sched_yield();
1879 #else
1880 #define SPIN_LOCK_YIELD
1881 #endif /* ... yield ... */
1882
1883 #if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 0
1884 /* Plain spin locks use single word (embedded in malloc_states) */
1885 static int spin_acquire_lock(int *sl) {
1886 int spins = 0;
1887 while (*(volatile int *)sl != 0 || CAS_LOCK(sl)) {
1888 if ((++spins & SPINS_PER_YIELD) == 0) {
1889 SPIN_LOCK_YIELD;
1890 }
1891 }
1892 return 0;
1893 }
1894
1895 #define MLOCK_T int
1896 #define TRY_LOCK(sl) !CAS_LOCK(sl)
1897 #define RELEASE_LOCK(sl) CLEAR_LOCK(sl)
1898 #define ACQUIRE_LOCK(sl) (CAS_LOCK(sl)? spin_acquire_lock(sl) : 0)
1899 #define INITIAL_LOCK(sl) (*sl = 0)
1900 #define DESTROY_LOCK(sl) (0)
1901 static MLOCK_T malloc_global_mutex = 0;
1902
1903 #else /* USE_RECURSIVE_LOCKS */
1904 /* types for lock owners */
1905 #ifdef WIN32
1906 #define THREAD_ID_T DWORD
1907 #define CURRENT_THREAD GetCurrentThreadId()
1908 #define EQ_OWNER(X,Y) ((X) == (Y))
1909 #else
1910 /*
1911 Note: the following assume that pthread_t is a type that can be
1912 initialized to (casted) zero. If this is not the case, you will need to
1913 somehow redefine these or not use spin locks.
1914 */
1915 #define THREAD_ID_T pthread_t
1916 #define CURRENT_THREAD pthread_self()
1917 #define EQ_OWNER(X,Y) pthread_equal(X, Y)
1918 #endif
1919
1920 struct malloc_recursive_lock {
1921 int sl;
1922 unsigned int c;
1923 THREAD_ID_T threadid;
1924 };
1925
1926 #define MLOCK_T struct malloc_recursive_lock
1927 static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T)0};
1928
1929 static FORCEINLINE void recursive_release_lock(MLOCK_T *lk) {
1930 assert(lk->sl != 0);
1931 if (--lk->c == 0) {
1932 CLEAR_LOCK(&lk->sl);
1933 }
1934 }
1935
1936 static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk) {
1937 THREAD_ID_T mythreadid = CURRENT_THREAD;
1938 int spins = 0;
1939 for (;;) {
1940 if (*((volatile int *)(&lk->sl)) == 0) {
1941 if (!CAS_LOCK(&lk->sl)) {
1942 lk->threadid = mythreadid;
1943 lk->c = 1;
1944 return 0;
1945 }
1946 }
1947 else if (EQ_OWNER(lk->threadid, mythreadid)) {
1948 ++lk->c;
1949 return 0;
1950 }
1951 if ((++spins & SPINS_PER_YIELD) == 0) {
1952 SPIN_LOCK_YIELD;
1953 }
1954 }
1955 }
1956
1957 static FORCEINLINE int recursive_try_lock(MLOCK_T *lk) {
1958 THREAD_ID_T mythreadid = CURRENT_THREAD;
1959 if (*((volatile int *)(&lk->sl)) == 0) {
1960 if (!CAS_LOCK(&lk->sl)) {
1961 lk->threadid = mythreadid;
1962 lk->c = 1;
1963 return 1;
1964 }
1965 }
1966 else if (EQ_OWNER(lk->threadid, mythreadid)) {
1967 ++lk->c;
1968 return 1;
1969 }
1970 return 0;
1971 }
1972
1973 #define RELEASE_LOCK(lk) recursive_release_lock(lk)
1974 #define TRY_LOCK(lk) recursive_try_lock(lk)
1975 #define ACQUIRE_LOCK(lk) recursive_acquire_lock(lk)
1976 #define INITIAL_LOCK(lk) ((lk)->threadid = (THREAD_ID_T)0, (lk)->sl = 0, (lk)->c = 0)
1977 #define DESTROY_LOCK(lk) (0)
1978 #endif /* USE_RECURSIVE_LOCKS */
1979
1980 #elif defined(WIN32) /* Win32 critical sections */
1981 #define MLOCK_T CRITICAL_SECTION
1982 #define ACQUIRE_LOCK(lk) (EnterCriticalSection(lk), 0)
1983 #define RELEASE_LOCK(lk) LeaveCriticalSection(lk)
1984 #define TRY_LOCK(lk) TryEnterCriticalSection(lk)
1985 #define INITIAL_LOCK(lk) (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000|4000))
1986 #define DESTROY_LOCK(lk) (DeleteCriticalSection(lk), 0)
1987 #define NEED_GLOBAL_LOCK_INIT
1988
1989 static MLOCK_T malloc_global_mutex;
1990 static volatile LONG malloc_global_mutex_status;
1991
1992 /* Use spin loop to initialize global lock */
1993 static void init_malloc_global_mutex() {
1994 for (;;) {
1995 long stat = malloc_global_mutex_status;
1996 if (stat > 0)
1997 return;
1998 /* transition to < 0 while initializing, then to > 0 */
1999 if (stat == 0 &&
2000 interlockedcompareexchange(&malloc_global_mutex_status, (LONG)-1, (LONG)0) == 0) {
2001 InitializeCriticalSection(&malloc_global_mutex);
2002 interlockedexchange(&malloc_global_mutex_status, (LONG)1);
2003 return;
2004 }
2005 SleepEx(0, FALSE);
2006 }
2007 }
2008
2009 #else /* pthreads-based locks */
2010 #define MLOCK_T pthread_mutex_t
2011 #define ACQUIRE_LOCK(lk) pthread_mutex_lock(lk)
2012 #define RELEASE_LOCK(lk) pthread_mutex_unlock(lk)
2013 #define TRY_LOCK(lk) (!pthread_mutex_trylock(lk))
2014 #define INITIAL_LOCK(lk) pthread_init_lock(lk)
2015 #define DESTROY_LOCK(lk) pthread_mutex_destroy(lk)
2016
2017 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE)
2018 /* Cope with old-style linux recursive lock initialization by adding */
2019 /* skipped internal declaration from pthread.h */
2020 extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
2021 int __kind));
2022 #define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
2023 #define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
2024 #endif /* USE_RECURSIVE_LOCKS ... */
2025
2026 static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
2027
2028 static int pthread_init_lock (MLOCK_T *lk) {
2029 pthread_mutexattr_t attr;
2030 if (pthread_mutexattr_init(&attr)) return 1;
2031 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0
2032 if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
2033 #endif
2034 if (pthread_mutex_init(lk, &attr)) return 1;
2035 if (pthread_mutexattr_destroy(&attr)) return 1;
2036 return 0;
2037 }
2038
2039 #endif /* ... lock types ... */
2040
2041 /* Common code for all lock types */
2042 #define USE_LOCK_BIT (2U)
2043
2044 #ifndef ACQUIRE_MALLOC_GLOBAL_LOCK
2045 #define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex);
2046 #endif
2047
2048 #ifndef RELEASE_MALLOC_GLOBAL_LOCK
2049 #define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex);
2050 #endif
2051
2052 #endif /* USE_LOCKS */
2053
2054 /* ----------------------- Chunk representations ------------------------ */
2055
2056 /*
2057 (The following includes lightly edited explanations by Colin Plumb.)
2058
2059 The malloc_chunk declaration below is misleading (but accurate and
2060 necessary). It declares a "view" into memory allowing access to
2061 necessary fields at known offsets from a given base.
2062
2063 Chunks of memory are maintained using a `boundary tag' method as
2064 originally described by Knuth. (See the paper by Paul Wilson
2065 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
2066 techniques.) Sizes of free chunks are stored both in the front of
2067 each chunk and at the end. This makes consolidating fragmented
2068 chunks into bigger chunks fast. The head fields also hold bits
2069 representing whether chunks are free or in use.
2070
2071 Here are some pictures to make it clearer. They are "exploded" to
2072 show that the state of a chunk can be thought of as extending from
2073 the high 31 bits of the head field of its header through the
2074 prev_foot and PINUSE_BIT bit of the following chunk header.
2075
2076 A chunk that's in use looks like:
2077
2078 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2079 | Size of previous chunk (if P = 0) |
2080 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2081 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2082 | Size of this chunk 1| +-+
2083 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2084 | |
2085 +- -+
2086 | |
2087 +- -+
2088 | :
2089 +- size - sizeof(size_t) available payload bytes -+
2090 : |
2091 chunk-> +- -+
2092 | |
2093 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2094 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
2095 | Size of next chunk (may or may not be in use) | +-+
2096 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2097
2098 And if it's free, it looks like this:
2099
2100 chunk-> +- -+
2101 | User payload (must be in use, or we would have merged!) |
2102 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2103 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2104 | Size of this chunk 0| +-+
2105 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2106 | Next pointer |
2107 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2108 | Prev pointer |
2109 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2110 | :
2111 +- size - sizeof(struct chunk) unused bytes -+
2112 : |
2113 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2114 | Size of this chunk |
2115 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2116 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
2117 | Size of next chunk (must be in use, or we would have merged)| +-+
2118 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2119 | :
2120 +- User payload -+
2121 : |
2122 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2123 |0|
2124 +-+
2125 Note that since we always merge adjacent free chunks, the chunks
2126 adjacent to a free chunk must be in use.
2127
2128 Given a pointer to a chunk (which can be derived trivially from the
2129 payload pointer) we can, in O(1) time, find out whether the adjacent
2130 chunks are free, and if so, unlink them from the lists that they
2131 are on and merge them with the current chunk.
2132
2133 Chunks always begin on even word boundaries, so the mem portion
2134 (which is returned to the user) is also on an even word boundary, and
2135 thus at least double-word aligned.
2136
2137 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
2138 chunk size (which is always a multiple of two words), is an in-use
2139 bit for the *previous* chunk. If that bit is *clear*, then the
2140 word before the current chunk size contains the previous chunk
2141 size, and can be used to find the front of the previous chunk.
2142 The very first chunk allocated always has this bit set, preventing
2143 access to non-existent (or non-owned) memory. If pinuse is set for
2144 any given chunk, then you CANNOT determine the size of the
2145 previous chunk, and might even get a memory addressing fault when
2146 trying to do so.
2147
2148 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
2149 the chunk size redundantly records whether the current chunk is
2150 inuse (unless the chunk is mmapped). This redundancy enables usage
2151 checks within free and realloc, and reduces indirection when freeing
2152 and consolidating chunks.
2153
2154 Each freshly allocated chunk must have both cinuse and pinuse set.
2155 That is, each allocated chunk borders either a previously allocated
2156 and still in-use chunk, or the base of its memory arena. This is
2157 ensured by making all allocations from the `lowest' part of any
2158 found chunk. Further, no free chunk physically borders another one,
2159 so each free chunk is known to be preceded and followed by either
2160 inuse chunks or the ends of memory.
2161
2162 Note that the `foot' of the current chunk is actually represented
2163 as the prev_foot of the NEXT chunk. This makes it easier to
2164 deal with alignments etc but can be very confusing when trying
2165 to extend or adapt this code.
2166
2167 The exceptions to all this are
2168
2169 1. The special chunk `top' is the top-most available chunk (i.e.,
2170 the one bordering the end of available memory). It is treated
2171 specially. Top is never included in any bin, is used only if
2172 no other chunk is available, and is released back to the
2173 system if it is very large (see M_TRIM_THRESHOLD). In effect,
2174 the top chunk is treated as larger (and thus less well
2175 fitting) than any other available chunk. The top chunk
2176 doesn't update its trailing size field since there is no next
2177 contiguous chunk that would have to index off it. However,
2178 space is still allocated for it (TOP_FOOT_SIZE) to enable
2179 separation or merging when space is extended.
2180
2181 2. Chunks allocated via mmap have both cinuse and pinuse bits
2182 cleared in their head fields. Because they are allocated
2183 one-by-one, each must carry its own prev_foot field, which is
2184 also used to hold the offset this chunk has within its mmapped
2185 region, which is needed to preserve alignment. Each mmapped
2186 chunk is trailed by the first two fields of a fake next-chunk
2187 for sake of usage checks.
2188
2189 */
2190
2191 struct malloc_chunk {
2192 size_t prev_foot; /* Size of previous chunk (if free). */
2193 size_t head; /* Size and inuse bits. */
2194 struct malloc_chunk* fd; /* double links -- used only if free. */
2195 struct malloc_chunk* bk;
2196 };
2197
2198 typedef struct malloc_chunk mchunk;
2199 typedef struct malloc_chunk* mchunkptr;
2200 typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
2201 typedef unsigned int bindex_t; /* Described below */
2202 typedef unsigned int binmap_t; /* Described below */
2203 typedef unsigned int flag_t; /* The type of various bit flag sets */
2204
2205 /* ------------------- Chunks sizes and alignments ----------------------- */
2206
2207 #define MCHUNK_SIZE (sizeof(mchunk))
2208
2209 #if FOOTERS
2210 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2211 #else /* FOOTERS */
2212 #define CHUNK_OVERHEAD (SIZE_T_SIZE)
2213 #endif /* FOOTERS */
2214
2215 /* MMapped chunks need a second word of overhead ... */
2216 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2217 /* ... and additional padding for fake next-chunk at foot */
2218 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
2219
2220 /* The smallest size we can malloc is an aligned minimal chunk */
2221 #define MIN_CHUNK_SIZE\
2222 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2223
2224 /* conversion from malloc headers to user pointers, and back */
2225 #define _chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
2226 #define _mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
2227 #ifdef HWASAN_ENABLED
2228 /* TODO: sanitize malloc chunks */
2229 #define chunk2mem(p) (void*)hwasan_remove_ptr_tag(_chunk2mem(p))
2230 #define mem2chunk(mem) (mchunkptr)hwasan_remove_ptr_tag(_mem2chunk(mem))
2231 #else
2232 #define chunk2mem(p) _chunk2mem(p)
2233 #define mem2chunk(mem) _mem2chunk(mem)
2234 #endif
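
/*
  Illustrative note: chunk2mem/mem2chunk simply shift between the chunk
  header and the user pointer by two size_t words. With 8-byte size_t, a
  chunk at address C yields the user pointer C + 16, and
  mem2chunk(chunk2mem(C)) == C (modulo the optional HWASAN tag handling).
*/
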
2235 /* chunk associated with aligned address A */
2236 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
2237
2238 /* Bounds on request (not chunk) sizes. */
2239 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
2240 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
2241
2242 /* pad request bytes into a usable size */
2243 #define pad_request(req) \
2244 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2245
2246 /* pad request, checking for minimum (but not maximum) */
2247 #define request2size(req) \
2248 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
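
/*
  Worked example (illustrative, assuming 8-byte size_t, 8-byte alignment
  and FOOTERS == 0, so CHUNK_OVERHEAD == 8 and MIN_CHUNK_SIZE == 32):
    request2size(1)   == 32   // below MIN_REQUEST, promoted to minimum
    request2size(24)  == 32   // 24 + 8, already a multiple of 8
    request2size(100) == 112  // 100 + 8 == 108, rounded up to 112
*/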
2249
2250
2251 /* ------------------ Operations on head and foot fields ----------------- */
2252
2253 /*
2254 The head field of a chunk is or'ed with PINUSE_BIT when previous
2255 adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
2256 use, unless mmapped, in which case both bits are cleared.
2257
2258 FLAG4_BIT is not used by this malloc, but might be useful in extensions.
2259 */
2260
2261 #define PINUSE_BIT (SIZE_T_ONE)
2262 #define CINUSE_BIT (SIZE_T_TWO)
2263 #define FLAG4_BIT (SIZE_T_FOUR)
2264 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
2265 #define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
2266
2267 /* Head value for fenceposts */
2268 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
2269
2270 /* extraction of fields from head words */
2271 #define cinuse(p) ((p)->head & CINUSE_BIT)
2272 #define pinuse(p) ((p)->head & PINUSE_BIT)
2273 #define flag4inuse(p) ((p)->head & FLAG4_BIT)
2274 #define is_inuse(p) (((p)->head & INUSE_BITS) != PINUSE_BIT)
2275 #define is_mmapped(p) (((p)->head & INUSE_BITS) == 0)
2276
2277 #define chunksize(p) ((p)->head & ~(FLAG_BITS))
2278
2279 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
2280 #define set_flag4(p) ((p)->head |= FLAG4_BIT)
2281 #define clear_flag4(p) ((p)->head &= ~FLAG4_BIT)
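
/*
  Worked example (illustrative): a head value of 0x103 encodes a chunk of
  size 0x100 with both PINUSE_BIT and CINUSE_BIT set, so chunksize(p) ==
  0x100, pinuse(p) and cinuse(p) are nonzero, and is_mmapped(p) is false.
  An mmapped chunk of the same size has head == 0x100 (both inuse bits
  clear), making is_mmapped(p) true.
*/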
2282
2283 /* Treat space at ptr +/- offset as a chunk */
2284 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
2285 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
2286
2287 /* Ptr to next or previous physical malloc_chunk. */
2288 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
2289 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
2290
2291 /* extract next chunk's pinuse bit */
2292 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
2293
2294 /* Get/set size at footer */
2295 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
2296 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
2297
2298 /* Set size, pinuse bit, and foot */
2299 #define set_size_and_pinuse_of_free_chunk(p, s)\
2300 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
2301
2302 /* Set size, pinuse bit, foot, and clear next pinuse */
2303 #define set_free_with_pinuse(p, s, n)\
2304 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
2305
2306 /* Get the internal overhead associated with chunk p */
2307 #define overhead_for(p)\
2308 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
2309
2310 /* Return true if malloced space is not necessarily cleared */
2311 #if MMAP_CLEARS
2312 #define calloc_must_clear(p) (!is_mmapped(p))
2313 #else /* MMAP_CLEARS */
2314 #define calloc_must_clear(p) (1)
2315 #endif /* MMAP_CLEARS */
2316
2317 /* ---------------------- Overlaid data structures ----------------------- */
2318
2319 /*
2320 When chunks are not in use, they are treated as nodes of either
2321 lists or trees.
2322
2323 "Small" chunks are stored in circular doubly-linked lists, and look
2324 like this:
2325
2326 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2327 | Size of previous chunk |
2328 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2329 `head:' | Size of chunk, in bytes |P|
2330 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2331 | Forward pointer to next chunk in list |
2332 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2333 | Back pointer to previous chunk in list |
2334 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2335 | Unused space (may be 0 bytes long) .
2336 . .
2337 . |
2338 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2339 `foot:' | Size of chunk, in bytes |
2340 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2341
2342 Larger chunks are kept in a form of bitwise digital trees (aka
2343 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
2344 free chunks greater than 256 bytes, their size doesn't impose any
2345 constraints on user chunk sizes. Each node looks like:
2346
2347 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2348 | Size of previous chunk |
2349 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2350 `head:' | Size of chunk, in bytes |P|
2351 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2352 | Forward pointer to next chunk of same size |
2353 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2354 | Back pointer to previous chunk of same size |
2355 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2356 | Pointer to left child (child[0]) |
2357 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2358 | Pointer to right child (child[1]) |
2359 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2360 | Pointer to parent |
2361 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2362 | bin index of this chunk |
2363 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2364 | Unused space .
2365 . |
2366 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2367 `foot:' | Size of chunk, in bytes |
2368 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2369
2370 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
2371 of the same size are arranged in a circularly-linked list, with only
2372 the oldest chunk (the next to be used, in our FIFO ordering)
2373 actually in the tree. (Tree members are distinguished by a non-null
2374 parent pointer.) If a chunk with the same size as an existing node
2375 is inserted, it is linked off the existing node using pointers that
2376 work in the same way as fd/bk pointers of small chunks.
2377
2378 Each tree contains a power of 2 sized range of chunk sizes (the
2379 smallest is 0x100 <= x < 0x180), which is divided in half at each
2380 tree level, with the chunks in the smaller half of the range (0x100
2381 <= x < 0x140 for the top node) in the left subtree and the larger
2382 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
2383 done by inspecting individual bits.
2384
2385 Using these rules, each node's left subtree contains all smaller
2386 sizes than its right subtree. However, the node at the root of each
2387 subtree has no particular ordering relationship to either. (The
2388 dividing line between the subtree sizes is based on trie relation.)
2389 If we remove the last chunk of a given size from the interior of the
2390 tree, we need to replace it with a leaf node. The tree ordering
2391 rules permit a node to be replaced by any leaf below it.
2392
2393 The smallest chunk in a tree (a common operation in a best-fit
2394 allocator) can be found by walking a path to the leftmost leaf in
2395 the tree. Unlike a usual binary tree, where we follow left child
2396 pointers until we reach a null, here we follow the right child
2397 pointer any time the left one is null, until we reach a leaf with
2398 both child pointers null. The smallest chunk in the tree will be
2399 somewhere along that path.
2400
2401 The worst case number of steps to add, find, or remove a node is
2402 bounded by the number of bits differentiating chunks within
2403 bins. Under current bin calculations, this ranges from 6 up to 21
2404 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
2405 is of course much better.
2406 */
2407
2408 struct malloc_tree_chunk {
2409 /* The first four fields must be compatible with malloc_chunk */
2410 size_t prev_foot;
2411 size_t head;
2412 struct malloc_tree_chunk* fd;
2413 struct malloc_tree_chunk* bk;
2414
2415 struct malloc_tree_chunk* child[2];
2416 struct malloc_tree_chunk* parent;
2417 bindex_t index;
2418 };
2419
2420 typedef struct malloc_tree_chunk tchunk;
2421 typedef struct malloc_tree_chunk* tchunkptr;
2422 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
2423
2424 /* A little helper macro for trees */
2425 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
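
/*
  Illustrative sketch (not part of the original code): the "leftmost leaf"
  walk described above, expressed with leftmost_child. The smallest chunk
  lies somewhere along this path, so real lookups compare candidate sizes
  while descending; this sketch shows only the traversal:

    static tchunkptr walk_to_leftmost_leaf(tchunkptr t) {
      tchunkptr u;
      while ((u = leftmost_child(t)) != 0)
        t = u;      // prefer child[0], fall back to child[1]
      return t;     // a leaf; best-fit candidates were seen on the way
    }
*/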
2426
2427 /* ----------------------------- Segments -------------------------------- */
2428
2429 /*
2430 Each malloc space may include non-contiguous segments, held in a
2431 list headed by an embedded malloc_segment record representing the
2432 top-most space. Segments also include flags holding properties of
2433 the space. Large chunks that are directly allocated by mmap are not
2434 included in this list. They are instead independently created and
2435 destroyed without otherwise keeping track of them.
2436
2437 Segment management mainly comes into play for spaces allocated by
2438 MMAP. Any call to MMAP might or might not return memory that is
2439 adjacent to an existing segment. MORECORE normally contiguously
2440 extends the current space, so this space is almost always adjacent,
2441 which is simpler and faster to deal with. (This is why MORECORE is
2442 used preferentially to MMAP when both are available -- see
2443 sys_alloc.) When allocating using MMAP, we don't use any of the
2444 hinting mechanisms (inconsistently) supported in various
2445 implementations of unix mmap, or distinguish reserving from
2446 committing memory. Instead, we just ask for space, and exploit
2447 contiguity when we get it. It is probably possible to do
2448 better than this on some systems, but no general scheme seems
2449 to be significantly better.
2450
2451 Management entails a simpler variant of the consolidation scheme
2452 used for chunks to reduce fragmentation -- new adjacent memory is
2453 normally prepended or appended to an existing segment. However,
2454 there are limitations compared to chunk consolidation that mostly
2455 reflect the fact that segment processing is relatively infrequent
2456 (occurring only when getting memory from system) and that we
2457 don't expect to have huge numbers of segments:
2458
2459 * Segments are not indexed, so traversal requires linear scans. (It
2460 would be possible to index these, but is not worth the extra
2461 overhead and complexity for most programs on most platforms.)
2462 * New segments are only appended to old ones when holding top-most
2463 memory; if they cannot be prepended to others, they are held in
2464 different segments.
2465
2466 Except for the top-most segment of an mstate, each segment record
2467 is kept at the tail of its segment. Segments are added by pushing
2468 segment records onto the list headed by &mstate.seg for the
2469 containing mstate.
2470
2471 Segment flags control allocation/merge/deallocation policies:
2472 * If EXTERN_BIT set, then we did not allocate this segment,
2473 and so should not try to deallocate or merge with others.
2474 (This currently holds only for the initial segment passed
2475 into create_mspace_with_base.)
2476 * If USE_MMAP_BIT set, the segment may be merged with
2477 other surrounding mmapped segments and trimmed/de-allocated
2478 using munmap.
2479 * If neither bit is set, then the segment was obtained using
2480 MORECORE so can be merged with surrounding MORECORE'd segments
2481 and deallocated/trimmed using MORECORE with negative arguments.
2482 */
2483
2484 struct malloc_segment {
2485 char* base; /* base address */
2486 size_t size; /* allocated size */
2487 struct malloc_segment* next; /* ptr to next segment */
2488 flag_t sflags; /* mmap and extern flag */
2489 };
2490
2491 #define is_mmapped_segment(S) ((S)->sflags & USE_MMAP_BIT)
2492 #define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
2493
2494 typedef struct malloc_segment msegment;
2495 typedef struct malloc_segment* msegmentptr;
2496
2497 /* ---------------------------- malloc_state ----------------------------- */
2498
2499 /*
2500 A malloc_state holds all of the bookkeeping for a space.
2501 The main fields are:
2502
2503 Top
2504 The topmost chunk of the currently active segment. Its size is
2505 cached in topsize. The actual size of topmost space is
2506 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2507 fenceposts and segment records if necessary when getting more
2508 space from the system. The size at which to autotrim top is
2509 cached from mparams in trim_check, except that it is disabled if
2510 an autotrim fails.
2511
2512 Designated victim (dv)
2513 This is the preferred chunk for servicing small requests that
2514 don't have exact fits. It is normally the chunk split off most
2515 recently to service another small request. Its size is cached in
2516 dvsize. The link fields of this chunk are not maintained since it
2517 is not kept in a bin.
2518
2519 SmallBins
2520 An array of bin headers for free chunks. These bins hold chunks
2521 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2522 chunks of all the same size, spaced 8 bytes apart. To simplify
2523 use in double-linked lists, each bin header acts as a malloc_chunk
2524 pointing to the real first node, if it exists (else pointing to
2525 itself). This avoids special-casing for headers. But to avoid
2526 waste, we allocate only the fd/bk pointers of bins, and then use
2527 repositioning tricks to treat these as the fields of a chunk.
2528
2529 TreeBins
2530 Treebins are pointers to the roots of trees holding a range of
2531 sizes. There are 2 equally spaced treebins for each power of two
2532 from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
2533 larger.
2534
2535 Bin maps
2536 There is one bit map for small bins ("smallmap") and one for
2537 treebins ("treemap"). Each bin sets its bit when non-empty, and
2538 clears the bit when empty. Bit operations are then used to avoid
2539 bin-by-bin searching -- nearly all "search" is done without ever
2540 looking at bins that won't be selected. The bit maps
2541 conservatively use 32 bits per map word, even on a 64-bit system.
2542 For a good description of some of the bit-based techniques used
2543 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2544 supplement at http://hackersdelight.org/). Many of these are
2545 intended to reduce the branchiness of paths through malloc etc, as
2546 well as to reduce the number of memory locations read or written.
2547
2548 Segments
2549 A list of segments headed by an embedded malloc_segment record
2550 representing the initial space.
2551
2552 Address check support
2553 The least_addr field is the least address ever obtained from
2554 MORECORE or MMAP. Attempted frees and reallocs of any address less
2555 than this are trapped (unless INSECURE is defined).
2556
2557 Magic tag
2558 A cross-check field that should always hold same value as mparams.magic.
2559
2560 Max allowed footprint
2561 The maximum allowed bytes to allocate from system (zero means no limit)
2562
2563 Flags
2564 Bits recording whether to use MMAP, locks, or contiguous MORECORE
2565
2566 Statistics
2567 Each space keeps track of current and maximum system memory
2568 obtained via MORECORE or MMAP.
2569
2570 Trim support
2571 Fields holding the amount of unused topmost memory that should trigger
2572 trimming, and a counter to force periodic scanning to release unused
2573 non-topmost segments.
2574
2575 Locking
2576 If USE_LOCKS is defined, the "mutex" lock is acquired and released
2577 around every public call using this mspace.
2578
2579 Extension support
2580 A void* pointer and a size_t field that can be used to help implement
2581 extensions to this malloc.
2582 */
2583
2584 /* Bin types, widths and sizes */
2585 #define NSMALLBINS (32U)
2586 #define NTREEBINS (32U)
2587 #define SMALLBIN_SHIFT (3U)
2588 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
2589 #define TREEBIN_SHIFT (8U)
2590 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
2591 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
2592 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2593
2594 struct malloc_state {
2595 binmap_t smallmap;
2596 binmap_t treemap;
2597 size_t dvsize;
2598 size_t topsize;
2599 char* least_addr;
2600 mchunkptr dv;
2601 mchunkptr top;
2602 size_t trim_check;
2603 size_t release_checks;
2604 size_t magic;
2605 mchunkptr smallbins[(NSMALLBINS+1)*2];
2606 tbinptr treebins[NTREEBINS];
2607 size_t footprint;
2608 size_t max_footprint;
2609 size_t footprint_limit; /* zero means no limit */
2610 flag_t mflags;
2611 #if USE_LOCKS
2612 MLOCK_T mutex; /* locate lock among fields that rarely change */
2613 #endif /* USE_LOCKS */
2614 msegment seg;
2615 void* extp; /* Unused but available for extensions */
2616 size_t exts;
2617 };
2618
2619 typedef struct malloc_state* mstate;
2620
2621 /* ------------- Global malloc_state and malloc_params ------------------- */
2622
2623 /*
2624 malloc_params holds global properties, including those that can be
2625 dynamically set using mallopt. There is a single instance, mparams,
2626 initialized in init_mparams. Note that the non-zeroness of "magic"
2627 also serves as an initialization flag.
2628 */
2629
2630 struct malloc_params {
2631 size_t magic;
2632 size_t page_size;
2633 size_t granularity;
2634 size_t mmap_threshold;
2635 size_t trim_threshold;
2636 flag_t default_mflags;
2637 };
2638
2639 static struct malloc_params mparams;
2640
2641 /* Ensure mparams initialized */
2642 #define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())
2643
2644 #if !ONLY_MSPACES
2645
2646 /* The global malloc_state used for all non-"mspace" calls */
2647 static struct malloc_state _gm_;
2648 #define gm (&_gm_)
2649 #define is_global(M) ((M) == &_gm_)
2650
2651 #endif /* !ONLY_MSPACES */
2652
2653 #define is_initialized(M) ((M)->top != 0)
2654
2655 /* -------------------------- system alloc setup ------------------------- */
2656
2657 /* Operations on mflags */
2658
2659 #define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2660 #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2661 #if USE_LOCKS
2662 #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2663 #else
2664 #define disable_lock(M)
2665 #endif
2666
2667 #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2668 #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2669 #if HAVE_MMAP
2670 #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2671 #else
2672 #define disable_mmap(M)
2673 #endif
2674
2675 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2676 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2677
2678 #define set_lock(M,L)\
2679 ((M)->mflags = (L)?\
2680 ((M)->mflags | USE_LOCK_BIT) :\
2681 ((M)->mflags & ~USE_LOCK_BIT))
2682
2683 /* page-align a size */
2684 #define page_align(S)\
2685 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
2686
2687 /* granularity-align a size */
2688 #define granularity_align(S)\
2689 (((S) + (mparams.granularity - SIZE_T_ONE))\
2690 & ~(mparams.granularity - SIZE_T_ONE))
2691
2692
2693 /* For mmap, use granularity alignment on windows, else page-align */
2694 #ifdef WIN32
2695 #define mmap_align(S) granularity_align(S)
2696 #else
2697 #define mmap_align(S) page_align(S)
2698 #endif
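
/*
  Worked example (illustrative, assuming a 4096-byte page size and a 64KB
  allocation granularity): page_align(5000) == 8192, page_align(4096) ==
  4096, and granularity_align(5000) == 65536. mmap_align picks whichever
  rounding is appropriate for the platform.
*/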
2699
2700 /* For sys_alloc, enough padding to ensure can malloc request on success */
2701 #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
2702
2703 #define is_page_aligned(S)\
2704 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2705 #define is_granularity_aligned(S)\
2706 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2707
2708 /* True if segment S holds address A */
2709 #define segment_holds(S, A)\
2710 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2711
2712 /* Return segment holding given address */
2713 static msegmentptr segment_holding(mstate m, char* addr) {
2714 msegmentptr sp = &m->seg;
2715 for (;;) {
2716 if (addr >= sp->base && addr < sp->base + sp->size)
2717 return sp;
2718 if ((sp = sp->next) == 0)
2719 return 0;
2720 }
2721 }
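/*
  Illustrative sketch (added for exposition): segment_holds() tests a
  single segment, while segment_holding() walks the linked segment list
  and returns the segment covering an address, or 0 if none does.  The
  hypothetical helper below just shows the calling pattern.
*/
#if 0
static int example_address_is_managed(mstate m, void* mem) {
  msegmentptr sp = segment_holding(m, (char*)mem);
  /* if sp is nonzero, segment_holds(sp, mem) is true by construction */
  return (sp != 0);
}
#endif /* 0 - illustrative only */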
2722
2723 /* Return true if segment contains a segment link */
2724 static int has_segment_link(mstate m, msegmentptr ss) {
2725 msegmentptr sp = &m->seg;
2726 for (;;) {
2727 if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2728 return 1;
2729 if ((sp = sp->next) == 0)
2730 return 0;
2731 }
2732 }
2733
2734 #ifndef MORECORE_CANNOT_TRIM
2735 #define should_trim(M,s) ((s) > (M)->trim_check)
2736 #else /* MORECORE_CANNOT_TRIM */
2737 #define should_trim(M,s) (0)
2738 #endif /* MORECORE_CANNOT_TRIM */
2739
2740 /*
2741 TOP_FOOT_SIZE is padding at the end of a segment, including space
2742 that may be needed to place segment records and fenceposts when new
2743 noncontiguous segments are added.
2744 */
2745 #define TOP_FOOT_SIZE\
2746 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2747
2748
2749 /* ------------------------------- Hooks -------------------------------- */
2750
2751 /*
2752 PREACTION should be defined to return 0 on success, and nonzero on
2753 failure. If you are not using locking, you can redefine these to do
2754 anything you like.
2755 */
2756
2757 #if USE_LOCKS
2758 #define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2759 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2760 #else /* USE_LOCKS */
2761
2762 #ifndef PREACTION
2763 #define PREACTION(M) (0)
2764 #endif /* PREACTION */
2765
2766 #ifndef POSTACTION
2767 #define POSTACTION(M)
2768 #endif /* POSTACTION */
2769
2770 #endif /* USE_LOCKS */
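/*
  Illustrative sketch (added for exposition): every public entry point
  brackets its work with these hooks.  With locking enabled, PREACTION
  acquires the per-state mutex and returns 0 on success; POSTACTION
  releases it.  The hypothetical wrapper below shows the canonical
  shape used throughout this file (compare internal_mallinfo below).
*/
#if 0
static size_t example_locked_query(mstate m) {
  size_t result = 0;
  if (!PREACTION(m)) {        /* nonzero means the lock could not be taken */
    result = m->footprint;    /* ... guarded work goes here ... */
    POSTACTION(m);            /* always release before returning */
  }
  return result;
}
#endif /* 0 - illustrative only */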
2771
2772 /*
2773 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2774 USAGE_ERROR_ACTION is triggered on detected bad frees and
2775 reallocs. The argument p is an address that might have triggered the
2776 fault. It is ignored by the two predefined actions, but might be
2777 useful in custom actions that try to help diagnose errors.
2778 */
2779
2780 #if PROCEED_ON_ERROR
2781
2782 /* A count of the number of corruption errors causing resets */
2783 int malloc_corruption_error_count;
2784
2785 /* default corruption action */
2786 static void reset_on_error(mstate m);
2787
2788 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2789 #define USAGE_ERROR_ACTION(m, p)
2790
2791 #else /* PROCEED_ON_ERROR */
2792
2793 #ifndef CORRUPTION_ERROR_ACTION
2794 #define CORRUPTION_ERROR_ACTION(m) ABORT
2795 #endif /* CORRUPTION_ERROR_ACTION */
2796
2797 #ifndef USAGE_ERROR_ACTION
2798 #define USAGE_ERROR_ACTION(m,p) ABORT
2799 #endif /* USAGE_ERROR_ACTION */
2800
2801 #endif /* PROCEED_ON_ERROR */
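/*
  Illustrative sketch (added for exposition): because of the #ifndef
  guards above, both actions can be overridden at build time.  A
  hypothetical embedder preferring logging over ABORT might compile
  with definitions like the following; my_log_heap_error() is an
  assumed, user-supplied routine, not part of this file.
*/
#if 0
extern void my_log_heap_error(const char* what, void* where);  /* hypothetical */
#define CORRUPTION_ERROR_ACTION(m)  my_log_heap_error("corrupt heap state", (m))
#define USAGE_ERROR_ACTION(m, p)    my_log_heap_error("bad free/realloc", (p))
#endif /* 0 - illustrative only */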
2802
2803
2804 /* -------------------------- Debugging setup ---------------------------- */
2805
2806 #if ! DEBUG
2807
2808 #define check_free_chunk(M,P)
2809 #define check_inuse_chunk(M,P)
2810 #define check_malloced_chunk(M,P,N)
2811 #define check_mmapped_chunk(M,P)
2812 #define check_malloc_state(M)
2813 #define check_top_chunk(M,P)
2814
2815 #else /* DEBUG */
2816 #define check_free_chunk(M,P) do_check_free_chunk(M,P)
2817 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2818 #define check_top_chunk(M,P) do_check_top_chunk(M,P)
2819 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2820 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2821 #define check_malloc_state(M) do_check_malloc_state(M)
2822
2823 static void do_check_any_chunk(mstate m, mchunkptr p);
2824 static void do_check_top_chunk(mstate m, mchunkptr p);
2825 static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2826 static void do_check_inuse_chunk(mstate m, mchunkptr p);
2827 static void do_check_free_chunk(mstate m, mchunkptr p);
2828 static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
2829 static void do_check_tree(mstate m, tchunkptr t);
2830 static void do_check_treebin(mstate m, bindex_t i);
2831 static void do_check_smallbin(mstate m, bindex_t i);
2832 static void do_check_malloc_state(mstate m);
2833 static int bin_find(mstate m, mchunkptr x);
2834 static size_t traverse_and_check(mstate m);
2835 #endif /* DEBUG */
2836
2837 /* ---------------------------- Indexing Bins ---------------------------- */
2838
2839 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2840 #define small_index(s) (bindex_t)((s) >> SMALLBIN_SHIFT)
2841 #define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2842 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
2843
2844 /* addressing by index. See above about smallbin repositioning */
2845 #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2846 #define treebin_at(M,i) (&((M)->treebins[i]))
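/*
  Illustrative sketch (added for exposition): with the default
  SMALLBIN_SHIFT of 3, a small chunk's bin index is simply its size
  divided by 8, so consecutive bins hold sizes 8 bytes apart.
*/
#if 0
static void example_smallbin_indexing(void) {
  size_t s = (size_t)40;                   /* a small chunk size */
  assert(is_small(s));
  assert(small_index(s) == (bindex_t)5);   /* 40 >> 3 */
  assert(small_index2size(small_index(s)) == s);
}
#endif /* 0 - illustrative only */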
2847
2848 /* assign tree index for size S to variable I. Use x86 asm if possible */
2849 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2850 #define compute_tree_index(S, I)\
2851 {\
2852 unsigned int X = S >> TREEBIN_SHIFT;\
2853 if (X == 0)\
2854 I = 0;\
2855 else if (X > 0xFFFF)\
2856 I = NTREEBINS-1;\
2857 else {\
2858 unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \
2859 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2860 }\
2861 }
2862
2863 #elif defined (__INTEL_COMPILER)
2864 #define compute_tree_index(S, I)\
2865 {\
2866 size_t X = S >> TREEBIN_SHIFT;\
2867 if (X == 0)\
2868 I = 0;\
2869 else if (X > 0xFFFF)\
2870 I = NTREEBINS-1;\
2871 else {\
2872 unsigned int K = _bit_scan_reverse (X); \
2873 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2874 }\
2875 }
2876
2877 #elif defined(_MSC_VER) && _MSC_VER>=1300
2878 #define compute_tree_index(S, I)\
2879 {\
2880 size_t X = S >> TREEBIN_SHIFT;\
2881 if (X == 0)\
2882 I = 0;\
2883 else if (X > 0xFFFF)\
2884 I = NTREEBINS-1;\
2885 else {\
2886 unsigned int K;\
2887 _BitScanReverse((DWORD *) &K, (DWORD) X);\
2888 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2889 }\
2890 }
2891
2892 #else /* GNUC */
2893 #define compute_tree_index(S, I)\
2894 {\
2895 size_t X = S >> TREEBIN_SHIFT;\
2896 if (X == 0)\
2897 I = 0;\
2898 else if (X > 0xFFFF)\
2899 I = NTREEBINS-1;\
2900 else {\
2901 unsigned int Y = (unsigned int)X;\
2902 unsigned int N = ((Y - 0x100) >> 16) & 8;\
2903 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2904 N += K;\
2905 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2906 K = 14 - N + ((Y <<= K) >> 15);\
2907 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2908 }\
2909 }
2910 #endif /* GNUC */
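/*
  Illustrative sketch (added for exposition): with the default
  TREEBIN_SHIFT of 8, each pair of tree bins covers one power-of-two
  size range split at its midpoint, so 256..383 maps to index 0,
  384..511 to index 1, 512..767 to index 2, and so on.  This matches
  minsize_for_tree_index() defined below, and holds for whichever
  compute_tree_index variant is selected.
*/
#if 0
static void example_tree_indexing(void) {
  bindex_t i;
  compute_tree_index((size_t)256, i);  assert(i == 0);
  compute_tree_index((size_t)384, i);  assert(i == 1);
  compute_tree_index((size_t)512, i);  assert(i == 2);
}
#endif /* 0 - illustrative only */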
2911
2912 /* Bit representing maximum resolved size in a treebin at i */
2913 #define bit_for_tree_index(i) \
2914 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2915
2916 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2917 #define leftshift_for_tree_index(i) \
2918 ((i == NTREEBINS-1)? 0 : \
2919 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2920
2921 /* The size of the smallest chunk held in bin with index i */
2922 #define minsize_for_tree_index(i) \
2923 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2924 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
2925
2926
2927 /* ------------------------ Operations on bin maps ----------------------- */
2928
2929 /* bit corresponding to given index */
2930 #define idx2bit(i) ((binmap_t)(1) << (i))
2931
2932 /* Mark/Clear bits with given index */
2933 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2934 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2935 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2936
2937 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2938 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2939 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2940
2941 /* isolate the least set bit of a bitmap */
2942 #define least_bit(x) ((x) & -(x))
2943
2944 /* mask with all bits to left of least bit of x on */
2945 #define left_bits(x) ((x<<1) | -(x<<1))
2946
2947 /* mask with all bits to left of or equal to least bit of x on */
2948 #define same_or_left_bits(x) ((x) | -(x))
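/*
  Illustrative sketch (added for exposition): these rely on
  two's-complement negation.  least_bit() isolates the lowest set bit
  of a map, and left_bits() of a single-bit value yields a mask of all
  strictly higher bins, which is how malloc locates the next non-empty
  bin above a given index.
*/
#if 0
static void example_binmap_bits(void) {
  binmap_t map = (binmap_t)0x28;             /* bins 3 and 5 marked   */
  assert(least_bit(map) == (binmap_t)0x08);  /* lowest set bit: bin 3 */
  /* marked bins strictly above bin 3: only bin 5 (0x20) */
  assert((left_bits(idx2bit(3)) & map) == (binmap_t)0x20);
}
#endif /* 0 - illustrative only */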
2949
2950 /* index corresponding to given bit. Use x86 asm if possible */
2951
2952 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2953 #define compute_bit2idx(X, I)\
2954 {\
2955 unsigned int J;\
2956 J = __builtin_ctz(X); \
2957 I = (bindex_t)J;\
2958 }
2959
2960 #elif defined (__INTEL_COMPILER)
2961 #define compute_bit2idx(X, I)\
2962 {\
2963 unsigned int J;\
2964 J = _bit_scan_forward (X); \
2965 I = (bindex_t)J;\
2966 }
2967
2968 #elif defined(_MSC_VER) && _MSC_VER>=1300
2969 #define compute_bit2idx(X, I)\
2970 {\
2971 unsigned int J;\
2972 _BitScanForward((DWORD *) &J, X);\
2973 I = (bindex_t)J;\
2974 }
2975
2976 #elif USE_BUILTIN_FFS
2977 #define compute_bit2idx(X, I) I = ffs(X)-1
2978
2979 #else
2980 #define compute_bit2idx(X, I)\
2981 {\
2982 unsigned int Y = X - 1;\
2983 unsigned int K = Y >> (16-4) & 16;\
2984 unsigned int N = K; Y >>= K;\
2985 N += K = Y >> (8-3) & 8; Y >>= K;\
2986 N += K = Y >> (4-2) & 4; Y >>= K;\
2987 N += K = Y >> (2-1) & 2; Y >>= K;\
2988 N += K = Y >> (1-0) & 1; Y >>= K;\
2989 I = (bindex_t)(N + Y);\
2990 }
2991 #endif /* GNUC */
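/*
  Illustrative sketch (added for exposition): compute_bit2idx is the
  inverse of idx2bit for single-bit arguments, whichever of the
  implementations above gets selected.
*/
#if 0
static void example_bit_to_index(void) {
  bindex_t i;
  compute_bit2idx(idx2bit(0), i);   assert(i == 0);
  compute_bit2idx(idx2bit(13), i);  assert(i == 13);
  compute_bit2idx(idx2bit(31), i);  assert(i == 31);
}
#endif /* 0 - illustrative only */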
2992
2993
2994 /* ----------------------- Runtime Check Support ------------------------- */
2995
2996 /*
2997 For security, the main invariant is that malloc/free/etc never
2998 writes to a static address other than malloc_state, unless static
2999 malloc_state itself has been corrupted, which cannot occur via
3000 malloc (because of these checks). In essence this means that we
3001 believe all pointers, sizes, maps etc held in malloc_state, but
3002 check all of those linked or offset from other embedded data
3003 structures. These checks are interspersed with main code in a way
3004 that tends to minimize their run-time cost.
3005
3006 When FOOTERS is defined, in addition to range checking, we also
3007 verify footer fields of inuse chunks, which can be used to guarantee
3008 that the mstate controlling malloc/free is intact. This is a
3009 streamlined version of the approach described by William Robertson
3010 et al in "Run-time Detection of Heap-based Overflows" LISA'03
3011 http://www.usenix.org/events/lisa03/tech/robertson.html The footer
3012 of an inuse chunk holds the xor of its mstate and a random seed,
3013 that is checked upon calls to free() and realloc(). This is
3014 (probabilistically) unguessable from outside the program, but can be
3015 computed by any code successfully malloc'ing any chunk, so does not
3016 itself provide protection against code that has already broken
3017 security through some other means. Unlike Robertson et al, we
3018 always dynamically check addresses of all offset chunks (previous,
3019 next, etc). This turns out to be cheaper than relying on hashes.
3020 */
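/*
  Illustrative sketch (added for exposition): under FOOTERS, the word
  just past an inuse chunk stores ((size_t)mstate ^ mparams.magic).
  The macros mark_inuse_foot and get_mstate_for defined below write and
  recover that value; conceptually the round trip is:
*/
#if 0
static void example_footer_roundtrip(mstate m) {
  size_t foot  = (size_t)m ^ mparams.magic;        /* stored on malloc  */
  mstate owner = (mstate)(foot ^ mparams.magic);   /* recovered on free */
  assert(owner == m);  /* xoring twice with the same secret restores m  */
}
#endif /* 0 - illustrative only */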
3021
3022 #if !INSECURE
3023 /* Check if address a is at least as high as any from MORECORE or MMAP */
3024 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
3025 /* Check if address of next chunk n is higher than base chunk p */
3026 #define ok_next(p, n) ((char*)(p) < (char*)(n))
3027 /* Check if p has inuse status */
3028 #define ok_inuse(p) is_inuse(p)
3029 /* Check if p has its pinuse bit on */
3030 #define ok_pinuse(p) pinuse(p)
3031
3032 #else /* !INSECURE */
3033 #define ok_address(M, a) (1)
3034 #define ok_next(b, n) (1)
3035 #define ok_inuse(p) (1)
3036 #define ok_pinuse(p) (1)
3037 #endif /* !INSECURE */
3038
3039 #if (FOOTERS && !INSECURE)
3040 /* Check if (alleged) mstate m has expected magic field */
3041 #define ok_magic(M) ((M)->magic == mparams.magic)
3042 #else /* (FOOTERS && !INSECURE) */
3043 #define ok_magic(M) (1)
3044 #endif /* (FOOTERS && !INSECURE) */
3045
3046 /* In gcc, use __builtin_expect to minimize impact of checks */
3047 #if !INSECURE
3048 #if defined(__GNUC__) && __GNUC__ >= 3
3049 #define RTCHECK(e) __builtin_expect(e, 1)
3050 #else /* GNUC */
3051 #define RTCHECK(e) (e)
3052 #endif /* GNUC */
3053 #else /* !INSECURE */
3054 #define RTCHECK(e) (1)
3055 #endif /* !INSECURE */
3056
3057 /* macros to set up inuse chunks with or without footers */
3058
3059 #if !FOOTERS
3060
3061 #define mark_inuse_foot(M,p,s)
3062
3063 /* Macros for setting head/foot of non-mmapped chunks */
3064
3065 /* Set cinuse bit and pinuse bit of next chunk */
3066 #define set_inuse(M,p,s)\
3067 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3068 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3069
3070 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
3071 #define set_inuse_and_pinuse(M,p,s)\
3072 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3073 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3074
3075 /* Set size, cinuse and pinuse bit of this chunk */
3076 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3077 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
3078
3079 #else /* FOOTERS */
3080
3081 /* Set foot of inuse chunk to be xor of mstate and seed */
3082 #define mark_inuse_foot(M,p,s)\
3083 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
3084
3085 #define get_mstate_for(p)\
3086 ((mstate)(((mchunkptr)((char*)(p) +\
3087 (chunksize(p))))->prev_foot ^ mparams.magic))
3088
3089 #define set_inuse(M,p,s)\
3090 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3091 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
3092 mark_inuse_foot(M,p,s))
3093
3094 #define set_inuse_and_pinuse(M,p,s)\
3095 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3096 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
3097 mark_inuse_foot(M,p,s))
3098
3099 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3100 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3101 mark_inuse_foot(M, p, s))
3102
3103 #endif /* !FOOTERS */
3104
3105 /* ---------------------------- setting mparams -------------------------- */
3106
3107 #if LOCK_AT_FORK
3108 static void pre_fork(void) { ACQUIRE_LOCK(&(gm)->mutex); }
3109 static void post_fork_parent(void) { RELEASE_LOCK(&(gm)->mutex); }
3110 static void post_fork_child(void) { INITIAL_LOCK(&(gm)->mutex); }
3111 #endif /* LOCK_AT_FORK */
3112
3113 /* Initialize mparams */
3114 static int init_mparams(void) {
3115 #ifdef NEED_GLOBAL_LOCK_INIT
3116 if (malloc_global_mutex_status <= 0)
3117 init_malloc_global_mutex();
3118 #endif
3119
3120 ACQUIRE_MALLOC_GLOBAL_LOCK();
3121 if (mparams.magic == 0) {
3122 size_t magic;
3123 size_t psize;
3124 size_t gsize;
3125
3126 #ifndef WIN32
3127 psize = malloc_getpagesize;
3128 gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
3129 #else /* WIN32 */
3130 {
3131 SYSTEM_INFO system_info;
3132 GetSystemInfo(&system_info);
3133 psize = system_info.dwPageSize;
3134 gsize = ((DEFAULT_GRANULARITY != 0)?
3135 DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
3136 }
3137 #endif /* WIN32 */
3138
3139 /* Sanity-check configuration:
3140 size_t must be unsigned and as wide as pointer type.
3141 ints must be at least 4 bytes.
3142 alignment must be at least 8.
3143 Alignment, min chunk size, and page size must all be powers of 2.
3144 */
3145 if ((sizeof(size_t) != sizeof(char*)) ||
3146 (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
3147 (sizeof(int) < 4) ||
3148 (MALLOC_ALIGNMENT < (size_t)8U) ||
3149 ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
3150 ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
3151 ((gsize & (gsize-SIZE_T_ONE)) != 0) ||
3152 ((psize & (psize-SIZE_T_ONE)) != 0))
3153 ABORT;
3154 mparams.granularity = gsize;
3155 mparams.page_size = psize;
3156 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
3157 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
3158 #if MORECORE_CONTIGUOUS
3159 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
3160 #else /* MORECORE_CONTIGUOUS */
3161 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
3162 #endif /* MORECORE_CONTIGUOUS */
3163
3164 #if !ONLY_MSPACES
3165 /* Set up lock for main malloc area */
3166 gm->mflags = mparams.default_mflags;
3167 (void)INITIAL_LOCK(&gm->mutex);
3168 #endif
3169 #if LOCK_AT_FORK
3170 pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child);
3171 #endif
3172
3173 {
3174 #if USE_DEV_RANDOM
3175 int fd;
3176 unsigned char buf[sizeof(size_t)];
3177 /* Try to use /dev/urandom, else fall back on using time */
3178 if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
3179 read(fd, buf, sizeof(buf)) == sizeof(buf)) {
3180 magic = *((size_t *) buf);
3181 close(fd);
3182 }
3183 else
3184 #endif /* USE_DEV_RANDOM */
3185 #ifdef WIN32
3186 magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
3187 #elif defined(LACKS_TIME_H)
3188 magic = (size_t)&magic ^ (size_t)0x55555555U;
3189 #else
3190 magic = (size_t)(time(0) ^ (size_t)0x55555555U);
3191 #endif
3192 magic |= (size_t)8U; /* ensure nonzero */
3193 magic &= ~(size_t)7U; /* improve chances of fault for bad values */
3194 /* Until memory models are commonly available, use a volatile write */
3195 (*(volatile size_t *)(&(mparams.magic))) = magic;
3196 }
3197 }
3198
3199 RELEASE_MALLOC_GLOBAL_LOCK();
3200 return 1;
3201 }
3202
3203 /* support for mallopt */
3204 static int change_mparam(int param_number, int value) {
3205 size_t val;
3206 ensure_initialization();
3207 val = (value == -1)? MAX_SIZE_T : (size_t)value;
3208 switch(param_number) {
3209 case M_TRIM_THRESHOLD:
3210 mparams.trim_threshold = val;
3211 return 1;
3212 case M_GRANULARITY:
3213 if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
3214 mparams.granularity = val;
3215 return 1;
3216 }
3217 else
3218 return 0;
3219 case M_MMAP_THRESHOLD:
3220 mparams.mmap_threshold = val;
3221 return 1;
3222 default:
3223 return 0;
3224 }
3225 }
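/*
  Illustrative sketch (added for exposition): change_mparam backs the
  public mallopt entry point (dlmallopt, unless renamed or built as
  mspaces-only).  Per the cases above, -1 maps to the largest possible
  value, and granularity changes are accepted only for powers of two no
  smaller than the page size.
*/
#if 0
static void example_tuning(void) {
  assert(dlmallopt(M_MMAP_THRESHOLD, 1 << 20) == 1); /* mmap requests >= 1MB         */
  assert(dlmallopt(M_TRIM_THRESHOLD, -1) == 1);      /* disable automatic trimming   */
  assert(dlmallopt(M_GRANULARITY, 12345) == 0);      /* rejected: not a power of two */
}
#endif /* 0 - illustrative only */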
3226
3227 #if DEBUG
3228 /* ------------------------- Debugging Support --------------------------- */
3229
3230 /* Check properties of any chunk, whether free, inuse, mmapped etc */
3231 static void do_check_any_chunk(mstate m, mchunkptr p) {
3232 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3233 assert(ok_address(m, p));
3234 }
3235
3236 /* Check properties of top chunk */
3237 static void do_check_top_chunk(mstate m, mchunkptr p) {
3238 msegmentptr sp = segment_holding(m, (char*)p);
3239 size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
3240 assert(sp != 0);
3241 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3242 assert(ok_address(m, p));
3243 assert(sz == m->topsize);
3244 assert(sz > 0);
3245 assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
3246 assert(pinuse(p));
3247 assert(!pinuse(chunk_plus_offset(p, sz)));
3248 }
3249
3250 /* Check properties of (inuse) mmapped chunks */
3251 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
3252 size_t sz = chunksize(p);
3253 size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
3254 assert(is_mmapped(p));
3255 assert(use_mmap(m));
3256 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3257 assert(ok_address(m, p));
3258 assert(!is_small(sz));
3259 assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
3260 assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
3261 assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
3262 }
3263
3264 /* Check properties of inuse chunks */
3265 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
3266 do_check_any_chunk(m, p);
3267 assert(is_inuse(p));
3268 assert(next_pinuse(p));
3269 /* If not pinuse and not mmapped, previous chunk has OK offset */
3270 assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
3271 if (is_mmapped(p))
3272 do_check_mmapped_chunk(m, p);
3273 }
3274
3275 /* Check properties of free chunks */
3276 static void do_check_free_chunk(mstate m, mchunkptr p) {
3277 size_t sz = chunksize(p);
3278 mchunkptr next = chunk_plus_offset(p, sz);
3279 do_check_any_chunk(m, p);
3280 assert(!is_inuse(p));
3281 assert(!next_pinuse(p));
3282 assert (!is_mmapped(p));
3283 if (p != m->dv && p != m->top) {
3284 if (sz >= MIN_CHUNK_SIZE) {
3285 assert((sz & CHUNK_ALIGN_MASK) == 0);
3286 assert(is_aligned(chunk2mem(p)));
3287 assert(next->prev_foot == sz);
3288 assert(pinuse(p));
3289 assert (next == m->top || is_inuse(next));
3290 assert(p->fd->bk == p);
3291 assert(p->bk->fd == p);
3292 }
3293 else /* markers are always of size SIZE_T_SIZE */
3294 assert(sz == SIZE_T_SIZE);
3295 }
3296 }
3297
3298 /* Check properties of malloced chunks at the point they are malloced */
3299 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
3300 if (mem != 0) {
3301 mchunkptr p = mem2chunk(mem);
3302 size_t sz = p->head & ~INUSE_BITS;
3303 do_check_inuse_chunk(m, p);
3304 assert((sz & CHUNK_ALIGN_MASK) == 0);
3305 assert(sz >= MIN_CHUNK_SIZE);
3306 assert(sz >= s);
3307 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
3308 assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
3309 }
3310 }
3311
3312 /* Check a tree and its subtrees. */
3313 static void do_check_tree(mstate m, tchunkptr t) {
3314 tchunkptr head = 0;
3315 tchunkptr u = t;
3316 bindex_t tindex = t->index;
3317 size_t tsize = chunksize(t);
3318 bindex_t idx;
3319 compute_tree_index(tsize, idx);
3320 assert(tindex == idx);
3321 assert(tsize >= MIN_LARGE_SIZE);
3322 assert(tsize >= minsize_for_tree_index(idx));
3323 assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
3324
3325 do { /* traverse through chain of same-sized nodes */
3326 do_check_any_chunk(m, ((mchunkptr)u));
3327 assert(u->index == tindex);
3328 assert(chunksize(u) == tsize);
3329 assert(!is_inuse(u));
3330 assert(!next_pinuse(u));
3331 assert(u->fd->bk == u);
3332 assert(u->bk->fd == u);
3333 if (u->parent == 0) {
3334 assert(u->child[0] == 0);
3335 assert(u->child[1] == 0);
3336 }
3337 else {
3338 assert(head == 0); /* only one node on chain has parent */
3339 head = u;
3340 assert(u->parent != u);
3341 assert (u->parent->child[0] == u ||
3342 u->parent->child[1] == u ||
3343 *((tbinptr*)(u->parent)) == u);
3344 if (u->child[0] != 0) {
3345 assert(u->child[0]->parent == u);
3346 assert(u->child[0] != u);
3347 do_check_tree(m, u->child[0]);
3348 }
3349 if (u->child[1] != 0) {
3350 assert(u->child[1]->parent == u);
3351 assert(u->child[1] != u);
3352 do_check_tree(m, u->child[1]);
3353 }
3354 if (u->child[0] != 0 && u->child[1] != 0) {
3355 assert(chunksize(u->child[0]) < chunksize(u->child[1]));
3356 }
3357 }
3358 u = u->fd;
3359 } while (u != t);
3360 assert(head != 0);
3361 }
3362
3363 /* Check all the chunks in a treebin. */
3364 static void do_check_treebin(mstate m, bindex_t i) {
3365 tbinptr* tb = treebin_at(m, i);
3366 tchunkptr t = *tb;
3367 int empty = (m->treemap & (1U << i)) == 0;
3368 if (t == 0)
3369 assert(empty);
3370 if (!empty)
3371 do_check_tree(m, t);
3372 }
3373
3374 /* Check all the chunks in a smallbin. */
3375 static void do_check_smallbin(mstate m, bindex_t i) {
3376 sbinptr b = smallbin_at(m, i);
3377 mchunkptr p = b->bk;
3378 unsigned int empty = (m->smallmap & (1U << i)) == 0;
3379 if (p == b)
3380 assert(empty);
3381 if (!empty) {
3382 for (; p != b; p = p->bk) {
3383 size_t size = chunksize(p);
3384 mchunkptr q;
3385 /* each chunk claims to be free */
3386 do_check_free_chunk(m, p);
3387 /* chunk belongs in bin */
3388 assert(small_index(size) == i);
3389 assert(p->bk == b || chunksize(p->bk) == chunksize(p));
3390 /* chunk is followed by an inuse chunk */
3391 q = next_chunk(p);
3392 if (q->head != FENCEPOST_HEAD)
3393 do_check_inuse_chunk(m, q);
3394 }
3395 }
3396 }
3397
3398 /* Find x in a bin. Used in other check functions. */
3399 static int bin_find(mstate m, mchunkptr x) {
3400 size_t size = chunksize(x);
3401 if (is_small(size)) {
3402 bindex_t sidx = small_index(size);
3403 sbinptr b = smallbin_at(m, sidx);
3404 if (smallmap_is_marked(m, sidx)) {
3405 mchunkptr p = b;
3406 do {
3407 if (p == x)
3408 return 1;
3409 } while ((p = p->fd) != b);
3410 }
3411 }
3412 else {
3413 bindex_t tidx;
3414 compute_tree_index(size, tidx);
3415 if (treemap_is_marked(m, tidx)) {
3416 tchunkptr t = *treebin_at(m, tidx);
3417 size_t sizebits = size << leftshift_for_tree_index(tidx);
3418 while (t != 0 && chunksize(t) != size) {
3419 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3420 sizebits <<= 1;
3421 }
3422 if (t != 0) {
3423 tchunkptr u = t;
3424 do {
3425 if (u == (tchunkptr)x)
3426 return 1;
3427 } while ((u = u->fd) != t);
3428 }
3429 }
3430 }
3431 return 0;
3432 }
3433
3434 /* Traverse each chunk and check it; return total */
3435 static size_t traverse_and_check(mstate m) {
3436 size_t sum = 0;
3437 if (is_initialized(m)) {
3438 msegmentptr s = &m->seg;
3439 sum += m->topsize + TOP_FOOT_SIZE;
3440 while (s != 0) {
3441 mchunkptr q = align_as_chunk(s->base);
3442 mchunkptr lastq = 0;
3443 assert(pinuse(q));
3444 while (segment_holds(s, q) &&
3445 q != m->top && q->head != FENCEPOST_HEAD) {
3446 sum += chunksize(q);
3447 if (is_inuse(q)) {
3448 assert(!bin_find(m, q));
3449 do_check_inuse_chunk(m, q);
3450 }
3451 else {
3452 assert(q == m->dv || bin_find(m, q));
3453 assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */
3454 do_check_free_chunk(m, q);
3455 }
3456 lastq = q;
3457 q = next_chunk(q);
3458 }
3459 s = s->next;
3460 }
3461 }
3462 return sum;
3463 }
3464
3465
3466 /* Check all properties of malloc_state. */
3467 static void do_check_malloc_state(mstate m) {
3468 bindex_t i;
3469 size_t total;
3470 /* check bins */
3471 for (i = 0; i < NSMALLBINS; ++i)
3472 do_check_smallbin(m, i);
3473 for (i = 0; i < NTREEBINS; ++i)
3474 do_check_treebin(m, i);
3475
3476 if (m->dvsize != 0) { /* check dv chunk */
3477 do_check_any_chunk(m, m->dv);
3478 assert(m->dvsize == chunksize(m->dv));
3479 assert(m->dvsize >= MIN_CHUNK_SIZE);
3480 assert(bin_find(m, m->dv) == 0);
3481 }
3482
3483 if (m->top != 0) { /* check top chunk */
3484 do_check_top_chunk(m, m->top);
3485 /*assert(m->topsize == chunksize(m->top)); redundant */
3486 assert(m->topsize > 0);
3487 assert(bin_find(m, m->top) == 0);
3488 }
3489
3490 total = traverse_and_check(m);
3491 assert(total <= m->footprint);
3492 assert(m->footprint <= m->max_footprint);
3493 }
3494 #endif /* DEBUG */
3495
3496 /* ----------------------------- statistics ------------------------------ */
3497
3498 #if !NO_MALLINFO
3499 static struct mallinfo internal_mallinfo(mstate m) {
3500 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
3501 ensure_initialization();
3502 if (!PREACTION(m)) {
3503 check_malloc_state(m);
3504 if (is_initialized(m)) {
3505 size_t nfree = SIZE_T_ONE; /* top always free */
3506 size_t mfree = m->topsize + TOP_FOOT_SIZE;
3507 size_t sum = mfree;
3508 msegmentptr s = &m->seg;
3509 while (s != 0) {
3510 mchunkptr q = align_as_chunk(s->base);
3511 while (segment_holds(s, q) &&
3512 q != m->top && q->head != FENCEPOST_HEAD) {
3513 size_t sz = chunksize(q);
3514 sum += sz;
3515 if (!is_inuse(q)) {
3516 mfree += sz;
3517 ++nfree;
3518 }
3519 q = next_chunk(q);
3520 }
3521 s = s->next;
3522 }
3523
3524 nm.arena = sum;
3525 nm.ordblks = nfree;
3526 nm.hblkhd = m->footprint - sum;
3527 nm.usmblks = m->max_footprint;
3528 nm.uordblks = m->footprint - mfree;
3529 nm.fordblks = mfree;
3530 nm.keepcost = m->topsize;
3531 }
3532
3533 POSTACTION(m);
3534 }
3535 return nm;
3536 }
3537 #endif /* !NO_MALLINFO */
3538
3539 #if !NO_MALLOC_STATS
3540 static void internal_malloc_stats(mstate m) {
3541 ensure_initialization();
3542 if (!PREACTION(m)) {
3543 size_t maxfp = 0;
3544 size_t fp = 0;
3545 size_t used = 0;
3546 check_malloc_state(m);
3547 if (is_initialized(m)) {
3548 msegmentptr s = &m->seg;
3549 maxfp = m->max_footprint;
3550 fp = m->footprint;
3551 used = fp - (m->topsize + TOP_FOOT_SIZE);
3552
3553 while (s != 0) {
3554 mchunkptr q = align_as_chunk(s->base);
3555 while (segment_holds(s, q) &&
3556 q != m->top && q->head != FENCEPOST_HEAD) {
3557 if (!is_inuse(q))
3558 used -= chunksize(q);
3559 q = next_chunk(q);
3560 }
3561 s = s->next;
3562 }
3563 }
3564 POSTACTION(m); /* drop lock */
3565 fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
3566 fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
3567 fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
3568 }
3569 }
3570 #endif /* NO_MALLOC_STATS */
3571
3572 /* ----------------------- Operations on smallbins ----------------------- */
3573
3574 /*
3575 Various forms of linking and unlinking are defined as macros. Even
3576 the ones for trees, which are very long but have very short typical
3577 paths. This is ugly but reduces reliance on inlining support of
3578 compilers.
3579 */
3580
3581 /* Link a free chunk into a smallbin */
3582 #define insert_small_chunk(M, P, S) {\
3583 bindex_t I = small_index(S);\
3584 mchunkptr B = smallbin_at(M, I);\
3585 mchunkptr F = B;\
3586 assert(S >= MIN_CHUNK_SIZE);\
3587 if (!smallmap_is_marked(M, I))\
3588 mark_smallmap(M, I);\
3589 else if (RTCHECK(ok_address(M, B->fd)))\
3590 F = B->fd;\
3591 else {\
3592 CORRUPTION_ERROR_ACTION(M);\
3593 }\
3594 B->fd = P;\
3595 F->bk = P;\
3596 P->fd = F;\
3597 P->bk = B;\
3598 }
3599
3600 /* Unlink a chunk from a smallbin */
3601 #define unlink_small_chunk(M, P, S) {\
3602 mchunkptr F = P->fd;\
3603 mchunkptr B = P->bk;\
3604 bindex_t I = small_index(S);\
3605 assert(P != B);\
3606 assert(P != F);\
3607 assert(chunksize(P) == small_index2size(I));\
3608 if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \
3609 if (B == F) {\
3610 clear_smallmap(M, I);\
3611 }\
3612 else if (RTCHECK(B == smallbin_at(M,I) ||\
3613 (ok_address(M, B) && B->fd == P))) {\
3614 F->bk = B;\
3615 B->fd = F;\
3616 }\
3617 else {\
3618 CORRUPTION_ERROR_ACTION(M);\
3619 }\
3620 }\
3621 else {\
3622 CORRUPTION_ERROR_ACTION(M);\
3623 }\
3624 }
3625
3626 /* Unlink the first chunk from a smallbin */
3627 #define unlink_first_small_chunk(M, B, P, I) {\
3628 mchunkptr F = P->fd;\
3629 assert(P != B);\
3630 assert(P != F);\
3631 assert(chunksize(P) == small_index2size(I));\
3632 if (B == F) {\
3633 clear_smallmap(M, I);\
3634 }\
3635 else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\
3636 F->bk = B;\
3637 B->fd = F;\
3638 }\
3639 else {\
3640 CORRUPTION_ERROR_ACTION(M);\
3641 }\
3642 }
3643
3644 /* Replace dv node, binning the old one */
3645 /* Used only when dvsize known to be small */
3646 #define replace_dv(M, P, S) {\
3647 size_t DVS = M->dvsize;\
3648 assert(is_small(DVS));\
3649 if (DVS != 0) {\
3650 mchunkptr DV = M->dv;\
3651 insert_small_chunk(M, DV, DVS);\
3652 }\
3653 M->dvsize = S;\
3654 M->dv = P;\
3655 }
3656
3657 /* ------------------------- Operations on trees ------------------------- */
3658
3659 /* Insert chunk into tree */
3660 #define insert_large_chunk(M, X, S) {\
3661 tbinptr* H;\
3662 bindex_t I;\
3663 compute_tree_index(S, I);\
3664 H = treebin_at(M, I);\
3665 X->index = I;\
3666 X->child[0] = X->child[1] = 0;\
3667 if (!treemap_is_marked(M, I)) {\
3668 mark_treemap(M, I);\
3669 *H = X;\
3670 X->parent = (tchunkptr)H;\
3671 X->fd = X->bk = X;\
3672 }\
3673 else {\
3674 tchunkptr T = *H;\
3675 size_t K = S << leftshift_for_tree_index(I);\
3676 for (;;) {\
3677 if (chunksize(T) != S) {\
3678 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3679 K <<= 1;\
3680 if (*C != 0)\
3681 T = *C;\
3682 else if (RTCHECK(ok_address(M, C))) {\
3683 *C = X;\
3684 X->parent = T;\
3685 X->fd = X->bk = X;\
3686 break;\
3687 }\
3688 else {\
3689 CORRUPTION_ERROR_ACTION(M);\
3690 break;\
3691 }\
3692 }\
3693 else {\
3694 tchunkptr F = T->fd;\
3695 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3696 T->fd = F->bk = X;\
3697 X->fd = F;\
3698 X->bk = T;\
3699 X->parent = 0;\
3700 break;\
3701 }\
3702 else {\
3703 CORRUPTION_ERROR_ACTION(M);\
3704 break;\
3705 }\
3706 }\
3707 }\
3708 }\
3709 }
3710
3711 /*
3712 Unlink steps:
3713
3714 1. If x is a chained node, unlink it from its same-sized fd/bk links
3715 and choose its bk node as its replacement.
3716 2. If x was the last node of its size, but not a leaf node, it must
3717 be replaced with a leaf node (not merely one with an open left or
3718 right), to make sure that lefts and rights of descendents
3719 correspond properly to bit masks. We use the rightmost descendent
3720 of x. We could use any other leaf, but this is easy to locate and
3721 tends to counteract removal of leftmosts elsewhere, and so keeps
3722 paths shorter than minimally guaranteed. This doesn't loop much
3723 because on average a node in a tree is near the bottom.
3724 3. If x is the base of a chain (i.e., has parent links) relink
3725 x's parent and children to x's replacement (or null if none).
3726 */
3727
3728 #define unlink_large_chunk(M, X) {\
3729 tchunkptr XP = X->parent;\
3730 tchunkptr R;\
3731 if (X->bk != X) {\
3732 tchunkptr F = X->fd;\
3733 R = X->bk;\
3734 if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\
3735 F->bk = R;\
3736 R->fd = F;\
3737 }\
3738 else {\
3739 CORRUPTION_ERROR_ACTION(M);\
3740 }\
3741 }\
3742 else {\
3743 tchunkptr* RP;\
3744 if (((R = *(RP = &(X->child[1]))) != 0) ||\
3745 ((R = *(RP = &(X->child[0]))) != 0)) {\
3746 tchunkptr* CP;\
3747 while ((*(CP = &(R->child[1])) != 0) ||\
3748 (*(CP = &(R->child[0])) != 0)) {\
3749 R = *(RP = CP);\
3750 }\
3751 if (RTCHECK(ok_address(M, RP)))\
3752 *RP = 0;\
3753 else {\
3754 CORRUPTION_ERROR_ACTION(M);\
3755 }\
3756 }\
3757 }\
3758 if (XP != 0) {\
3759 tbinptr* H = treebin_at(M, X->index);\
3760 if (X == *H) {\
3761 if ((*H = R) == 0) \
3762 clear_treemap(M, X->index);\
3763 }\
3764 else if (RTCHECK(ok_address(M, XP))) {\
3765 if (XP->child[0] == X) \
3766 XP->child[0] = R;\
3767 else \
3768 XP->child[1] = R;\
3769 }\
3770 else\
3771 CORRUPTION_ERROR_ACTION(M);\
3772 if (R != 0) {\
3773 if (RTCHECK(ok_address(M, R))) {\
3774 tchunkptr C0, C1;\
3775 R->parent = XP;\
3776 if ((C0 = X->child[0]) != 0) {\
3777 if (RTCHECK(ok_address(M, C0))) {\
3778 R->child[0] = C0;\
3779 C0->parent = R;\
3780 }\
3781 else\
3782 CORRUPTION_ERROR_ACTION(M);\
3783 }\
3784 if ((C1 = X->child[1]) != 0) {\
3785 if (RTCHECK(ok_address(M, C1))) {\
3786 R->child[1] = C1;\
3787 C1->parent = R;\
3788 }\
3789 else\
3790 CORRUPTION_ERROR_ACTION(M);\
3791 }\
3792 }\
3793 else\
3794 CORRUPTION_ERROR_ACTION(M);\
3795 }\
3796 }\
3797 }
3798
3799 /* Relays to large vs small bin operations */
3800
3801 #define insert_chunk(M, P, S)\
3802 if (is_small(S)) insert_small_chunk(M, P, S)\
3803 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3804
3805 #define unlink_chunk(M, P, S)\
3806 if (is_small(S)) unlink_small_chunk(M, P, S)\
3807 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3808
3809
3810 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3811
3812 #if ONLY_MSPACES
3813 #define internal_malloc(m, b) mspace_malloc(m, b)
3814 #define internal_free(m, mem) mspace_free(m,mem);
3815 #else /* ONLY_MSPACES */
3816 #if MSPACES
3817 #define internal_malloc(m, b)\
3818 ((m == gm)? dlmalloc(b) : mspace_malloc(m, b))
3819 #define internal_free(m, mem)\
3820 if (m == gm) dlfree(mem); else mspace_free(m,mem);
3821 #else /* MSPACES */
3822 #define internal_malloc(m, b) dlmalloc(b)
3823 #define internal_free(m, mem) dlfree(mem)
3824 #endif /* MSPACES */
3825 #endif /* ONLY_MSPACES */
3826
3827 /* ----------------------- Direct-mmapping chunks ----------------------- */
3828
3829 /*
3830 Directly mmapped chunks are set up with an offset to the start of
3831 the mmapped region stored in the prev_foot field of the chunk. This
3832 allows reconstruction of the required argument to MUNMAP when freed,
3833 and also allows adjustment of the returned chunk to meet alignment
3834 requirements (especially in memalign).
3835 */
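/*
  Illustrative sketch (added for exposition): for a directly mmapped
  chunk, prev_foot holds the distance from the start of the mapping to
  the chunk header, so the original base and length of the mapping can
  be rebuilt when the chunk is released, mirroring what mmap_alloc()
  below sets up and what free()/dispose_chunk later pass to CALL_MUNMAP.
*/
#if 0
static void example_recover_mmap_args(mchunkptr p) {
  char*  region_base = (char*)p - p->prev_foot;
  size_t region_len  = chunksize(p) + p->prev_foot + MMAP_FOOT_PAD;
  (void)region_base; (void)region_len;   /* the MUNMAP arguments */
}
#endif /* 0 - illustrative only */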
3836
3837 /* Malloc using mmap */
3838 static void* mmap_alloc(mstate m, size_t nb) {
3839 size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3840 if (m->footprint_limit != 0) {
3841 size_t fp = m->footprint + mmsize;
3842 if (fp <= m->footprint || fp > m->footprint_limit)
3843 return 0;
3844 }
3845 if (mmsize > nb) { /* Check for wrap around 0 */
3846 char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
3847 if (mm != CMFAIL) {
3848 size_t offset = align_offset(chunk2mem(mm));
3849 size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3850 mchunkptr p = (mchunkptr)(mm + offset);
3851 p->prev_foot = offset;
3852 p->head = psize;
3853 mark_inuse_foot(m, p, psize);
3854 chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3855 chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3856
3857 if (m->least_addr == 0 || mm < m->least_addr)
3858 m->least_addr = mm;
3859 if ((m->footprint += mmsize) > m->max_footprint)
3860 m->max_footprint = m->footprint;
3861 assert(is_aligned(chunk2mem(p)));
3862 check_mmapped_chunk(m, p);
3863 return chunk2mem(p);
3864 }
3865 }
3866 return 0;
3867 }
3868
3869 /* Realloc using mmap */
3870 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) {
3871 size_t oldsize = chunksize(oldp);
3872 (void)flags; /* placate people compiling -Wunused */
3873 if (is_small(nb)) /* Can't shrink mmap regions below small size */
3874 return 0;
3875 /* Keep old chunk if big enough but not too big */
3876 if (oldsize >= nb + SIZE_T_SIZE &&
3877 (oldsize - nb) <= (mparams.granularity << 1))
3878 return oldp;
3879 else {
3880 size_t offset = oldp->prev_foot;
3881 size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3882 size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3883 char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3884 oldmmsize, newmmsize, flags);
3885 if (cp != CMFAIL) {
3886 mchunkptr newp = (mchunkptr)(cp + offset);
3887 size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3888 newp->head = psize;
3889 mark_inuse_foot(m, newp, psize);
3890 chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3891 chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3892
3893 if (cp < m->least_addr)
3894 m->least_addr = cp;
3895 if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3896 m->max_footprint = m->footprint;
3897 check_mmapped_chunk(m, newp);
3898 return newp;
3899 }
3900 }
3901 return 0;
3902 }
3903
3904
3905 /* -------------------------- mspace management -------------------------- */
3906
3907 /* Initialize top chunk and its size */
3908 static void init_top(mstate m, mchunkptr p, size_t psize) {
3909 /* Ensure alignment */
3910 size_t offset = align_offset(chunk2mem(p));
3911 p = (mchunkptr)((char*)p + offset);
3912 psize -= offset;
3913
3914 m->top = p;
3915 m->topsize = psize;
3916 p->head = psize | PINUSE_BIT;
3917 /* set size of fake trailing chunk holding overhead space only once */
3918 chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3919 m->trim_check = mparams.trim_threshold; /* reset on each update */
3920 }
3921
3922 /* Initialize bins for a new mstate that is otherwise zeroed out */
3923 static void init_bins(mstate m) {
3924 /* Establish circular links for smallbins */
3925 bindex_t i;
3926 for (i = 0; i < NSMALLBINS; ++i) {
3927 sbinptr bin = smallbin_at(m,i);
3928 bin->fd = bin->bk = bin;
3929 }
3930 }
3931
3932 #if PROCEED_ON_ERROR
3933
3934 /* default corruption action */
3935 static void reset_on_error(mstate m) {
3936 int i;
3937 ++malloc_corruption_error_count;
3938 /* Reinitialize fields to forget about all memory */
3939 m->smallmap = m->treemap = 0;
3940 m->dvsize = m->topsize = 0;
3941 m->seg.base = 0;
3942 m->seg.size = 0;
3943 m->seg.next = 0;
3944 m->top = m->dv = 0;
3945 for (i = 0; i < NTREEBINS; ++i)
3946 *treebin_at(m, i) = 0;
3947 init_bins(m);
3948 }
3949 #endif /* PROCEED_ON_ERROR */
3950
3951 /* Allocate a chunk at a new base and consolidate the remainder with the first chunk of the old (successor) base. */
3952 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3953 size_t nb) {
3954 mchunkptr p = align_as_chunk(newbase);
3955 mchunkptr oldfirst = align_as_chunk(oldbase);
3956 size_t psize = (char*)oldfirst - (char*)p;
3957 mchunkptr q = chunk_plus_offset(p, nb);
3958 size_t qsize = psize - nb;
3959 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3960
3961 assert((char*)oldfirst > (char*)q);
3962 assert(pinuse(oldfirst));
3963 assert(qsize >= MIN_CHUNK_SIZE);
3964
3965 /* consolidate remainder with first chunk of old base */
3966 if (oldfirst == m->top) {
3967 size_t tsize = m->topsize += qsize;
3968 m->top = q;
3969 q->head = tsize | PINUSE_BIT;
3970 check_top_chunk(m, q);
3971 }
3972 else if (oldfirst == m->dv) {
3973 size_t dsize = m->dvsize += qsize;
3974 m->dv = q;
3975 set_size_and_pinuse_of_free_chunk(q, dsize);
3976 }
3977 else {
3978 if (!is_inuse(oldfirst)) {
3979 size_t nsize = chunksize(oldfirst);
3980 unlink_chunk(m, oldfirst, nsize);
3981 oldfirst = chunk_plus_offset(oldfirst, nsize);
3982 qsize += nsize;
3983 }
3984 set_free_with_pinuse(q, qsize, oldfirst);
3985 insert_chunk(m, q, qsize);
3986 check_free_chunk(m, q);
3987 }
3988
3989 check_malloced_chunk(m, chunk2mem(p), nb);
3990 return chunk2mem(p);
3991 }
3992
3993 /* Add a segment to hold a new noncontiguous region */
3994 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3995 /* Determine locations and sizes of segment, fenceposts, old top */
3996 char* old_top = (char*)m->top;
3997 msegmentptr oldsp = segment_holding(m, old_top);
3998 char* old_end = oldsp->base + oldsp->size;
3999 size_t ssize = pad_request(sizeof(struct malloc_segment));
4000 char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
4001 size_t offset = align_offset(chunk2mem(rawsp));
4002 char* asp = rawsp + offset;
4003 char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
4004 mchunkptr sp = (mchunkptr)csp;
4005 msegmentptr ss = (msegmentptr)(chunk2mem(sp));
4006 mchunkptr tnext = chunk_plus_offset(sp, ssize);
4007 mchunkptr p = tnext;
4008 __UNUSED int nfences = 0;
4009
4010 /* reset top to new space */
4011 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4012
4013 /* Set up segment record */
4014 assert(is_aligned(ss));
4015 set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
4016 *ss = m->seg; /* Push current record */
4017 m->seg.base = tbase;
4018 m->seg.size = tsize;
4019 m->seg.sflags = mmapped;
4020 m->seg.next = ss;
4021
4022 /* Insert trailing fenceposts */
4023 for (;;) {
4024 mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
4025 p->head = FENCEPOST_HEAD;
4026 ++nfences;
4027 if ((char*)(&(nextp->head)) < old_end)
4028 p = nextp;
4029 else
4030 break;
4031 }
4032 assert(nfences >= 2);
4033
4034 /* Insert the rest of old top into a bin as an ordinary free chunk */
4035 if (csp != old_top) {
4036 mchunkptr q = (mchunkptr)old_top;
4037 size_t psize = csp - old_top;
4038 mchunkptr tn = chunk_plus_offset(q, psize);
4039 set_free_with_pinuse(q, psize, tn);
4040 insert_chunk(m, q, psize);
4041 }
4042
4043 check_top_chunk(m, m->top);
4044 }
4045
4046 /* -------------------------- System allocation -------------------------- */
4047
4048 /* Get memory from system using MORECORE or MMAP */
4049 static void* sys_alloc(mstate m, size_t nb) {
4050 char* tbase = CMFAIL;
4051 size_t tsize = 0;
4052 flag_t mmap_flag = 0;
4053 size_t asize; /* allocation size */
4054
4055 ensure_initialization();
4056
4057 /* Directly map large chunks, but only if already initialized */
4058 if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) {
4059 void* mem = mmap_alloc(m, nb);
4060 if (mem != 0)
4061 return mem;
4062 }
4063
4064 asize = granularity_align(nb + SYS_ALLOC_PADDING);
4065 if (asize <= nb)
4066 return 0; /* wraparound */
4067 if (m->footprint_limit != 0) {
4068 size_t fp = m->footprint + asize;
4069 if (fp <= m->footprint || fp > m->footprint_limit)
4070 return 0;
4071 }
4072
4073 /*
4074 Try getting memory in any of three ways (in most-preferred to
4075 least-preferred order):
4076 1. A call to MORECORE that can normally contiguously extend memory.
4077 (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE
4078 or main space is mmapped or a previous contiguous call failed)
4079 2. A call to MMAP new space (disabled if not HAVE_MMAP).
4080 Note that under the default settings, if MORECORE is unable to
4081 fulfill a request, and HAVE_MMAP is true, then mmap is
4082 used as a noncontiguous system allocator. This is a useful backup
4083 strategy for systems with holes in address spaces -- in this case
4084 sbrk cannot contiguously expand the heap, but mmap may be able to
4085 find space.
4086 3. A call to MORECORE that cannot usually contiguously extend memory.
4087 (disabled if not HAVE_MORECORE)
4088
4089 In all cases, we need to request enough bytes from system to ensure
4090 we can malloc nb bytes upon success, so pad with enough space for
4091 top_foot, plus alignment-pad to make sure we don't lose bytes if
4092 not on boundary, and round this up to a granularity unit.
4093 */
4094
4095 if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
4096 char* br = CMFAIL;
4097 size_t ssize = asize; /* sbrk call size */
4098 msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
4099 ACQUIRE_MALLOC_GLOBAL_LOCK();
4100
4101 if (ss == 0) { /* First time through or recovery */
4102 char* base = (char*)CALL_MORECORE(0);
4103 if (base != CMFAIL) {
4104 size_t fp;
4105 /* Adjust to end on a page boundary */
4106 if (!is_page_aligned(base))
4107 ssize += (page_align((size_t)base) - (size_t)base);
4108 fp = m->footprint + ssize; /* recheck limits */
4109 if (ssize > nb && ssize < HALF_MAX_SIZE_T &&
4110 (m->footprint_limit == 0 ||
4111 (fp > m->footprint && fp <= m->footprint_limit)) &&
4112 (br = (char*)(CALL_MORECORE(ssize))) == base) {
4113 tbase = base;
4114 tsize = ssize;
4115 }
4116 }
4117 }
4118 else {
4119 /* Subtract out existing available top space from MORECORE request. */
4120 ssize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
4121 /* Use mem here only if it did contiguously extend old space */
4122 if (ssize < HALF_MAX_SIZE_T &&
4123 (br = (char*)(CALL_MORECORE(ssize))) == ss->base+ss->size) {
4124 tbase = br;
4125 tsize = ssize;
4126 }
4127 }
4128
4129 if (tbase == CMFAIL) { /* Cope with partial failure */
4130 if (br != CMFAIL) { /* Try to use/extend the space we did get */
4131 if (ssize < HALF_MAX_SIZE_T &&
4132 ssize < nb + SYS_ALLOC_PADDING) {
4133 size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - ssize);
4134 if (esize < HALF_MAX_SIZE_T) {
4135 char* end = (char*)CALL_MORECORE(esize);
4136 if (end != CMFAIL)
4137 ssize += esize;
4138 else { /* Can't use; try to release */
4139 (void) CALL_MORECORE(-ssize);
4140 br = CMFAIL;
4141 }
4142 }
4143 }
4144 }
4145 if (br != CMFAIL) { /* Use the space we did get */
4146 tbase = br;
4147 tsize = ssize;
4148 }
4149 else
4150 disable_contiguous(m); /* Don't try contiguous path in the future */
4151 }
4152
4153 RELEASE_MALLOC_GLOBAL_LOCK();
4154 }
4155
4156 if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
4157 char* mp = (char*)(CALL_MMAP(asize));
4158 if (mp != CMFAIL) {
4159 tbase = mp;
4160 tsize = asize;
4161 mmap_flag = USE_MMAP_BIT;
4162 }
4163 }
4164
4165 if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
4166 if (asize < HALF_MAX_SIZE_T) {
4167 char* br = CMFAIL;
4168 char* end = CMFAIL;
4169 ACQUIRE_MALLOC_GLOBAL_LOCK();
4170 br = (char*)(CALL_MORECORE(asize));
4171 end = (char*)(CALL_MORECORE(0));
4172 RELEASE_MALLOC_GLOBAL_LOCK();
4173 if (br != CMFAIL && end != CMFAIL && br < end) {
4174 size_t ssize = end - br;
4175 if (ssize > nb + TOP_FOOT_SIZE) {
4176 tbase = br;
4177 tsize = ssize;
4178 }
4179 }
4180 }
4181 }
4182
4183 if (tbase != CMFAIL) {
4184
4185 if ((m->footprint += tsize) > m->max_footprint)
4186 m->max_footprint = m->footprint;
4187
4188 if (!is_initialized(m)) { /* first-time initialization */
4189 if (m->least_addr == 0 || tbase < m->least_addr)
4190 m->least_addr = tbase;
4191 m->seg.base = tbase;
4192 m->seg.size = tsize;
4193 m->seg.sflags = mmap_flag;
4194 m->magic = mparams.magic;
4195 m->release_checks = MAX_RELEASE_CHECK_RATE;
4196 init_bins(m);
4197 #if !ONLY_MSPACES
4198 if (is_global(m))
4199 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4200 else
4201 #endif
4202 {
4203 /* Offset top by embedded malloc_state */
4204 mchunkptr mn = next_chunk(mem2chunk(m));
4205 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
4206 }
4207 }
4208
4209 else {
4210 /* Try to merge with an existing segment */
4211 msegmentptr sp = &m->seg;
4212 /* Only consider most recent segment if traversal suppressed */
4213 while (sp != 0 && tbase != sp->base + sp->size)
4214 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4215 if (sp != 0 &&
4216 !is_extern_segment(sp) &&
4217 (sp->sflags & USE_MMAP_BIT) == mmap_flag &&
4218 segment_holds(sp, m->top)) { /* append */
4219 sp->size += tsize;
4220 init_top(m, m->top, m->topsize + tsize);
4221 }
4222 else {
4223 if (tbase < m->least_addr)
4224 m->least_addr = tbase;
4225 sp = &m->seg;
4226 while (sp != 0 && sp->base != tbase + tsize)
4227 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4228 if (sp != 0 &&
4229 !is_extern_segment(sp) &&
4230 (sp->sflags & USE_MMAP_BIT) == mmap_flag) {
4231 char* oldbase = sp->base;
4232 sp->base = tbase;
4233 sp->size += tsize;
4234 return prepend_alloc(m, tbase, oldbase, nb);
4235 }
4236 else
4237 add_segment(m, tbase, tsize, mmap_flag);
4238 }
4239 }
4240
4241 if (nb < m->topsize) { /* Allocate from new or extended top space */
4242 size_t rsize = m->topsize -= nb;
4243 mchunkptr p = m->top;
4244 mchunkptr r = m->top = chunk_plus_offset(p, nb);
4245 r->head = rsize | PINUSE_BIT;
4246 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
4247 check_top_chunk(m, m->top);
4248 check_malloced_chunk(m, chunk2mem(p), nb);
4249 return chunk2mem(p);
4250 }
4251 }
4252
4253 MALLOC_FAILURE_ACTION;
4254 return 0;
4255 }
4256
4257 /* ----------------------- system deallocation -------------------------- */
4258
4259 /* Unmap and unlink any mmapped segments that don't contain used chunks */
4260 static size_t release_unused_segments(mstate m) {
4261 size_t released = 0;
4262 int nsegs = 0;
4263 msegmentptr pred = &m->seg;
4264 msegmentptr sp = pred->next;
4265 while (sp != 0) {
4266 char* base = sp->base;
4267 size_t size = sp->size;
4268 msegmentptr next = sp->next;
4269 ++nsegs;
4270 if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
4271 mchunkptr p = align_as_chunk(base);
4272 size_t psize = chunksize(p);
4273 /* Can unmap if first chunk holds entire segment and not pinned */
4274 if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
4275 tchunkptr tp = (tchunkptr)p;
4276 assert(segment_holds(sp, (char*)sp));
4277 if (p == m->dv) {
4278 m->dv = 0;
4279 m->dvsize = 0;
4280 }
4281 else {
4282 unlink_large_chunk(m, tp);
4283 }
4284 if (CALL_MUNMAP(base, size) == 0) {
4285 released += size;
4286 m->footprint -= size;
4287 /* unlink obsoleted record */
4288 sp = pred;
4289 sp->next = next;
4290 }
4291 else { /* back out if cannot unmap */
4292 insert_large_chunk(m, tp, psize);
4293 }
4294 }
4295 }
4296 if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
4297 break;
4298 pred = sp;
4299 sp = next;
4300 }
4301 /* Reset check counter */
4302 m->release_checks = (((size_t) nsegs > (size_t) MAX_RELEASE_CHECK_RATE)?
4303 (size_t) nsegs : (size_t) MAX_RELEASE_CHECK_RATE);
4304 return released;
4305 }
4306
4307 static int sys_trim(mstate m, size_t pad) {
4308 size_t released = 0;
4309 ensure_initialization();
4310 if (pad < MAX_REQUEST && is_initialized(m)) {
4311 pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
4312
4313 if (m->topsize > pad) {
4314 /* Shrink top space in granularity-size units, keeping at least one */
4315 size_t unit = mparams.granularity;
4316 size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
4317 SIZE_T_ONE) * unit;
4318 msegmentptr sp = segment_holding(m, (char*)m->top);
4319
4320 if (!is_extern_segment(sp)) {
4321 if (is_mmapped_segment(sp)) {
4322 if (HAVE_MMAP &&
4323 sp->size >= extra &&
4324 !has_segment_link(m, sp)) { /* can't shrink if pinned */
4325 size_t newsize = sp->size - extra;
4326 (void)newsize; /* placate people compiling -Wunused-variable */
4327 /* Prefer mremap, fall back to munmap */
4328 if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
4329 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
4330 released = extra;
4331 }
4332 }
4333 }
4334 else if (HAVE_MORECORE) {
4335 if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
4336 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
4337 ACQUIRE_MALLOC_GLOBAL_LOCK();
4338 {
4339 /* Make sure end of memory is where we last set it. */
4340 char* old_br = (char*)(CALL_MORECORE(0));
4341 if (old_br == sp->base + sp->size) {
4342 char* rel_br = (char*)(CALL_MORECORE(-extra));
4343 char* new_br = (char*)(CALL_MORECORE(0));
4344 if (rel_br != CMFAIL && new_br < old_br)
4345 released = old_br - new_br;
4346 }
4347 }
4348 RELEASE_MALLOC_GLOBAL_LOCK();
4349 }
4350 }
4351
4352 if (released != 0) {
4353 sp->size -= released;
4354 m->footprint -= released;
4355 init_top(m, m->top, m->topsize - released);
4356 check_top_chunk(m, m->top);
4357 }
4358 }
4359
4360 /* Unmap any unused mmapped segments */
4361 if (HAVE_MMAP)
4362 released += release_unused_segments(m);
4363
4364 /* On failure, disable autotrim to avoid repeated failed future calls */
4365 if (released == 0 && m->topsize > m->trim_check)
4366 m->trim_check = MAX_SIZE_T;
4367 }
4368
4369 return (released != 0)? 1 : 0;
4370 }
4371
4372 /* Consolidate and bin a chunk. Differs from exported versions
4373 of free mainly in that the chunk need not be marked as inuse.
4374 */
4375 static void dispose_chunk(mstate m, mchunkptr p, size_t psize) {
4376 mchunkptr next = chunk_plus_offset(p, psize);
4377 if (!pinuse(p)) {
4378 mchunkptr prev;
4379 size_t prevsize = p->prev_foot;
4380 if (is_mmapped(p)) {
4381 psize += prevsize + MMAP_FOOT_PAD;
4382 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4383 m->footprint -= psize;
4384 return;
4385 }
4386 prev = chunk_minus_offset(p, prevsize);
4387 psize += prevsize;
4388 p = prev;
4389 if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */
4390 if (p != m->dv) {
4391 unlink_chunk(m, p, prevsize);
4392 }
4393 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4394 m->dvsize = psize;
4395 set_free_with_pinuse(p, psize, next);
4396 return;
4397 }
4398 }
4399 else {
4400 CORRUPTION_ERROR_ACTION(m);
4401 return;
4402 }
4403 }
4404 if (RTCHECK(ok_address(m, next))) {
4405 if (!cinuse(next)) { /* consolidate forward */
4406 if (next == m->top) {
4407 size_t tsize = m->topsize += psize;
4408 m->top = p;
4409 p->head = tsize | PINUSE_BIT;
4410 if (p == m->dv) {
4411 m->dv = 0;
4412 m->dvsize = 0;
4413 }
4414 return;
4415 }
4416 else if (next == m->dv) {
4417 size_t dsize = m->dvsize += psize;
4418 m->dv = p;
4419 set_size_and_pinuse_of_free_chunk(p, dsize);
4420 return;
4421 }
4422 else {
4423 size_t nsize = chunksize(next);
4424 psize += nsize;
4425 unlink_chunk(m, next, nsize);
4426 set_size_and_pinuse_of_free_chunk(p, psize);
4427 if (p == m->dv) {
4428 m->dvsize = psize;
4429 return;
4430 }
4431 }
4432 }
4433 else {
4434 set_free_with_pinuse(p, psize, next);
4435 }
4436 insert_chunk(m, p, psize);
4437 }
4438 else {
4439 CORRUPTION_ERROR_ACTION(m);
4440 }
4441 }
4442
4443 /* ---------------------------- malloc --------------------------- */
4444
4445 /* allocate a large request from the best fitting chunk in a treebin */
4446 static void* tmalloc_large(mstate m, size_t nb) {
4447 tchunkptr v = 0;
4448 size_t rsize = -nb; /* Unsigned negation */
4449 tchunkptr t;
4450 bindex_t idx;
4451 compute_tree_index(nb, idx);
4452 if ((t = *treebin_at(m, idx)) != 0) {
4453 /* Traverse tree for this bin looking for node with size == nb */
4454 size_t sizebits = nb << leftshift_for_tree_index(idx);
4455 tchunkptr rst = 0; /* The deepest untaken right subtree */
4456 for (;;) {
4457 tchunkptr rt;
4458 size_t trem = chunksize(t) - nb;
4459 if (trem < rsize) {
4460 v = t;
4461 if ((rsize = trem) == 0)
4462 break;
4463 }
4464 rt = t->child[1];
4465 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
4466 if (rt != 0 && rt != t)
4467 rst = rt;
4468 if (t == 0) {
4469 t = rst; /* set t to least subtree holding sizes > nb */
4470 break;
4471 }
4472 sizebits <<= 1;
4473 }
4474 }
4475 if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
4476 binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
4477 if (leftbits != 0) {
4478 bindex_t i;
4479 binmap_t leastbit = least_bit(leftbits);
4480 compute_bit2idx(leastbit, i);
4481 t = *treebin_at(m, i);
4482 }
4483 }
4484
4485 while (t != 0) { /* find smallest of tree or subtree */
4486 size_t trem = chunksize(t) - nb;
4487 if (trem < rsize) {
4488 rsize = trem;
4489 v = t;
4490 }
4491 t = leftmost_child(t);
4492 }
4493
4494 /* If dv is a better fit, return 0 so malloc will use it */
4495 if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
4496 if (RTCHECK(ok_address(m, v))) { /* split */
4497 mchunkptr r = chunk_plus_offset(v, nb);
4498 assert(chunksize(v) == rsize + nb);
4499 if (RTCHECK(ok_next(v, r))) {
4500 unlink_large_chunk(m, v);
4501 if (rsize < MIN_CHUNK_SIZE)
4502 set_inuse_and_pinuse(m, v, (rsize + nb));
4503 else {
4504 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4505 set_size_and_pinuse_of_free_chunk(r, rsize);
4506 insert_chunk(m, r, rsize);
4507 }
4508 return chunk2mem(v);
4509 }
4510 }
4511 CORRUPTION_ERROR_ACTION(m);
4512 }
4513 return 0;
4514 }
4515
4516 /* allocate a small request from the best fitting chunk in a treebin */
4517 static void* tmalloc_small(mstate m, size_t nb) {
4518 tchunkptr t, v;
4519 size_t rsize;
4520 bindex_t i;
4521 binmap_t leastbit = least_bit(m->treemap);
4522 compute_bit2idx(leastbit, i);
4523 v = t = *treebin_at(m, i);
4524 rsize = chunksize(t) - nb;
4525
4526 while ((t = leftmost_child(t)) != 0) {
4527 size_t trem = chunksize(t) - nb;
4528 if (trem < rsize) {
4529 rsize = trem;
4530 v = t;
4531 }
4532 }
4533
4534 if (RTCHECK(ok_address(m, v))) {
4535 mchunkptr r = chunk_plus_offset(v, nb);
4536 assert(chunksize(v) == rsize + nb);
4537 if (RTCHECK(ok_next(v, r))) {
4538 unlink_large_chunk(m, v);
4539 if (rsize < MIN_CHUNK_SIZE)
4540 set_inuse_and_pinuse(m, v, (rsize + nb));
4541 else {
4542 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4543 set_size_and_pinuse_of_free_chunk(r, rsize);
4544 replace_dv(m, r, rsize);
4545 }
4546 return chunk2mem(v);
4547 }
4548 }
4549
4550 CORRUPTION_ERROR_ACTION(m);
4551 return 0;
4552 }
4553
4554 #if !ONLY_MSPACES
4555
4556 void* dlmalloc(size_t bytes) {
4557 /*
4558 Basic algorithm:
4559 If a small request (< 256 bytes minus per-chunk overhead):
4560 1. If one exists, use a remainderless chunk in associated smallbin.
4561 (Remainderless means that there are too few excess bytes to
4562 represent as a chunk.)
4563 2. If it is big enough, use the dv chunk, which is normally the
4564 chunk adjacent to the one used for the most recent small request.
4565 3. If one exists, split the smallest available chunk in a bin,
4566 saving remainder in dv.
4567 4. If it is big enough, use the top chunk.
4568 5. If available, get memory from system and use it
4569 Otherwise, for a large request:
4570 1. Find the smallest available binned chunk that fits, and use it
4571 if it is better fitting than dv chunk, splitting if necessary.
4572 2. If better fitting than any binned chunk, use the dv chunk.
4573 3. If it is big enough, use the top chunk.
4574 4. If request size >= mmap threshold, try to directly mmap this chunk.
4575 5. If available, get memory from system and use it
4576
4577 The ugly goto's here ensure that postaction occurs along all paths.
4578 */
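  /*
    For illustration only -- a caller-level sketch of the routing described
    above, not part of this allocator. Exact cutoffs depend on
    MALLOC_ALIGNMENT, FOOTERS, and the current mmap threshold:

      void* a = dlmalloc(24);        // small request: smallbin, dv, or top
      void* b = dlmalloc(100000);    // large request: best-fit treebin,
                                     //   then dv, top, or sys_alloc
      void* c = dlmalloc(1u << 20);  // usually >= mmap threshold, so
                                     //   sys_alloc may serve it via mmap
      dlfree(a); dlfree(b); dlfree(c);
  */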
4579
4580 #if USE_LOCKS
4581 ensure_initialization(); /* initialize in sys_alloc if not using locks */
4582 #endif
4583
4584 if (!PREACTION(gm)) {
4585 void* mem;
4586 size_t nb;
4587 if (bytes <= MAX_SMALL_REQUEST) {
4588 bindex_t idx;
4589 binmap_t smallbits;
4590 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4591 idx = small_index(nb);
4592 smallbits = gm->smallmap >> idx;
4593
4594 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4595 mchunkptr b, p;
4596 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4597 b = smallbin_at(gm, idx);
4598 p = b->fd;
4599 assert(chunksize(p) == small_index2size(idx));
4600 unlink_first_small_chunk(gm, b, p, idx);
4601 set_inuse_and_pinuse(gm, p, small_index2size(idx));
4602 mem = chunk2mem(p);
4603 check_malloced_chunk(gm, mem, nb);
4604 goto postaction;
4605 }
4606
4607 else if (nb > gm->dvsize) {
4608 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4609 mchunkptr b, p, r;
4610 size_t rsize;
4611 bindex_t i;
4612 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4613 binmap_t leastbit = least_bit(leftbits);
4614 compute_bit2idx(leastbit, i);
4615 b = smallbin_at(gm, i);
4616 p = b->fd;
4617 assert(chunksize(p) == small_index2size(i));
4618 unlink_first_small_chunk(gm, b, p, i);
4619 rsize = small_index2size(i) - nb;
4620 /* Fit here cannot be remainderless if 4byte sizes */
4621 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4622 set_inuse_and_pinuse(gm, p, small_index2size(i));
4623 else {
4624 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4625 r = chunk_plus_offset(p, nb);
4626 set_size_and_pinuse_of_free_chunk(r, rsize);
4627 replace_dv(gm, r, rsize);
4628 }
4629 mem = chunk2mem(p);
4630 check_malloced_chunk(gm, mem, nb);
4631 goto postaction;
4632 }
4633
4634 else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4635 check_malloced_chunk(gm, mem, nb);
4636 goto postaction;
4637 }
4638 }
4639 }
4640 else if (bytes >= MAX_REQUEST)
4641 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4642 else {
4643 nb = pad_request(bytes);
4644 if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4645 check_malloced_chunk(gm, mem, nb);
4646 goto postaction;
4647 }
4648 }
4649
4650 if (nb <= gm->dvsize) {
4651 size_t rsize = gm->dvsize - nb;
4652 mchunkptr p = gm->dv;
4653 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4654 mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4655 gm->dvsize = rsize;
4656 set_size_and_pinuse_of_free_chunk(r, rsize);
4657 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4658 }
4659 else { /* exhaust dv */
4660 size_t dvs = gm->dvsize;
4661 gm->dvsize = 0;
4662 gm->dv = 0;
4663 set_inuse_and_pinuse(gm, p, dvs);
4664 }
4665 mem = chunk2mem(p);
4666 check_malloced_chunk(gm, mem, nb);
4667 goto postaction;
4668 }
4669
4670 else if (nb < gm->topsize) { /* Split top */
4671 size_t rsize = gm->topsize -= nb;
4672 mchunkptr p = gm->top;
4673 mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4674 r->head = rsize | PINUSE_BIT;
4675 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4676 mem = chunk2mem(p);
4677 check_top_chunk(gm, gm->top);
4678 check_malloced_chunk(gm, mem, nb);
4679 goto postaction;
4680 }
4681
4682 mem = sys_alloc(gm, nb);
4683
4684 postaction:
4685 POSTACTION(gm);
4686 #ifdef HWASAN_ENABLED
4687 return hwasan_tag_memory(mem, bytes);
4688 #else
4689 return mem;
4690 #endif
4691 }
4692
4693 return 0;
4694 }
4695
4696 /* ---------------------------- free --------------------------- */
4697
4698 void dlfree(void* mem) {
4699 /*
4700 Consolidate freed chunks with preceding or succeeding bordering
4701 free chunks, if they exist, and then place in a bin. Intermixed
4702 with special cases for top, dv, mmapped chunks, and usage errors.
4703 */
4704
4705 if (mem != 0) {
4706 mchunkptr p = mem2chunk(mem);
4707 #ifdef HWASAN_ENABLED
4708 hwasan_untag_memory(mem, chunksize(p));
4709 mem = hwasan_remove_ptr_tag(mem);
4710 #endif
4711 #if FOOTERS
4712 mstate fm = get_mstate_for(p);
4713 if (!ok_magic(fm)) {
4714 USAGE_ERROR_ACTION(fm, p);
4715 return;
4716 }
4717 #else /* FOOTERS */
4718 #define fm gm
4719 #endif /* FOOTERS */
4720 if (!PREACTION(fm)) {
4721 check_inuse_chunk(fm, p);
4722 if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
4723 size_t psize = chunksize(p);
4724 mchunkptr next = chunk_plus_offset(p, psize);
4725 if (!pinuse(p)) {
4726 size_t prevsize = p->prev_foot;
4727 if (is_mmapped(p)) {
4728 psize += prevsize + MMAP_FOOT_PAD;
4729 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4730 fm->footprint -= psize;
4731 goto postaction;
4732 }
4733 else {
4734 mchunkptr prev = chunk_minus_offset(p, prevsize);
4735 psize += prevsize;
4736 p = prev;
4737 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4738 if (p != fm->dv) {
4739 unlink_chunk(fm, p, prevsize);
4740 }
4741 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4742 fm->dvsize = psize;
4743 set_free_with_pinuse(p, psize, next);
4744 goto postaction;
4745 }
4746 }
4747 else
4748 goto erroraction;
4749 }
4750 }
4751
4752 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4753 if (!cinuse(next)) { /* consolidate forward */
4754 if (next == fm->top) {
4755 size_t tsize = fm->topsize += psize;
4756 fm->top = p;
4757 p->head = tsize | PINUSE_BIT;
4758 if (p == fm->dv) {
4759 fm->dv = 0;
4760 fm->dvsize = 0;
4761 }
4762 if (should_trim(fm, tsize))
4763 sys_trim(fm, 0);
4764 goto postaction;
4765 }
4766 else if (next == fm->dv) {
4767 size_t dsize = fm->dvsize += psize;
4768 fm->dv = p;
4769 set_size_and_pinuse_of_free_chunk(p, dsize);
4770 goto postaction;
4771 }
4772 else {
4773 size_t nsize = chunksize(next);
4774 psize += nsize;
4775 unlink_chunk(fm, next, nsize);
4776 set_size_and_pinuse_of_free_chunk(p, psize);
4777 if (p == fm->dv) {
4778 fm->dvsize = psize;
4779 goto postaction;
4780 }
4781 }
4782 }
4783 else
4784 set_free_with_pinuse(p, psize, next);
4785
4786 if (is_small(psize)) {
4787 insert_small_chunk(fm, p, psize);
4788 check_free_chunk(fm, p);
4789 }
4790 else {
4791 tchunkptr tp = (tchunkptr)p;
4792 insert_large_chunk(fm, tp, psize);
4793 check_free_chunk(fm, p);
4794 if (--fm->release_checks == 0)
4795 release_unused_segments(fm);
4796 }
4797 goto postaction;
4798 }
4799 }
4800 erroraction:
4801 USAGE_ERROR_ACTION(fm, p);
4802 postaction:
4803 POSTACTION(fm);
4804 }
4805 }
4806 #if !FOOTERS
4807 #undef fm
4808 #endif /* FOOTERS */
4809 }
4810
4811 void* dlcalloc(size_t n_elements, size_t elem_size) {
4812 void* mem;
4813 size_t req = 0;
4814 if (n_elements != 0) {
4815 req = n_elements * elem_size;
4816 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4817 (req / n_elements != elem_size))
4818 req = MAX_SIZE_T; /* force downstream failure on overflow */
4819 }
4820 mem = dlmalloc(req);
4821 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4822 memset(mem, 0, req);
4823 return mem;
4824 }
4825
4826 #endif /* !ONLY_MSPACES */
4827
4828 /* ------------ Internal support for realloc, memalign, etc -------------- */
4829
4830 /* Try to realloc; only in-place unless can_move true */
4831 static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb,
4832 int can_move) {
4833 mchunkptr newp = 0;
4834 size_t oldsize = chunksize(p);
4835 mchunkptr next = chunk_plus_offset(p, oldsize);
4836 if (RTCHECK(ok_address(m, p) && ok_inuse(p) &&
4837 ok_next(p, next) && ok_pinuse(next))) {
4838 if (is_mmapped(p)) {
4839 newp = mmap_resize(m, p, nb, can_move);
4840 }
4841 else if (oldsize >= nb) { /* already big enough */
4842 size_t rsize = oldsize - nb;
4843 if (rsize >= MIN_CHUNK_SIZE) { /* split off remainder */
4844 mchunkptr r = chunk_plus_offset(p, nb);
4845 set_inuse(m, p, nb);
4846 set_inuse(m, r, rsize);
4847 dispose_chunk(m, r, rsize);
4848 }
4849 newp = p;
4850 }
4851 else if (next == m->top) { /* extend into top */
4852 if (oldsize + m->topsize > nb) {
4853 size_t newsize = oldsize + m->topsize;
4854 size_t newtopsize = newsize - nb;
4855 mchunkptr newtop = chunk_plus_offset(p, nb);
4856 set_inuse(m, p, nb);
4857 newtop->head = newtopsize |PINUSE_BIT;
4858 m->top = newtop;
4859 m->topsize = newtopsize;
4860 newp = p;
4861 }
4862 }
4863 else if (next == m->dv) { /* extend into dv */
4864 size_t dvs = m->dvsize;
4865 if (oldsize + dvs >= nb) {
4866 size_t dsize = oldsize + dvs - nb;
4867 if (dsize >= MIN_CHUNK_SIZE) {
4868 mchunkptr r = chunk_plus_offset(p, nb);
4869 mchunkptr n = chunk_plus_offset(r, dsize);
4870 set_inuse(m, p, nb);
4871 set_size_and_pinuse_of_free_chunk(r, dsize);
4872 clear_pinuse(n);
4873 m->dvsize = dsize;
4874 m->dv = r;
4875 }
4876 else { /* exhaust dv */
4877 size_t newsize = oldsize + dvs;
4878 set_inuse(m, p, newsize);
4879 m->dvsize = 0;
4880 m->dv = 0;
4881 }
4882 newp = p;
4883 }
4884 }
4885 else if (!cinuse(next)) { /* extend into next free chunk */
4886 size_t nextsize = chunksize(next);
4887 if (oldsize + nextsize >= nb) {
4888 size_t rsize = oldsize + nextsize - nb;
4889 unlink_chunk(m, next, nextsize);
4890 if (rsize < MIN_CHUNK_SIZE) {
4891 size_t newsize = oldsize + nextsize;
4892 set_inuse(m, p, newsize);
4893 }
4894 else {
4895 mchunkptr r = chunk_plus_offset(p, nb);
4896 set_inuse(m, p, nb);
4897 set_inuse(m, r, rsize);
4898 dispose_chunk(m, r, rsize);
4899 }
4900 newp = p;
4901 }
4902 }
4903 }
4904 else {
4905 USAGE_ERROR_ACTION(m, chunk2mem(p));
4906 }
4907 return newp;
4908 }
4909
4910 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
4911 void* mem = 0;
4912 if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
4913 alignment = MIN_CHUNK_SIZE;
4914 if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
4915 size_t a = MALLOC_ALIGNMENT << 1;
4916 while (a < alignment) a <<= 1;
4917 alignment = a;
4918 }
4919 if (bytes >= MAX_REQUEST - alignment) {
4920 if (m != 0) { /* Test isn't needed but avoids compiler warning */
4921 MALLOC_FAILURE_ACTION;
4922 }
4923 }
4924 else {
4925 size_t nb = request2size(bytes);
4926 size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
4927 mem = internal_malloc(m, req);
4928 if (mem != 0) {
4929 mchunkptr p = mem2chunk(mem);
4930 #ifdef HWASAN_ENABLED
4931 hwasan_untag_memory(mem, chunksize(p));
4932 mem = hwasan_remove_ptr_tag(mem);
4933 #endif
4934 if (PREACTION(m))
4935 return 0;
4936 if ((((size_t)(mem)) & (alignment - 1)) != 0) { /* misaligned */
4937 /*
4938 Find an aligned spot inside chunk. Since we need to give
4939 back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4940 the first calculation places us at a spot with less than
4941 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4942 We've allocated enough total room so that this is always
4943 possible.
4944 */
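      /*
        Worked example (hypothetical numbers, assuming MIN_CHUNK_SIZE of 32
        bytes as on typical 64-bit configurations): for a 64-byte alignment
        request, if the first aligned spot leaves only a 16-byte leader in
        front of it, that leader is too small to give back as a chunk, so we
        advance one full alignment unit to the next aligned spot. The extra
        alignment + MIN_CHUNK_SIZE slack added to the request above
        guarantees the chunk is still large enough after this adjustment.
      */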
4945 char* br = (char*)mem2chunk((size_t)(((size_t)((char*)mem + alignment -
4946 SIZE_T_ONE)) &
4947 -alignment));
4948 char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
4949 br : br+alignment;
4950 mchunkptr newp = (mchunkptr)pos;
4951 size_t leadsize = pos - (char*)(p);
4952 size_t newsize = chunksize(p) - leadsize;
4953
4954 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4955 newp->prev_foot = p->prev_foot + leadsize;
4956 newp->head = newsize;
4957 }
4958 else { /* Otherwise, give back leader, use the rest */
4959 set_inuse(m, newp, newsize);
4960 set_inuse(m, p, leadsize);
4961 dispose_chunk(m, p, leadsize);
4962 }
4963 p = newp;
4964 }
4965
4966 /* Give back spare room at the end */
4967 if (!is_mmapped(p)) {
4968 size_t size = chunksize(p);
4969 if (size > nb + MIN_CHUNK_SIZE) {
4970 size_t remainder_size = size - nb;
4971 mchunkptr remainder = chunk_plus_offset(p, nb);
4972 set_inuse(m, p, nb);
4973 set_inuse(m, remainder, remainder_size);
4974 dispose_chunk(m, remainder, remainder_size);
4975 }
4976 }
4977
4978 mem = chunk2mem(p);
4979 assert (chunksize(p) >= nb);
4980 assert(((size_t)mem & (alignment - 1)) == 0);
4981 check_inuse_chunk(m, p);
4982 POSTACTION(m);
4983 }
4984 }
4985 #ifdef HWASAN_ENABLED
4986 mem = hwasan_tag_memory(mem, bytes);
4987 #endif
4988 return mem;
4989 }
4990
4991 /*
4992 Common support for independent_X routines, handling
4993 all of the combinations that can result.
4994 The opts arg has:
4995 bit 0 set if all elements are same size (using sizes[0])
4996 bit 1 set if elements should be zeroed
4997 */
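/*
  Sketch of how the exported wrappers defined later drive these bits (for
  illustration only; the wrappers are dlindependent_calloc and
  dlindependent_comalloc below):

    size_t sizes[3] = { 16, 40, 72 };
    void** elems = dlindependent_comalloc(3, sizes, 0);   // opts == 0
    if (elems != 0) {
      // elems[0..2] are independently usable and freeable chunks;
      // the pointer array itself was malloced here, so free it too.
      dlfree(elems[0]); dlfree(elems[1]); dlfree(elems[2]);
      dlfree(elems);
    }

    void* slots[4];
    void** same = dlindependent_calloc(4, 32, slots);     // opts == 3
    // on success same == slots and each element starts out zeroed
*/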
4998 static void** ialloc(mstate m,
4999 size_t n_elements,
5000 size_t* sizes,
5001 int opts,
5002 void* chunks[]) {
5003
5004 size_t element_size; /* chunksize of each element, if all same */
5005 size_t contents_size; /* total size of elements */
5006 size_t array_size; /* request size of pointer array */
5007 void* mem; /* malloced aggregate space */
5008 mchunkptr p; /* corresponding chunk */
5009 size_t remainder_size; /* remaining bytes while splitting */
5010 void** marray; /* either "chunks" or malloced ptr array */
5011 mchunkptr array_chunk; /* chunk for malloced ptr array */
5012 flag_t was_enabled; /* to disable mmap */
5013 size_t size;
5014 size_t i;
5015
5016 ensure_initialization();
5017 /* compute array length, if needed */
5018 if (chunks != 0) {
5019 if (n_elements == 0)
5020 return chunks; /* nothing to do */
5021 marray = chunks;
5022 array_size = 0;
5023 }
5024 else {
5025 /* if empty req, must still return chunk representing empty array */
5026 if (n_elements == 0)
5027 return (void**)internal_malloc(m, 0);
5028 marray = 0;
5029 array_size = request2size(n_elements * (sizeof(void*)));
5030 }
5031
5032 /* compute total element size */
5033 if (opts & 0x1) { /* all-same-size */
5034 element_size = request2size(*sizes);
5035 contents_size = n_elements * element_size;
5036 }
5037 else { /* add up all the sizes */
5038 element_size = 0;
5039 contents_size = 0;
5040 for (i = 0; i != n_elements; ++i)
5041 contents_size += request2size(sizes[i]);
5042 }
5043
5044 size = contents_size + array_size;
5045
5046 /*
5047 Allocate the aggregate chunk. First disable direct-mmapping so
5048 malloc won't use it, since we would not be able to later
5049 free/realloc space internal to a segregated mmap region.
5050 */
5051 was_enabled = use_mmap(m);
5052 disable_mmap(m);
5053 mem = internal_malloc(m, size - CHUNK_OVERHEAD);
5054 if (was_enabled)
5055 enable_mmap(m);
5056 if (mem == 0)
5057 return 0;
5058
5059 if (PREACTION(m)) return 0;
5060 p = mem2chunk(mem);
5061 remainder_size = chunksize(p);
5062
5063 assert(!is_mmapped(p));
5064
5065 if (opts & 0x2) { /* optionally clear the elements */
5066 memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
5067 }
5068
5069 /* If not provided, allocate the pointer array as final part of chunk */
5070 if (marray == 0) {
5071 size_t array_chunk_size;
5072 array_chunk = chunk_plus_offset(p, contents_size);
5073 array_chunk_size = remainder_size - contents_size;
5074 marray = (void**) (chunk2mem(array_chunk));
5075 set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
5076 remainder_size = contents_size;
5077 }
5078
5079 /* split out elements */
5080 for (i = 0; ; ++i) {
5081 marray[i] = chunk2mem(p);
5082 if (i != n_elements-1) {
5083 if (element_size != 0)
5084 size = element_size;
5085 else
5086 size = request2size(sizes[i]);
5087 remainder_size -= size;
5088 set_size_and_pinuse_of_inuse_chunk(m, p, size);
5089 p = chunk_plus_offset(p, size);
5090 }
5091 else { /* the final element absorbs any overallocation slop */
5092 set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
5093 break;
5094 }
5095 }
5096
5097 #if DEBUG
5098 if (marray != chunks) {
5099 /* final element must have exactly exhausted chunk */
5100 if (element_size != 0) {
5101 assert(remainder_size == element_size);
5102 }
5103 else {
5104 assert(remainder_size == request2size(sizes[i]));
5105 }
5106 check_inuse_chunk(m, mem2chunk(marray));
5107 }
5108 for (i = 0; i != n_elements; ++i)
5109 check_inuse_chunk(m, mem2chunk(marray[i]));
5110
5111 #endif /* DEBUG */
5112
5113 POSTACTION(m);
5114 return marray;
5115 }
5116
5117 /* Try to free all pointers in the given array.
5118 Note: this could be made faster, by delaying consolidation,
5119 at the price of disabling some user integrity checks. We
5120 still optimize some consolidations by combining adjacent
5121 chunks before freeing, which will occur often if allocated
5122 with ialloc or the array is sorted.
5123 */
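/*
  Usage sketch of the exported entry point (dlbulk_free below;
  mspace_bulk_free is the analogous mspace version). Freed slots are
  zeroed in the array, null entries are skipped, and the return value
  counts pointers that could not be freed here (possible when FOOTERS
  is enabled and a pointer belongs to another mspace):

    void* ptrs[3];
    ptrs[0] = dlmalloc(32);
    ptrs[1] = dlmalloc(64);
    ptrs[2] = 0;
    size_t unfreed = dlbulk_free(ptrs, 3);
    // typically unfreed == 0 and ptrs[0..1] are now 0
*/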
5124 static size_t internal_bulk_free(mstate m, void* array[], size_t nelem) {
5125 size_t unfreed = 0;
5126 if (!PREACTION(m)) {
5127 void** a;
5128 void** fence = &(array[nelem]);
5129 for (a = array; a != fence; ++a) {
5130 void* mem = *a;
5131 if (mem != 0) {
5132 mchunkptr p = mem2chunk(mem);
5133 size_t psize = chunksize(p);
5134 #if FOOTERS
5135 if (get_mstate_for(p) != m) {
5136 ++unfreed;
5137 continue;
5138 }
5139 #endif
5140 check_inuse_chunk(m, p);
5141 *a = 0;
5142 if (RTCHECK(ok_address(m, p) && ok_inuse(p))) {
5143 void ** b = a + 1; /* try to merge with next chunk */
5144 mchunkptr next = next_chunk(p);
5145 if (b != fence && *b == chunk2mem(next)) {
5146 size_t newsize = chunksize(next) + psize;
5147 set_inuse(m, p, newsize);
5148 *b = chunk2mem(p);
5149 }
5150 else
5151 dispose_chunk(m, p, psize);
5152 }
5153 else {
5154 CORRUPTION_ERROR_ACTION(m);
5155 break;
5156 }
5157 }
5158 }
5159 if (should_trim(m, m->topsize))
5160 sys_trim(m, 0);
5161 POSTACTION(m);
5162 }
5163 return unfreed;
5164 }
5165
5166 /* Traversal */
5167 #if MALLOC_INSPECT_ALL
5168 static void internal_inspect_all(mstate m,
5169 void(*handler)(void *start,
5170 void *end,
5171 size_t used_bytes,
5172 void* callback_arg),
5173 void* arg) {
5174 if (is_initialized(m)) {
5175 mchunkptr top = m->top;
5176 msegmentptr s;
5177 for (s = &m->seg; s != 0; s = s->next) {
5178 mchunkptr q = align_as_chunk(s->base);
5179 while (segment_holds(s, q) && q->head != FENCEPOST_HEAD) {
5180 mchunkptr next = next_chunk(q);
5181 size_t sz = chunksize(q);
5182 size_t used;
5183 void* start;
5184 if (is_inuse(q)) {
5185 used = sz - CHUNK_OVERHEAD; /* must not be mmapped */
5186 start = chunk2mem(q);
5187 }
5188 else {
5189 used = 0;
5190 if (is_small(sz)) { /* offset by possible bookkeeping */
5191 start = (void*)((char*)q + sizeof(struct malloc_chunk));
5192 }
5193 else {
5194 start = (void*)((char*)q + sizeof(struct malloc_tree_chunk));
5195 }
5196 }
5197 if (start < (void*)next) /* skip if all space is bookkeeping */
5198 handler(start, next, used, arg);
5199 if (q == top)
5200 break;
5201 q = next;
5202 }
5203 }
5204 }
5205 }
5206 #endif /* MALLOC_INSPECT_ALL */
5207
5208 /* ------------------ Exported realloc, memalign, etc -------------------- */
5209
5210 #if !ONLY_MSPACES
5211
5212 void* dlrealloc(void* oldmem, size_t bytes) {
5213 void* mem = 0;
5214 if (oldmem == 0) {
5215 mem = dlmalloc(bytes);
5216 }
5217 else if (bytes >= MAX_REQUEST) {
5218 MALLOC_FAILURE_ACTION;
5219 }
5220 #ifdef REALLOC_ZERO_BYTES_FREES
5221 else if (bytes == 0) {
5222 dlfree(oldmem);
5223 }
5224 #endif /* REALLOC_ZERO_BYTES_FREES */
5225 else {
5226 size_t nb = request2size(bytes);
5227 mchunkptr oldp = mem2chunk(oldmem);
5228 #ifdef HWASAN_ENABLED
5229 hwasan_untag_memory(oldmem, chunksize(oldp));
5230 oldmem = hwasan_remove_ptr_tag(oldmem);
5231 #endif
5232 #if ! FOOTERS
5233 mstate m = gm;
5234 #else /* FOOTERS */
5235 mstate m = get_mstate_for(oldp);
5236 if (!ok_magic(m)) {
5237 USAGE_ERROR_ACTION(m, oldmem);
5238 return 0;
5239 }
5240 #endif /* FOOTERS */
5241 if (!PREACTION(m)) {
5242 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
5243 POSTACTION(m);
5244 if (newp != 0) {
5245 check_inuse_chunk(m, newp);
5246 mem = chunk2mem(newp);
5247 }
5248 else {
5249 mem = internal_malloc(m, bytes);
5250 if (mem != 0) {
5251 size_t oc = chunksize(oldp) - overhead_for(oldp);
5252 memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
5253 internal_free(m, oldmem);
5254 }
5255 }
5256 }
5257 }
5258 #ifdef HWASAN_ENABLED
5259 return hwasan_tag_memory(mem, bytes);
5260 #else
5261 return mem;
5262 #endif
5263 }
5264
5265 void* dlrealloc_in_place(void* oldmem, size_t bytes) {
5266 void* mem = 0;
5267 if (oldmem != 0) {
5268 if (bytes >= MAX_REQUEST) {
5269 MALLOC_FAILURE_ACTION;
5270 }
5271 else {
5272 size_t nb = request2size(bytes);
5273 mchunkptr oldp = mem2chunk(oldmem);
5274 #if ! FOOTERS
5275 mstate m = gm;
5276 #else /* FOOTERS */
5277 mstate m = get_mstate_for(oldp);
5278 if (!ok_magic(m)) {
5279 USAGE_ERROR_ACTION(m, oldmem);
5280 return 0;
5281 }
5282 #endif /* FOOTERS */
5283 if (!PREACTION(m)) {
5284 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
5285 POSTACTION(m);
5286 if (newp == oldp) {
5287 check_inuse_chunk(m, newp);
5288 mem = oldmem;
5289 }
5290 }
5291 }
5292 }
5293 return mem;
5294 }
5295
5296 void* dlmemalign(size_t alignment, size_t bytes) {
5297 if (alignment <= MALLOC_ALIGNMENT) {
5298 return dlmalloc(bytes);
5299 }
5300 return internal_memalign(gm, alignment, bytes);
5301 }
5302
5303 int dlposix_memalign(void** pp, size_t alignment, size_t bytes) {
5304 void* mem = 0;
5305 if (alignment == MALLOC_ALIGNMENT)
5306 mem = dlmalloc(bytes);
5307 else {
5308 size_t d = alignment / sizeof(void*);
5309 size_t r = alignment % sizeof(void*);
5310 if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0)
5311 return EINVAL;
5312 else if (bytes <= MAX_REQUEST - alignment) {
5313 if (alignment < MIN_CHUNK_SIZE)
5314 alignment = MIN_CHUNK_SIZE;
5315 mem = internal_memalign(gm, alignment, bytes);
5316 }
5317 }
5318 if (mem == 0)
5319 return ENOMEM;
5320 else {
5321 *pp = mem;
5322 return 0;
5323 }
5324 }
5325
5326 void* dlvalloc(size_t bytes) {
5327 size_t pagesz;
5328 ensure_initialization();
5329 pagesz = mparams.page_size;
5330 return dlmemalign(pagesz, bytes);
5331 }
5332
5333 void* dlpvalloc(size_t bytes) {
5334 size_t pagesz;
5335 ensure_initialization();
5336 pagesz = mparams.page_size;
5337 return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
5338 }
5339
5340 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
5341 void* chunks[]) {
5342 size_t sz = elem_size; /* serves as 1-element array */
5343 return ialloc(gm, n_elements, &sz, 3, chunks);
5344 }
5345
5346 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
5347 void* chunks[]) {
5348 return ialloc(gm, n_elements, sizes, 0, chunks);
5349 }
5350
5351 size_t dlbulk_free(void* array[], size_t nelem) {
5352 return internal_bulk_free(gm, array, nelem);
5353 }
5354
5355 #if MALLOC_INSPECT_ALL
5356 void dlmalloc_inspect_all(void(*handler)(void *start,
5357 void *end,
5358 size_t used_bytes,
5359 void* callback_arg),
5360 void* arg) {
5361 ensure_initialization();
5362 if (!PREACTION(gm)) {
5363 internal_inspect_all(gm, handler, arg);
5364 POSTACTION(gm);
5365 }
5366 }
5367 #endif /* MALLOC_INSPECT_ALL */
5368
5369 int dlmalloc_trim(size_t pad) {
5370 int result = 0;
5371 ensure_initialization();
5372 if (!PREACTION(gm)) {
5373 result = sys_trim(gm, pad);
5374 POSTACTION(gm);
5375 }
5376 return result;
5377 }
5378
5379 size_t dlmalloc_footprint(void) {
5380 return gm->footprint;
5381 }
5382
5383 size_t dlmalloc_max_footprint(void) {
5384 return gm->max_footprint;
5385 }
5386
5387 size_t dlmalloc_footprint_limit(void) {
5388 size_t maf = gm->footprint_limit;
5389 return maf == 0 ? MAX_SIZE_T : maf;
5390 }
5391
5392 size_t dlmalloc_set_footprint_limit(size_t bytes) {
5393 size_t result; /* invert sense of 0 */
5394 if (bytes == 0)
5395 result = granularity_align(1); /* Use minimal size */
5396 else if (bytes == MAX_SIZE_T)
5397 result = 0; /* disable */
5398 else
5399 result = granularity_align(bytes);
5400 return gm->footprint_limit = result;
5401 }
5402
5403 #if !NO_MALLINFO
5404 struct mallinfo dlmallinfo(void) {
5405 return internal_mallinfo(gm);
5406 }
5407 #endif /* NO_MALLINFO */
5408
5409 #if !NO_MALLOC_STATS
5410 void dlmalloc_stats(void) {
5411 internal_malloc_stats(gm);
5412 }
5413 #endif /* NO_MALLOC_STATS */
5414
5415 int dlmallopt(int param_number, int value) {
5416 return change_mparam(param_number, value);
5417 }
5418
5419 size_t dlmalloc_usable_size(void* mem) {
5420 if (mem != 0) {
5421 mchunkptr p = mem2chunk(mem);
5422 if (is_inuse(p))
5423 return chunksize(p) - overhead_for(p);
5424 }
5425 return 0;
5426 }
5427
5428 #endif /* !ONLY_MSPACES */
5429
5430 /* ----------------------------- user mspaces ---------------------------- */
5431
5432 #if MSPACES
5433
5434 static mstate init_user_mstate(char* tbase, size_t tsize) {
5435 size_t msize = pad_request(sizeof(struct malloc_state));
5436 mchunkptr mn;
5437 mchunkptr msp = align_as_chunk(tbase);
5438 mstate m = (mstate)(chunk2mem(msp));
5439 memset(m, 0, msize);
5440 (void)INITIAL_LOCK(&m->mutex);
5441 msp->head = (msize|INUSE_BITS);
5442 m->seg.base = m->least_addr = tbase;
5443 m->seg.size = m->footprint = m->max_footprint = tsize;
5444 m->magic = mparams.magic;
5445 m->release_checks = MAX_RELEASE_CHECK_RATE;
5446 m->mflags = mparams.default_mflags;
5447 m->extp = 0;
5448 m->exts = 0;
5449 disable_contiguous(m);
5450 init_bins(m);
5451 mn = next_chunk(mem2chunk(m));
5452 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
5453 check_top_chunk(m, m->top);
5454 return m;
5455 }
5456
5457 mspace create_mspace(size_t capacity, int locked) {
5458 mstate m = 0;
5459 size_t msize;
5460 ensure_initialization();
5461 msize = pad_request(sizeof(struct malloc_state));
5462 if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5463 size_t rs = ((capacity == 0)? mparams.granularity :
5464 (capacity + TOP_FOOT_SIZE + msize));
5465 size_t tsize = granularity_align(rs);
5466 char* tbase = (char*)(CALL_MMAP(tsize));
5467 if (tbase != CMFAIL) {
5468 m = init_user_mstate(tbase, tsize);
5469 m->seg.sflags = USE_MMAP_BIT;
5470 set_lock(m, locked);
5471 }
5472 }
5473 return (mspace)m;
5474 }
5475
5476 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
5477 mstate m = 0;
5478 size_t msize;
5479 ensure_initialization();
5480 msize = pad_request(sizeof(struct malloc_state));
5481 if (capacity > msize + TOP_FOOT_SIZE &&
5482 capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5483 m = init_user_mstate((char*)base, capacity);
5484 m->seg.sflags = EXTERN_BIT;
5485 set_lock(m, locked);
5486 }
5487 return (mspace)m;
5488 }
5489
5490 int mspace_track_large_chunks(mspace msp, int enable) {
5491 int ret = 0;
5492 mstate ms = (mstate)msp;
5493 if (!PREACTION(ms)) {
5494 if (!use_mmap(ms)) {
5495 ret = 1;
5496 }
5497 if (!enable) {
5498 enable_mmap(ms);
5499 } else {
5500 disable_mmap(ms);
5501 }
5502 POSTACTION(ms);
5503 }
5504 return ret;
5505 }
5506
5507 size_t destroy_mspace(mspace msp) {
5508 size_t freed = 0;
5509 mstate ms = (mstate)msp;
5510 if (ok_magic(ms)) {
5511 msegmentptr sp = &ms->seg;
5512 (void)DESTROY_LOCK(&ms->mutex); /* destroy before unmapped */
5513 while (sp != 0) {
5514 char* base = sp->base;
5515 size_t size = sp->size;
5516 flag_t flag = sp->sflags;
5517 (void)base; /* placate people compiling -Wunused-variable */
5518 sp = sp->next;
5519 if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) &&
5520 CALL_MUNMAP(base, size) == 0)
5521 freed += size;
5522 }
5523 }
5524 else {
5525 USAGE_ERROR_ACTION(ms,ms);
5526 }
5527 return freed;
5528 }
5529
5530 /*
5531 mspace versions of routines are near-clones of the global
5532 versions. This is not so nice but better than the alternatives.
5533 */
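/*
  Typical lifecycle, as a sketch (capacity 0 requests the default
  granularity; the second argument enables locking):

    mspace arena = create_mspace(0, 1);
    if (arena != 0) {
      void* x = mspace_malloc(arena, 128);
      x = mspace_realloc(arena, x, 256);
      mspace_free(arena, x);
      size_t reclaimed = destroy_mspace(arena);  // returns bytes released
      (void)reclaimed;
    }
*/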
5534
5535 void* mspace_malloc(mspace msp, size_t bytes) {
5536 mstate ms = (mstate)msp;
5537 if (!ok_magic(ms)) {
5538 USAGE_ERROR_ACTION(ms,ms);
5539 return 0;
5540 }
5541 if (!PREACTION(ms)) {
5542 void* mem;
5543 size_t nb;
5544 if (bytes <= MAX_SMALL_REQUEST) {
5545 bindex_t idx;
5546 binmap_t smallbits;
5547 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
5548 idx = small_index(nb);
5549 smallbits = ms->smallmap >> idx;
5550
5551 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
5552 mchunkptr b, p;
5553 idx += ~smallbits & 1; /* Uses next bin if idx empty */
5554 b = smallbin_at(ms, idx);
5555 p = b->fd;
5556 assert(chunksize(p) == small_index2size(idx));
5557 unlink_first_small_chunk(ms, b, p, idx);
5558 set_inuse_and_pinuse(ms, p, small_index2size(idx));
5559 mem = chunk2mem(p);
5560 check_malloced_chunk(ms, mem, nb);
5561 goto postaction;
5562 }
5563
5564 else if (nb > ms->dvsize) {
5565 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
5566 mchunkptr b, p, r;
5567 size_t rsize;
5568 bindex_t i;
5569 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
5570 binmap_t leastbit = least_bit(leftbits);
5571 compute_bit2idx(leastbit, i);
5572 b = smallbin_at(ms, i);
5573 p = b->fd;
5574 assert(chunksize(p) == small_index2size(i));
5575 unlink_first_small_chunk(ms, b, p, i);
5576 rsize = small_index2size(i) - nb;
5577 /* Fit here cannot be remainderless if 4byte sizes */
5578 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
5579 set_inuse_and_pinuse(ms, p, small_index2size(i));
5580 else {
5581 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5582 r = chunk_plus_offset(p, nb);
5583 set_size_and_pinuse_of_free_chunk(r, rsize);
5584 replace_dv(ms, r, rsize);
5585 }
5586 mem = chunk2mem(p);
5587 check_malloced_chunk(ms, mem, nb);
5588 goto postaction;
5589 }
5590
5591 else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
5592 check_malloced_chunk(ms, mem, nb);
5593 goto postaction;
5594 }
5595 }
5596 }
5597 else if (bytes >= MAX_REQUEST)
5598 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
5599 else {
5600 nb = pad_request(bytes);
5601 if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
5602 check_malloced_chunk(ms, mem, nb);
5603 goto postaction;
5604 }
5605 }
5606
5607 if (nb <= ms->dvsize) {
5608 size_t rsize = ms->dvsize - nb;
5609 mchunkptr p = ms->dv;
5610 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
5611 mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
5612 ms->dvsize = rsize;
5613 set_size_and_pinuse_of_free_chunk(r, rsize);
5614 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5615 }
5616 else { /* exhaust dv */
5617 size_t dvs = ms->dvsize;
5618 ms->dvsize = 0;
5619 ms->dv = 0;
5620 set_inuse_and_pinuse(ms, p, dvs);
5621 }
5622 mem = chunk2mem(p);
5623 check_malloced_chunk(ms, mem, nb);
5624 goto postaction;
5625 }
5626
5627 else if (nb < ms->topsize) { /* Split top */
5628 size_t rsize = ms->topsize -= nb;
5629 mchunkptr p = ms->top;
5630 mchunkptr r = ms->top = chunk_plus_offset(p, nb);
5631 r->head = rsize | PINUSE_BIT;
5632 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5633 mem = chunk2mem(p);
5634 check_top_chunk(ms, ms->top);
5635 check_malloced_chunk(ms, mem, nb);
5636 goto postaction;
5637 }
5638
5639 mem = sys_alloc(ms, nb);
5640
5641 postaction:
5642 POSTACTION(ms);
5643 return mem;
5644 }
5645
5646 return 0;
5647 }
5648
5649 void mspace_free(mspace msp, void* mem) {
5650 if (mem != 0) {
5651 mchunkptr p = mem2chunk(mem);
5652 #if FOOTERS
5653 mstate fm = get_mstate_for(p);
5654 (void)msp; /* placate people compiling -Wunused */
5655 #else /* FOOTERS */
5656 mstate fm = (mstate)msp;
5657 #endif /* FOOTERS */
5658 if (!ok_magic(fm)) {
5659 USAGE_ERROR_ACTION(fm, p);
5660 return;
5661 }
5662 if (!PREACTION(fm)) {
5663 check_inuse_chunk(fm, p);
5664 if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
5665 size_t psize = chunksize(p);
5666 mchunkptr next = chunk_plus_offset(p, psize);
5667 if (!pinuse(p)) {
5668 size_t prevsize = p->prev_foot;
5669 if (is_mmapped(p)) {
5670 psize += prevsize + MMAP_FOOT_PAD;
5671 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
5672 fm->footprint -= psize;
5673 goto postaction;
5674 }
5675 else {
5676 mchunkptr prev = chunk_minus_offset(p, prevsize);
5677 psize += prevsize;
5678 p = prev;
5679 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
5680 if (p != fm->dv) {
5681 unlink_chunk(fm, p, prevsize);
5682 }
5683 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
5684 fm->dvsize = psize;
5685 set_free_with_pinuse(p, psize, next);
5686 goto postaction;
5687 }
5688 }
5689 else
5690 goto erroraction;
5691 }
5692 }
5693
5694 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
5695 if (!cinuse(next)) { /* consolidate forward */
5696 if (next == fm->top) {
5697 size_t tsize = fm->topsize += psize;
5698 fm->top = p;
5699 p->head = tsize | PINUSE_BIT;
5700 if (p == fm->dv) {
5701 fm->dv = 0;
5702 fm->dvsize = 0;
5703 }
5704 if (should_trim(fm, tsize))
5705 sys_trim(fm, 0);
5706 goto postaction;
5707 }
5708 else if (next == fm->dv) {
5709 size_t dsize = fm->dvsize += psize;
5710 fm->dv = p;
5711 set_size_and_pinuse_of_free_chunk(p, dsize);
5712 goto postaction;
5713 }
5714 else {
5715 size_t nsize = chunksize(next);
5716 psize += nsize;
5717 unlink_chunk(fm, next, nsize);
5718 set_size_and_pinuse_of_free_chunk(p, psize);
5719 if (p == fm->dv) {
5720 fm->dvsize = psize;
5721 goto postaction;
5722 }
5723 }
5724 }
5725 else
5726 set_free_with_pinuse(p, psize, next);
5727
5728 if (is_small(psize)) {
5729 insert_small_chunk(fm, p, psize);
5730 check_free_chunk(fm, p);
5731 }
5732 else {
5733 tchunkptr tp = (tchunkptr)p;
5734 insert_large_chunk(fm, tp, psize);
5735 check_free_chunk(fm, p);
5736 if (--fm->release_checks == 0)
5737 release_unused_segments(fm);
5738 }
5739 goto postaction;
5740 }
5741 }
5742 erroraction:
5743 USAGE_ERROR_ACTION(fm, p);
5744 postaction:
5745 POSTACTION(fm);
5746 }
5747 }
5748 }
5749
5750 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
5751 void* mem;
5752 size_t req = 0;
5753 mstate ms = (mstate)msp;
5754 if (!ok_magic(ms)) {
5755 USAGE_ERROR_ACTION(ms,ms);
5756 return 0;
5757 }
5758 if (n_elements != 0) {
5759 req = n_elements * elem_size;
5760 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
5761 (req / n_elements != elem_size))
5762 req = MAX_SIZE_T; /* force downstream failure on overflow */
5763 }
5764 mem = internal_malloc(ms, req);
5765 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
5766 memset(mem, 0, req);
5767 return mem;
5768 }
5769
5770 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
5771 void* mem = 0;
5772 if (oldmem == 0) {
5773 mem = mspace_malloc(msp, bytes);
5774 }
5775 else if (bytes >= MAX_REQUEST) {
5776 MALLOC_FAILURE_ACTION;
5777 }
5778 #ifdef REALLOC_ZERO_BYTES_FREES
5779 else if (bytes == 0) {
5780 mspace_free(msp, oldmem);
5781 }
5782 #endif /* REALLOC_ZERO_BYTES_FREES */
5783 else {
5784 size_t nb = request2size(bytes);
5785 mchunkptr oldp = mem2chunk(oldmem);
5786 #if ! FOOTERS
5787 mstate m = (mstate)msp;
5788 #else /* FOOTERS */
5789 mstate m = get_mstate_for(oldp);
5790 if (!ok_magic(m)) {
5791 USAGE_ERROR_ACTION(m, oldmem);
5792 return 0;
5793 }
5794 #endif /* FOOTERS */
5795 if (!PREACTION(m)) {
5796 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
5797 POSTACTION(m);
5798 if (newp != 0) {
5799 check_inuse_chunk(m, newp);
5800 mem = chunk2mem(newp);
5801 }
5802 else {
5803 mem = mspace_malloc(m, bytes);
5804 if (mem != 0) {
5805 size_t oc = chunksize(oldp) - overhead_for(oldp);
5806 memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
5807 mspace_free(m, oldmem);
5808 }
5809 }
5810 }
5811 }
5812 return mem;
5813 }
5814
5815 void* mspace_realloc_in_place(mspace msp, void* oldmem, size_t bytes) {
5816 void* mem = 0;
5817 if (oldmem != 0) {
5818 if (bytes >= MAX_REQUEST) {
5819 MALLOC_FAILURE_ACTION;
5820 }
5821 else {
5822 size_t nb = request2size(bytes);
5823 mchunkptr oldp = mem2chunk(oldmem);
5824 #if ! FOOTERS
5825 mstate m = (mstate)msp;
5826 #else /* FOOTERS */
5827 mstate m = get_mstate_for(oldp);
5828 (void)msp; /* placate people compiling -Wunused */
5829 if (!ok_magic(m)) {
5830 USAGE_ERROR_ACTION(m, oldmem);
5831 return 0;
5832 }
5833 #endif /* FOOTERS */
5834 if (!PREACTION(m)) {
5835 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
5836 POSTACTION(m);
5837 if (newp == oldp) {
5838 check_inuse_chunk(m, newp);
5839 mem = oldmem;
5840 }
5841 }
5842 }
5843 }
5844 return mem;
5845 }
5846
5847 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
5848 mstate ms = (mstate)msp;
5849 if (!ok_magic(ms)) {
5850 USAGE_ERROR_ACTION(ms,ms);
5851 return 0;
5852 }
5853 if (alignment <= MALLOC_ALIGNMENT)
5854 return mspace_malloc(msp, bytes);
5855 return internal_memalign(ms, alignment, bytes);
5856 }
5857
5858 void** mspace_independent_calloc(mspace msp, size_t n_elements,
5859 size_t elem_size, void* chunks[]) {
5860 size_t sz = elem_size; /* serves as 1-element array */
5861 mstate ms = (mstate)msp;
5862 if (!ok_magic(ms)) {
5863 USAGE_ERROR_ACTION(ms,ms);
5864 return 0;
5865 }
5866 return ialloc(ms, n_elements, &sz, 3, chunks);
5867 }
5868
5869 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
5870 size_t sizes[], void* chunks[]) {
5871 mstate ms = (mstate)msp;
5872 if (!ok_magic(ms)) {
5873 USAGE_ERROR_ACTION(ms,ms);
5874 return 0;
5875 }
5876 return ialloc(ms, n_elements, sizes, 0, chunks);
5877 }
5878
5879 size_t mspace_bulk_free(mspace msp, void* array[], size_t nelem) {
5880 return internal_bulk_free((mstate)msp, array, nelem);
5881 }
5882
5883 #if MALLOC_INSPECT_ALL
5884 void mspace_inspect_all(mspace msp,
5885 void(*handler)(void *start,
5886 void *end,
5887 size_t used_bytes,
5888 void* callback_arg),
5889 void* arg) {
5890 mstate ms = (mstate)msp;
5891 if (ok_magic(ms)) {
5892 if (!PREACTION(ms)) {
5893 internal_inspect_all(ms, handler, arg);
5894 POSTACTION(ms);
5895 }
5896 }
5897 else {
5898 USAGE_ERROR_ACTION(ms,ms);
5899 }
5900 }
5901 #endif /* MALLOC_INSPECT_ALL */
5902
5903 int mspace_trim(mspace msp, size_t pad) {
5904 int result = 0;
5905 mstate ms = (mstate)msp;
5906 if (ok_magic(ms)) {
5907 if (!PREACTION(ms)) {
5908 result = sys_trim(ms, pad);
5909 POSTACTION(ms);
5910 }
5911 }
5912 else {
5913 USAGE_ERROR_ACTION(ms,ms);
5914 }
5915 return result;
5916 }
5917
5918 #if !NO_MALLOC_STATS
5919 void mspace_malloc_stats(mspace msp) {
5920 mstate ms = (mstate)msp;
5921 if (ok_magic(ms)) {
5922 internal_malloc_stats(ms);
5923 }
5924 else {
5925 USAGE_ERROR_ACTION(ms,ms);
5926 }
5927 }
5928 #endif /* NO_MALLOC_STATS */
5929
5930 size_t mspace_footprint(mspace msp) {
5931 size_t result = 0;
5932 mstate ms = (mstate)msp;
5933 if (ok_magic(ms)) {
5934 result = ms->footprint;
5935 }
5936 else {
5937 USAGE_ERROR_ACTION(ms,ms);
5938 }
5939 return result;
5940 }
5941
5942 size_t mspace_max_footprint(mspace msp) {
5943 size_t result = 0;
5944 mstate ms = (mstate)msp;
5945 if (ok_magic(ms)) {
5946 result = ms->max_footprint;
5947 }
5948 else {
5949 USAGE_ERROR_ACTION(ms,ms);
5950 }
5951 return result;
5952 }
5953
5954 size_t mspace_footprint_limit(mspace msp) {
5955 size_t result = 0;
5956 mstate ms = (mstate)msp;
5957 if (ok_magic(ms)) {
5958 size_t maf = ms->footprint_limit;
5959 result = (maf == 0) ? MAX_SIZE_T : maf;
5960 }
5961 else {
5962 USAGE_ERROR_ACTION(ms,ms);
5963 }
5964 return result;
5965 }
5966
5967 size_t mspace_set_footprint_limit(mspace msp, size_t bytes) {
5968 size_t result = 0;
5969 mstate ms = (mstate)msp;
5970 if (ok_magic(ms)) {
5971 if (bytes == 0)
5972 result = granularity_align(1); /* Use minimal size */
5973 else if (bytes == MAX_SIZE_T)
5974 result = 0; /* disable */
5975 else
5976 result = granularity_align(bytes);
5977 ms->footprint_limit = result;
5978 }
5979 else {
5980 USAGE_ERROR_ACTION(ms,ms);
5981 }
5982 return result;
5983 }
5984
5985 #if !NO_MALLINFO
5986 struct mallinfo mspace_mallinfo(mspace msp) {
5987 mstate ms = (mstate)msp;
5988 if (!ok_magic(ms)) {
5989 USAGE_ERROR_ACTION(ms,ms);
5990 }
5991 return internal_mallinfo(ms);
5992 }
5993 #endif /* NO_MALLINFO */
5994
5995 size_t mspace_usable_size(const void* mem) {
5996 if (mem != 0) {
5997 mchunkptr p = mem2chunk(mem);
5998 if (is_inuse(p))
5999 return chunksize(p) - overhead_for(p);
6000 }
6001 return 0;
6002 }
6003
6004 int mspace_mallopt(int param_number, int value) {
6005 return change_mparam(param_number, value);
6006 }
6007
6008 #endif /* MSPACES */
6009
6010
6011 /* -------------------- Alternative MORECORE functions ------------------- */
6012
6013 /*
6014 Guidelines for creating a custom version of MORECORE:
6015
6016 * For best performance, MORECORE should allocate in multiples of pagesize.
6017 * MORECORE may allocate more memory than requested. (Or even less,
6018 but this will usually result in a malloc failure.)
6019 * MORECORE must not allocate memory when given argument zero, but
6020 instead return one past the end address of memory from previous
6021 nonzero call.
6022 * For best performance, consecutive calls to MORECORE with positive
6023 arguments should return increasing addresses, indicating that
6024 space has been contiguously extended.
6025 * Even though consecutive calls to MORECORE need not return contiguous
6026 addresses, it must be OK for malloc'ed chunks to span multiple
6027 regions in those cases where they do happen to be contiguous.
6028 * MORECORE need not handle negative arguments -- it may instead
6029 just return MFAIL when given negative arguments.
6030 Negative arguments are always multiples of pagesize. MORECORE
6031 must not misinterpret negative args as large positive unsigned
6032 args. You can suppress all such calls from even occurring by defining
6033 MORECORE_CANNOT_TRIM.
6034
6035 As an example alternative MORECORE, here is a custom allocator
6036 kindly contributed for pre-OSX macOS. It uses virtually but not
6037 necessarily physically contiguous non-paged memory (locked in,
6038 present and won't get swapped out). You can use it by uncommenting
6039 this section, adding some #includes, and setting up the appropriate
6040 defines above:
6041
6042 #define MORECORE osMoreCore
6043
6044 There is also a shutdown routine that should somehow be called for
6045 cleanup upon program exit.
6046
6047 #define MAX_POOL_ENTRIES 100
6048 #define MINIMUM_MORECORE_SIZE (64 * 1024U)
6049 static int next_os_pool;
6050 void *our_os_pools[MAX_POOL_ENTRIES];
6051
6052 void *osMoreCore(int size)
6053 {
6054 void *ptr = 0;
6055 static void *sbrk_top = 0;
6056
6057 if (size > 0)
6058 {
6059 if (size < MINIMUM_MORECORE_SIZE)
6060 size = MINIMUM_MORECORE_SIZE;
6061 if (CurrentExecutionLevel() == kTaskLevel)
6062 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
6063 if (ptr == 0)
6064 {
6065 return (void *) MFAIL;
6066 }
6067 // save ptrs so they can be freed during cleanup
6068 our_os_pools[next_os_pool] = ptr;
6069 next_os_pool++;
6070 ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
6071 sbrk_top = (char *) ptr + size;
6072 return ptr;
6073 }
6074 else if (size < 0)
6075 {
6076 // we don't currently support shrink behavior
6077 return (void *) MFAIL;
6078 }
6079 else
6080 {
6081 return sbrk_top;
6082 }
6083 }
6084
6085 // cleanup any allocated memory pools
6086 // called as last thing before shutting down driver
6087
6088 void osCleanupMem(void)
6089 {
6090 void **ptr;
6091
6092 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
6093 if (*ptr)
6094 {
6095 PoolDeallocate(*ptr);
6096 *ptr = 0;
6097 }
6098 }
6099
6100 */
6101
6102
6103 /* -----------------------------------------------------------------------
6104 History:
6105 v2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea
6106 * fix bad comparison in dlposix_memalign
6107 * don't reuse adjusted asize in sys_alloc
6108 * add LOCK_AT_FORK -- thanks to Kirill Artamonov for the suggestion
6109 * reduce compiler warnings -- thanks to all who reported/suggested these
6110
6111 v2.8.5 Sun May 22 10:26:02 2011 Doug Lea (dl at gee)
6112 * Always perform unlink checks unless INSECURE
6113 * Add posix_memalign.
6114 * Improve realloc to expand in more cases; expose realloc_in_place.
6115 Thanks to Peter Buhr for the suggestion.
6116 * Add footprint_limit, inspect_all, bulk_free. Thanks
6117 to Barry Hayes and others for the suggestions.
6118 * Internal refactorings to avoid calls while holding locks
6119 * Use non-reentrant locks by default. Thanks to Roland McGrath
6120 for the suggestion.
6121 * Small fixes to mspace_destroy, reset_on_error.
6122 * Various configuration extensions/changes. Thanks
6123 to all who contributed these.
6124
6125 V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu)
6126 * Update Creative Commons URL
6127
6128 V2.8.4 Wed May 27 09:56:23 2009 Doug Lea (dl at gee)
6129 * Use zeros instead of prev foot for is_mmapped
6130 * Add mspace_track_large_chunks; thanks to Jean Brouwers
6131 * Fix set_inuse in internal_realloc; thanks to Jean Brouwers
6132 * Fix insufficient sys_alloc padding when using 16byte alignment
6133 * Fix bad error check in mspace_footprint
6134 * Adaptations for ptmalloc; thanks to Wolfram Gloger.
6135 * Reentrant spin locks; thanks to Earl Chew and others
6136 * Win32 improvements; thanks to Niall Douglas and Earl Chew
6137 * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
6138 * Extension hook in malloc_state
6139 * Various small adjustments to reduce warnings on some compilers
6140 * Various configuration extensions/changes for more platforms. Thanks
6141 to all who contributed these.
6142
6143 V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
6144 * Add max_footprint functions
6145 * Ensure all appropriate literals are size_t
6146 * Fix conditional compilation problem for some #define settings
6147 * Avoid concatenating segments with the one provided
6148 in create_mspace_with_base
6149 * Rename some variables to avoid compiler shadowing warnings
6150 * Use explicit lock initialization.
6151 * Better handling of sbrk interference.
6152 * Simplify and fix segment insertion, trimming and mspace_destroy
6153 * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
6154 * Thanks especially to Dennis Flanagan for help on these.
6155
6156 V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
6157 * Fix memalign brace error.
6158
6159 V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
6160 * Fix improper #endif nesting in C++
6161 * Add explicit casts needed for C++
6162
6163 V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
6164 * Use trees for large bins
6165 * Support mspaces
6166 * Use segments to unify sbrk-based and mmap-based system allocation,
6167 removing need for emulation on most platforms without sbrk.
6168 * Default safety checks
6169 * Optional footer checks. Thanks to William Robertson for the idea.
6170 * Internal code refactoring
6171 * Incorporate suggestions and platform-specific changes.
6172 Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
6173 Aaron Bachmann, Emery Berger, and others.
6174 * Speed up non-fastbin processing enough to remove fastbins.
6175 * Remove useless cfree() to avoid conflicts with other apps.
6176 * Remove internal memcpy, memset. Compilers handle builtins better.
6177 * Remove some options that no one ever used and rename others.
6178
6179 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
6180 * Fix malloc_state bitmap array misdeclaration
6181
6182 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
6183 * Allow tuning of FIRST_SORTED_BIN_SIZE
6184 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
6185 * Better detection and support for non-contiguousness of MORECORE.
6186 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
6187 * Bypass most of malloc if no frees. Thanks To Emery Berger.
6188 * Fix freeing of old top non-contiguous chunk in sysmalloc.
6189 * Raised default trim and map thresholds to 256K.
6190 * Fix mmap-related #defines. Thanks to Lubos Lunak.
6191 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
6192 * Branch-free bin calculation
6193 * Default trim and mmap thresholds now 256K.
6194
6195 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
6196 * Introduce independent_comalloc and independent_calloc.
6197 Thanks to Michael Pachos for motivation and help.
6198 * Make optional .h file available
6199 * Allow > 2GB requests on 32bit systems.
6200 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
6201 Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
6202 and Anonymous.
6203 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
6204 helping test this.)
6205 * memalign: check alignment arg
6206 * realloc: don't try to shift chunks backwards, since this
6207 leads to more fragmentation in some programs and doesn't
6208 seem to help in any others.
6209 * Collect all cases in malloc requiring system memory into sysmalloc
6210 * Use mmap as backup to sbrk
6211 * Place all internal state in malloc_state
6212 * Introduce fastbins (although similar to 2.5.1)
6213 * Many minor tunings and cosmetic improvements
6214 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
6215 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
6216 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
6217 * Include errno.h to support default failure action.
6218
6219 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
6220 * return null for negative arguments
6221 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
6222 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
6223 (e.g. WIN32 platforms)
6224 * Cleanup header file inclusion for WIN32 platforms
6225 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
6226 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
6227 memory allocation routines
6228 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
6229 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
6230 usage of 'assert' in non-WIN32 code
6231 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
6232 avoid infinite loop
6233 * Always call 'fREe()' rather than 'free()'
6234
6235 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
6236 * Fixed ordering problem with boundary-stamping
6237
6238 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
6239 * Added pvalloc, as recommended by H.J. Liu
6240 * Added 64bit pointer support mainly from Wolfram Gloger
6241 * Added anonymously donated WIN32 sbrk emulation
6242 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
6243 * malloc_extend_top: fix mask error that caused wastage after
6244 foreign sbrks
6245 * Add linux mremap support code from HJ Liu
6246
6247 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
6248 * Integrated most documentation with the code.
6249 * Add support for mmap, with help from
6250 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
6251 * Use last_remainder in more cases.
6252 * Pack bins using idea from colin@nyx10.cs.du.edu
6253 * Use ordered bins instead of best-fit threshold
6254 * Eliminate block-local decls to simplify tracing and debugging.
6255 * Support another case of realloc via move into top
6256 * Fix error occurring when initial sbrk_base not word-aligned.
6257 * Rely on page size for units instead of SBRK_UNIT to
6258 avoid surprises about sbrk alignment conventions.
6259 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
6260 (raymond@es.ele.tue.nl) for the suggestion.
6261 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
6262 * More precautions for cases where other routines call sbrk,
6263 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
6264 * Added macros etc., allowing use in linux libc from
6265 H.J. Lu (hjl@gnu.ai.mit.edu)
6266 * Inverted this history list
6267
6268 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
6269 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
6270 * Removed all preallocation code since under current scheme
6271 the work required to undo bad preallocations exceeds
6272 the work saved in good cases for most test programs.
6273 * No longer use return list or unconsolidated bins since
6274 no scheme using them consistently outperforms those that don't
6275 given above changes.
6276 * Use best fit for very large chunks to prevent some worst-cases.
6277 * Added some support for debugging
6278
6279 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
6280 * Removed footers when chunks are in use. Thanks to
6281 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
6282
6283 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
6284 * Added malloc_trim, with help from Wolfram Gloger
6285 (wmglo@Dent.MED.Uni-Muenchen.DE).
6286
6287 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
6288
6289 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
6290 * realloc: try to expand in both directions
6291 * malloc: swap order of clean-bin strategy;
6292 * realloc: only conditionally expand backwards
6293 * Try not to scavenge used bins
6294 * Use bin counts as a guide to preallocation
6295 * Occasionally bin return list chunks in first scan
6296 * Add a few optimizations from colin@nyx10.cs.du.edu
6297
6298 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
6299 * faster bin computation & slightly different binning
6300 * merged all consolidations to one part of malloc proper
6301 (eliminating old malloc_find_space & malloc_clean_bin)
6302 * Scan 2 returns chunks (not just 1)
6303 * Propagate failure in realloc if malloc returns 0
6304 * Add stuff to allow compilation on non-ANSI compilers
6305 from kpv@research.att.com
6306
6307 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
6308 * removed potential for odd address access in prev_chunk
6309 * removed dependency on getpagesize.h
6310 * misc cosmetics and a bit more internal documentation
6311 * anticosmetics: mangled names in macros to evade debugger strangeness
6312 * tested on sparc, hp-700, dec-mips, rs6000
6313 with gcc & native cc (hp, dec only) allowing
6314 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
6315
6316 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
6317 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
6318 structure of old version, but most details differ.)
6319
6320 */
6321