/*
This is a version (aka dlmalloc) of malloc/free/realloc written by
Doug Lea and released to the public domain, as explained at
http://creativecommons.org/licenses/publicdomain. Send questions,
comments, complaints, performance data, etc to dl@cs.oswego.edu

* Version 2.8.4 Wed May 27 09:56:23 2009 Doug Lea (dl at gee)

Note: There may be an updated version of this malloc obtainable at
ftp://gee.cs.oswego.edu/pub/misc/malloc.c
Check before installing!

* Quickstart

This library is all in one file to simplify the most common usage:
ftp it, compile it (-O3), and link it into another program. All of
the compile-time options default to reasonable values for use on
most platforms. You might later want to step through various
compile-time and dynamic tuning options.

For convenience, an include file for code using this malloc is at:
ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.4.h
You don't really need this .h file unless you call functions not
defined in your system include files. The .h file contains only the
excerpts from this file needed for using this malloc on ANSI C/C++
systems, so long as you haven't changed compile-time options about
naming and tuning parameters. If you do, then you can create your
own malloc.h that does include all settings by cutting at the point
indicated below. Note that you may already by default be using a C
library containing a malloc that is based on some version of this
malloc (for example in linux). You might still want to use the one
in this file to customize settings or to avoid overheads associated
with library versions.

* Vital statistics:

Supported pointer/size_t representation: 4 or 8 bytes
size_t MUST be an unsigned type of the same width as
pointers. (If you are using an ancient system that declares
size_t as a signed type, or need it to be a different width
than pointers, you can use a previous release of this malloc
(e.g. 2.7.2) supporting these.)

Alignment: 8 bytes (default)
This suffices for nearly all current machines and C compilers.
However, you can define MALLOC_ALIGNMENT to be wider than this
if necessary (up to 128 bytes), at the expense of using more space.

Minimum overhead per allocated chunk: 4 or 8 bytes (if 4-byte sizes)
                                      8 or 16 bytes (if 8-byte sizes)
Each malloced chunk has a hidden word of overhead holding size
and status information, and additional cross-check word
if FOOTERS is defined.

Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
                        8-byte ptrs: 32 bytes (including overhead)

Even a request for zero bytes (i.e., malloc(0)) returns a
pointer to something of the minimum allocatable size.
The maximum overhead wastage (i.e., number of extra bytes
allocated beyond what was requested in malloc) is less than or equal
to the minimum size, except for requests >= mmap_threshold that
are serviced via mmap(), where the worst case wastage is about
32 bytes plus the remainder from a system page (the minimal
mmap unit); typically 4096 or 8192 bytes.
Security: static-safe; optionally more or less
The "security" of malloc refers to the ability of malicious
code to accentuate the effects of errors (for example, freeing
space that is not currently malloc'ed or overwriting past the
ends of chunks) in code that calls malloc. This malloc
guarantees not to modify any memory locations below the base of
heap, i.e., static variables, even in the presence of usage
errors. The routines additionally detect most improper frees
and reallocs. All this holds as long as the static bookkeeping
for malloc itself is not corrupted by some other means. This
is only one aspect of security -- these checks do not, and
cannot, detect all possible programming errors.

If FOOTERS is defined nonzero, then each allocated chunk
carries an additional check word to verify that it was malloced
from its space. These check words are the same within each
execution of a program using malloc, but differ across
executions, so externally crafted fake chunks cannot be
freed. This improves security by rejecting frees/reallocs that
could corrupt heap memory, in addition to the checks preventing
writes to statics that are always on. This may further improve
security at the expense of time and space overhead. (Note that
FOOTERS may also be worth using with MSPACES.)

By default detected errors cause the program to abort (calling
"abort()"). You can override this to instead proceed past
errors by defining PROCEED_ON_ERROR. In this case, a bad free
has no effect, and a malloc that encounters a bad address
caused by user overwrites will ignore the bad address by
dropping pointers and indices to all known memory. This may
be appropriate for programs that should continue if at all
possible in the face of programming errors, although they may
run out of memory because dropped memory is never reclaimed.

If you don't like either of these options, you can define
CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
else. And if you are sure that your program using malloc has
no errors or vulnerabilities, you can define INSECURE to 1,
which might (or might not) provide a small performance improvement.

Thread-safety: NOT thread-safe unless USE_LOCKS defined
When USE_LOCKS is defined, each public call to malloc, free,
etc is surrounded with either a pthread mutex or a win32
spinlock (depending on WIN32). This is not especially fast, and
can be a major bottleneck. It is designed only to provide
minimal protection in concurrent environments, and to provide a
basis for extensions. If you are using malloc in a concurrent
program, consider instead using nedmalloc
(http://www.nedprod.com/programs/portable/nedmalloc/) or
ptmalloc (See http://www.malloc.de), which are derived
from versions of this malloc.

System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
This malloc can use unix sbrk or any emulation (invoked using
the CALL_MORECORE macro) and/or mmap/munmap or any emulation
(invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
memory. On most unix systems, it tends to work best if both
MORECORE and MMAP are enabled. On Win32, it uses emulations
based on VirtualAlloc. It also uses common C library functions
like memset.

Compliance: I believe it is compliant with the Single Unix Specification
(See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
others as well.

* Overview of algorithms

This is not the fastest, most space-conserving, most portable, or
most tunable malloc ever written. However it is among the fastest
while also being among the most space-conserving, portable and
tunable. Consistent balance across these factors results in a good
general-purpose allocator for malloc-intensive programs.

In most ways, this malloc is a best-fit allocator. Generally, it
chooses the best-fitting existing chunk for a request, with ties
broken in approximately least-recently-used order. (This strategy
normally maintains low fragmentation.) However, for requests less
than 256 bytes, it deviates from best-fit when there is not an
exactly fitting available chunk by preferring to use space adjacent
to that used for the previous small request, as well as by breaking
ties in approximately most-recently-used order. (These enhance
locality of series of small allocations.) And for very large requests
(>= 256Kb by default), it relies on system memory mapping
facilities, if supported. (This helps avoid carrying around and
possibly fragmenting memory used only for large chunks.)

All operations (except malloc_stats and mallinfo) have execution
times that are bounded by a constant factor of the number of bits in
a size_t, not counting any clearing in calloc or copying in realloc,
or actions surrounding MORECORE and MMAP that have times
proportional to the number of non-contiguous regions returned by
system allocation routines, which is often just 1. In real-time
applications, you can optionally suppress segment traversals using
NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
system allocators return non-contiguous spaces, at the typical
expense of carrying around more memory and increased fragmentation.

The implementation is not very modular and seriously overuses
macros. Perhaps someday all C compilers will do as good a job
inlining modular code as can now be done by brute-force expansion,
but for now, enough of them seem not to.

Some compilers issue a lot of warnings about code that is
dead/unreachable only on some platforms, and also about intentional
uses of negation on unsigned types. All known cases of each can be
ignored.

For a longer but out of date high-level description, see
http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
If MSPACES is defined, then in addition to malloc, free, etc.,
this file also defines mspace_malloc, mspace_free, etc. These
are versions of malloc routines that take an "mspace" argument
obtained using create_mspace, to control all internal bookkeeping.
If ONLY_MSPACES is defined, only these versions are compiled.
So if you would like to use this allocator for only some allocations,
and your system malloc for others, you can compile with
ONLY_MSPACES and then do something like...
  static mspace mymspace = create_mspace(0,0); // for example
  #define mymalloc(bytes) mspace_malloc(mymspace, bytes)

(Note: If you only need one instance of an mspace, you can instead
use "USE_DL_PREFIX" to relabel the global malloc.)

You can similarly create thread-local allocators by storing
mspaces as thread-locals. For example:
  static __thread mspace tlms = 0;
  void* tlmalloc(size_t bytes) {
    if (tlms == 0) tlms = create_mspace(0, 0);
    return mspace_malloc(tlms, bytes);
  }
  void tlfree(void* mem) { mspace_free(tlms, mem); }

Unless FOOTERS is defined, each mspace is completely independent.
You cannot allocate from one and free to another (although
conformance is only weakly checked, so usage errors are not always
caught). If FOOTERS is defined, then each chunk carries around a tag
indicating its originating mspace, and frees are directed to their
originating spaces.

------------------------- Compile-time options ---------------------------

Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast. You can also
use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
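
For example (a sketch; MY_BIG_CONSTANT is an illustrative name, not
part of this malloc), a 16MB constant can be written with explicit
size_t arithmetic so it is never computed at int precision:

  #define MY_BIG_CONSTANT ((size_t)16U * (size_t)1024U * (size_t)1024U)

Writing 16 * 1024 * 1024 without the casts is computed as int, which
is harmless here but overflows for constants near or above 2^31 on
systems with 32-bit int.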

WIN32 default: defined if _WIN32 defined
Defining WIN32 sets up defaults for MS environment and compilers.
Otherwise defaults are for unix. Beware that there seem to be some
cases where this malloc might not be a pure drop-in replacement for
Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
SetDIBits()) may be due to bugs in some video driver implementations
when pixel buffers are malloc()ed, and the region spans more than
one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
default granularity, pixel buffers may straddle virtual allocation
regions more often than when using the Microsoft allocator. You can
avoid this by using VirtualAlloc() and VirtualFree() for all pixel
buffers rather than using malloc(). If this is not possible,
recompile this malloc with a larger DEFAULT_GRANULARITY.

MALLOC_ALIGNMENT default: (size_t)8
Controls the minimum alignment for malloc'ed chunks. It must be a
power of two and at least 8, even on machines for which smaller
alignments would suffice. It may be defined as larger than this
though. Note however that code and data structures are optimized for
the case of 8-byte alignment.

MSPACES default: 0 (false)
If true, compile in support for independent allocation spaces.
This is only supported if HAVE_MMAP is true.

ONLY_MSPACES default: 0 (false)
If true, only compile in mspace versions, not regular versions.

USE_LOCKS default: 0 (false)
Causes each call to each public routine to be surrounded with
pthread or WIN32 mutex lock/unlock. (If set true, this can be
overridden on a per-mspace basis for mspace versions.) If set to a
non-zero value other than 1, locks are used, but their
implementation is left out, so lock functions must be supplied manually,
as described below.

USE_SPIN_LOCKS default: 1 iff USE_LOCKS and on x86 using gcc or MSC
If true, uses custom spin locks for locking. This is currently
supported only for x86 platforms using gcc or recent MS compilers.
Otherwise, posix locks or win32 critical sections are used.

FOOTERS default: 0
If true, provide extra checking and dispatching by placing
information in the footers of allocated chunks. This adds
space and time overhead.

INSECURE default: 0
If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX default: NOT defined
Causes compiler to prefix all public routines with the string 'dl'.
This can be useful when you only want to use this malloc in one part
of a program, using your regular system malloc elsewhere.

ABORT default: defined as abort()
Defines how to abort on failed checks. On most systems, a failed
check cannot die with an "assert" or even print an informative
message, because the underlying print routines in turn call malloc,
which will fail again. Generally, the best policy is to simply call
abort(). It's not very useful to do more than this because many
errors due to overwriting will show up as address faults (null, odd
addresses etc) rather than malloc-triggered checks, so will also
abort. Also, most compilers know that abort() does not return, so
can better optimize code conditionally calling it.

PROCEED_ON_ERROR default: defined as 0 (false)
Controls whether detected bad addresses are bypassed rather than
causing an abort. If set, detected bad arguments to free and
realloc are ignored. And all bookkeeping information is zeroed out
upon a detected overwrite of freed heap space, thus losing the
ability to ever return it from malloc again, but enabling the
application to proceed. If PROCEED_ON_ERROR is defined, the
static variable malloc_corruption_error_count is compiled in
and can be examined to see if errors have occurred. This option
generates slower code than the default abort policy.

DEBUG default: NOT defined
The DEBUG setting is mainly intended for people trying to modify
this code or diagnose problems when porting to new platforms.
However, it may also be able to better isolate user errors than just
using runtime checks. The assertions in the check routines spell
out in more detail the assumptions and invariants underlying the
algorithms. The checking is fairly extensive, and will slow down
execution noticeably. Calling malloc_stats or mallinfo with DEBUG
set will attempt to check every non-mmapped allocated and free chunk
in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
Debugging assertion failures can be nearly impossible if your
version of the assert macro causes malloc to be called, which will
lead to a cascade of further failures, blowing the runtime stack.
ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
which will usually make debugging easier.

MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
The action to take before "return 0" when malloc fails because no
memory is available.

HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
True if this system supports sbrk or an emulation of it.

MORECORE default: sbrk
The name of the sbrk-style system routine to call to obtain more
memory. See below for guidance on writing custom MORECORE
functions. The type of the argument to sbrk/MORECORE varies across
systems. It cannot be size_t, because it supports negative
arguments, so it is normally the signed type of the same width as
size_t (sometimes declared as "intptr_t"). It doesn't much matter
though. Internally, we only call it with arguments less than half
the max value of a size_t, which should work across all reasonable
possibilities, although sometimes generating compiler warnings.

MORECORE_CONTIGUOUS default: 1 (true) if HAVE_MORECORE
If true, take advantage of fact that consecutive calls to MORECORE
with positive arguments always return contiguous increasing
addresses. This is true of unix sbrk. It does not hurt too much to
set it true anyway, since malloc copes with non-contiguities.
However, setting it false when the system is known to be
non-contiguous saves the time and possibly wasted space it would
otherwise take to discover this.

MORECORE_CANNOT_TRIM default: NOT defined
True if MORECORE cannot release space back to the system when given
negative arguments. This is generally necessary only if you are
using a hand-crafted MORECORE function that cannot handle negative
arguments.

NO_SEGMENT_TRAVERSAL default: 0
If non-zero, suppresses traversals of memory segments
returned by either MORECORE or CALL_MMAP. This disables merging
of contiguous segments and selectively releasing them to the OS
when unused, but it bounds execution times.

HAVE_MMAP default: 1 (true)
True if this system supports mmap or an emulation of it. If so, and
HAVE_MORECORE is not true, MMAP is used for all system
allocation. If set and HAVE_MORECORE is true as well, MMAP is
primarily used to directly allocate very large blocks. It is also
used as a backup strategy in cases where MORECORE fails to provide
space from system. Note: A single call to MUNMAP is assumed to be
able to unmap memory that may have been allocated using multiple calls
to MMAP, so long as they are adjacent.

HAVE_MREMAP default: 1 on linux, else 0
If true, realloc() uses mremap() to re-allocate large blocks and
extend or shrink allocation spaces.

MMAP_CLEARS default: 1 except on WINCE.
True if mmap clears memory so calloc doesn't need to. This is true
for standard unix mmap using /dev/zero and on WIN32 except for WINCE.

USE_BUILTIN_FFS default: 0 (i.e., not used)
Causes malloc to use the builtin ffs() function to compute indices.
Some compilers may recognize and intrinsify ffs to be faster than the
supplied C version. Also, the case of x86 using gcc is special-cased
to an asm instruction, so is already as fast as it can be, and so
this setting has no effect. Similarly for Win32 under recent MS compilers.
(On most x86s, the asm version is only slightly faster than the C version.)

malloc_getpagesize default: derive from system includes, or 4096.
The system page size. To the extent possible, this malloc manages
memory from the system in page-size units. This may be (and
usually is) a function rather than a constant. This is ignored
if WIN32, where page size is determined using GetSystemInfo during
initialization.

USE_DEV_RANDOM default: 0 (i.e., not used)
Causes malloc to use /dev/random to initialize secure magic seed for
stamping footers. Otherwise, the current time is used.

NO_MALLINFO default: 0
If defined, don't compile "mallinfo". This can be a simple way
of dealing with mismatches between system declarations and
those in this file.

MALLINFO_FIELD_TYPE default: size_t
The type of the fields in the mallinfo struct. This was originally
defined as "int" in SVID etc, but is more usefully defined as
size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

REALLOC_ZERO_BYTES_FREES default: not defined
This should be set if a call to realloc with zero bytes should
be the same as a call to free. Some people think it should. Otherwise,
since this malloc returns a unique pointer for malloc(0), so does
realloc(p, 0).

LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
LACKS_STDLIB_H default: NOT defined unless on WIN32
Define these if your system does not have these header files.
You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
                     system_info.dwAllocationGranularity in WIN32,
                     otherwise 64K.
Also settable using mallopt(M_GRANULARITY, x)
The unit for allocating and deallocating memory from the system. On
most systems with contiguous MORECORE, there is no reason to
make this more than a page. However, systems with MMAP tend to
either require or encourage larger granularities. You can increase
this value to prevent system allocation functions from being called
so often, especially if they are slow. The value must be at least one
page and must be a power of two. Setting to 0 causes initialization
to either page size or win32 region size. (Note: In previous
versions of malloc, the equivalent of this option was called
"TOP_PAD")

DEFAULT_TRIM_THRESHOLD default: 2MB
Also settable using mallopt(M_TRIM_THRESHOLD, x)
The maximum amount of unused top-most memory to keep before
releasing via malloc_trim in free(). Automatic trimming is mainly
useful in long-lived programs using contiguous MORECORE. Because
trimming via sbrk can be slow on some systems, and can sometimes be
wasteful (in cases where programs immediately afterward allocate
more large chunks) the value should be high enough so that your
overall system performance would improve by releasing this much
memory. As a rough guide, you might set to a value close to the
average size of a process (program) running on your system.
Releasing this much memory would allow such a process to run in
memory. Generally, it is worth tuning trim thresholds when a
program undergoes phases where several large chunks are allocated
and released in ways that can reuse each other's storage, perhaps
mixed with phases where there are no such chunks at all. The trim
value must be greater than page size to have any useful effect. To
disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
some people use of mallocing a huge space and then freeing it at
program startup, in an attempt to reserve system memory, doesn't
have the intended effect under automatic trimming, since that memory
will immediately be returned to the system.
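
For example (a sketch, assuming the default public names rather than
the dl-prefixed ones), raising the threshold or disabling trimming:

  mallopt(M_TRIM_THRESHOLD, 8 * 1024 * 1024); /* keep up to 8MB untrimmed */
  mallopt(M_TRIM_THRESHOLD, -1);              /* -1 maps to MAX_SIZE_T */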

DEFAULT_MMAP_THRESHOLD default: 256K
Also settable using mallopt(M_MMAP_THRESHOLD, x)
The request size threshold for using MMAP to directly service a
request. Requests of at least this size that cannot be allocated
using already-existing space will be serviced via mmap. (If enough
normal freed space already exists it is used instead.) Using mmap
segregates relatively large chunks of memory so that they can be
individually obtained and released from the host system. A request
serviced through mmap is never reused by any other request (at least
not directly; the system may just so happen to remap successive
requests to the same locations). Segregating space in this way has
the benefits that: Mmapped space can always be individually released
back to the system, which helps keep the system level memory demands
of a long-lived program low. Also, mapped memory doesn't become
`locked' between other chunks, as can happen with normally allocated
chunks, which means that even trimming via malloc_trim would not
release them. However, it has the disadvantage that the space
cannot be reclaimed, consolidated, and then used to service later
requests, as happens with normal chunks. The advantages of mmap
nearly always outweigh disadvantages for "large" chunks, but the
value of "large" may vary across systems. The default is an
empirically derived value that works well in most systems. You can
disable mmap by setting to MAX_SIZE_T.

MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
The number of consolidated frees between checks to release
unused segments when freeing. When using non-contiguous segments,
especially with multiple mspaces, checking only for topmost space
doesn't always suffice to trigger trimming. To compensate for this,
free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
current number of segments, if greater) try to release unused
segments to the OS when freeing chunks that result in
consolidation. The best value for this parameter is a compromise
between slowing down frees with relatively costly checks that
rarely trigger versus holding on to unused memory. To effectively
disable, set to MAX_SIZE_T. This may lead to a very slight speed
improvement at the expense of carrying around more memory.
*/

#define USE_DL_PREFIX
//#define HAVE_USR_INCLUDE_MALLOC_H
//#define MSPACES 1
#define NO_SEGMENT_TRAVERSAL 1

/* Version identifier to allow people to support multiple versions */
#ifndef DLMALLOC_VERSION
#define DLMALLOC_VERSION 20804
#endif /* DLMALLOC_VERSION */

#ifndef WIN32
#ifdef _WIN32
#define WIN32 1
#endif /* _WIN32 */
#ifdef _WIN32_WCE
#define LACKS_FCNTL_H
#define WIN32 1
#endif /* _WIN32_WCE */
#endif /* WIN32 */
#ifdef WIN32
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#define HAVE_MMAP 1
#define HAVE_MORECORE 0
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H
#define LACKS_STRING_H
#define LACKS_STRINGS_H
#define LACKS_SYS_TYPES_H
#define LACKS_ERRNO_H
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION
#endif /* MALLOC_FAILURE_ACTION */
#ifdef _WIN32_WCE /* WINCE reportedly does not clear */
#define MMAP_CLEARS 0
#else
#define MMAP_CLEARS 1
#endif /* _WIN32_WCE */
#endif /* WIN32 */

#if defined(DARWIN) || defined(_DARWIN)
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
#ifndef HAVE_MORECORE
#define HAVE_MORECORE 0
#define HAVE_MMAP 1
/* OSX allocators provide 16 byte alignment */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)16U)
#endif
#endif /* HAVE_MORECORE */
#endif /* DARWIN */

#ifndef LACKS_SYS_TYPES_H
#include <sys/types.h> /* For size_t */
#endif /* LACKS_SYS_TYPES_H */

#if (defined(__GNUC__) && ((defined(__i386__) || defined(__x86_64__)))) || (defined(_MSC_VER) && _MSC_VER>=1310)
#define SPIN_LOCKS_AVAILABLE 1
#else
#define SPIN_LOCKS_AVAILABLE 0
#endif

/* The maximum possible size_t value has all bits set */
#define MAX_SIZE_T (~(size_t)0)

#ifndef ONLY_MSPACES
#define ONLY_MSPACES 0 /* define to a value */
#else
#define ONLY_MSPACES 1
#endif /* ONLY_MSPACES */
#ifndef MSPACES
#if ONLY_MSPACES
#define MSPACES 1
#else /* ONLY_MSPACES */
#define MSPACES 0
#endif /* ONLY_MSPACES */
#endif /* MSPACES */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)8U)
#endif /* MALLOC_ALIGNMENT */
#ifndef FOOTERS
#define FOOTERS 0
#endif /* FOOTERS */
#ifndef ABORT
#define ABORT abort()
#endif /* ABORT */
#ifndef ABORT_ON_ASSERT_FAILURE
#define ABORT_ON_ASSERT_FAILURE 1
#endif /* ABORT_ON_ASSERT_FAILURE */
#ifndef PROCEED_ON_ERROR
#define PROCEED_ON_ERROR 0
#endif /* PROCEED_ON_ERROR */
#ifndef USE_LOCKS
#define USE_LOCKS 0
#endif /* USE_LOCKS */
#ifndef USE_SPIN_LOCKS
#if USE_LOCKS && SPIN_LOCKS_AVAILABLE
#define USE_SPIN_LOCKS 1
#else
#define USE_SPIN_LOCKS 0
#endif /* USE_LOCKS && SPIN_LOCKS_AVAILABLE. */
#endif /* USE_SPIN_LOCKS */
#ifndef INSECURE
#define INSECURE 0
#endif /* INSECURE */
#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif /* HAVE_MMAP */
#ifndef MMAP_CLEARS
#define MMAP_CLEARS 1
#endif /* MMAP_CLEARS */
#ifndef HAVE_MREMAP
#ifdef linux
#define HAVE_MREMAP 1
#else /* linux */
#define HAVE_MREMAP 0
#endif /* linux */
#endif /* HAVE_MREMAP */
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION errno = ENOMEM;
#endif /* MALLOC_FAILURE_ACTION */
#ifndef HAVE_MORECORE
#if ONLY_MSPACES
#define HAVE_MORECORE 0
#else /* ONLY_MSPACES */
#define HAVE_MORECORE 1
#endif /* ONLY_MSPACES */
#endif /* HAVE_MORECORE */
#if !HAVE_MORECORE
#define MORECORE_CONTIGUOUS 0
#else /* !HAVE_MORECORE */
#define MORECORE_DEFAULT sbrk
#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif /* MORECORE_CONTIGUOUS */
#endif /* HAVE_MORECORE */
#ifndef DEFAULT_GRANULARITY
#if (MORECORE_CONTIGUOUS || defined(WIN32))
#define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
#else /* MORECORE_CONTIGUOUS */
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
#endif /* MORECORE_CONTIGUOUS */
#endif /* DEFAULT_GRANULARITY */
#ifndef DEFAULT_TRIM_THRESHOLD
#ifndef MORECORE_CANNOT_TRIM
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
#else /* MORECORE_CANNOT_TRIM */
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
#endif /* MORECORE_CANNOT_TRIM */
#endif /* DEFAULT_TRIM_THRESHOLD */
#ifndef DEFAULT_MMAP_THRESHOLD
#if HAVE_MMAP
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
#else /* HAVE_MMAP */
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
#endif /* HAVE_MMAP */
#endif /* DEFAULT_MMAP_THRESHOLD */
#ifndef MAX_RELEASE_CHECK_RATE
#if HAVE_MMAP
#define MAX_RELEASE_CHECK_RATE 4095
#else
#define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
#endif /* HAVE_MMAP */
#endif /* MAX_RELEASE_CHECK_RATE */
#ifndef USE_BUILTIN_FFS
#define USE_BUILTIN_FFS 0
#endif /* USE_BUILTIN_FFS */
#ifndef USE_DEV_RANDOM
#define USE_DEV_RANDOM 0
#endif /* USE_DEV_RANDOM */
#ifndef NO_MALLINFO
#define NO_MALLINFO 0
#endif /* NO_MALLINFO */
#ifndef MALLINFO_FIELD_TYPE
#define MALLINFO_FIELD_TYPE size_t
#endif /* MALLINFO_FIELD_TYPE */
#ifndef NO_SEGMENT_TRAVERSAL
#define NO_SEGMENT_TRAVERSAL 0
#endif /* NO_SEGMENT_TRAVERSAL */

/*
mallopt tuning options. SVID/XPG defines four standard parameter
numbers for mallopt, normally defined in malloc.h. None of these
are used in this malloc, so setting them has no effect. But this
malloc does support the following options.
*/

#define M_TRIM_THRESHOLD (-1)
#define M_GRANULARITY (-2)
#define M_MMAP_THRESHOLD (-3)

/* ------------------------ Mallinfo declarations ------------------------ */

#if !NO_MALLINFO
/*
This version of malloc supports the standard SVID/XPG mallinfo
routine that returns a struct containing usage properties and
statistics. It should work on any system that has a
/usr/include/malloc.h defining struct mallinfo. The main
declaration needed is the mallinfo struct that is returned (by-copy)
by mallinfo(). The mallinfo struct contains a bunch of fields that
are not even meaningful in this version of malloc. These fields
are instead filled by mallinfo() with other numbers that might be of
interest.

HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
/usr/include/malloc.h file that includes a declaration of struct
mallinfo. If so, it is included; else a compliant version is
declared below. These must be precisely the same for mallinfo() to
work. The original SVID version of this struct, defined on most
systems with mallinfo, declares all fields as ints. But some others
define as unsigned long. If your system defines the fields using a
type of different width than listed here, you MUST #include your
system version and #define HAVE_USR_INCLUDE_MALLOC_H.
*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else /* HAVE_USR_INCLUDE_MALLOC_H */
#ifndef STRUCT_MALLINFO_DECLARED
#define STRUCT_MALLINFO_DECLARED 1
struct mallinfo {
  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
  MALLINFO_FIELD_TYPE fordblks; /* total free space */
  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
};
#endif /* STRUCT_MALLINFO_DECLARED */
#endif /* HAVE_USR_INCLUDE_MALLOC_H */
#endif /* NO_MALLINFO */

/*
Try to persuade compilers to inline. The most critical functions for
inlining are defined as macros, so these aren't used for them.
*/

#ifndef FORCEINLINE
#if defined(__GNUC__)
#define FORCEINLINE __inline __attribute__ ((always_inline))
#elif defined(_MSC_VER)
#define FORCEINLINE __forceinline
#endif
#endif
#ifndef NOINLINE
#if defined(__GNUC__)
#define NOINLINE __attribute__ ((noinline))
#elif defined(_MSC_VER)
#define NOINLINE __declspec(noinline)
#else
#define NOINLINE
#endif
#endif

#ifdef __cplusplus
extern "C" {
#ifndef FORCEINLINE
#define FORCEINLINE inline
#endif
#endif /* __cplusplus */
#ifndef FORCEINLINE
#define FORCEINLINE
#endif

#if !ONLY_MSPACES

/* ------------------- Declarations of public routines ------------------- */

#ifndef USE_DL_PREFIX
#define dlcalloc calloc
#define dlfree free
#define dlmalloc malloc
#define dlmemalign memalign
#define dlrealloc realloc
#define dlvalloc valloc
#define dlpvalloc pvalloc
#define dlmallinfo mallinfo
#define dlmallopt mallopt
#define dlmalloc_trim malloc_trim
#define dlmalloc_stats malloc_stats
#define dlmalloc_usable_size malloc_usable_size
#define dlmalloc_footprint malloc_footprint
#define dlmalloc_max_footprint malloc_max_footprint
#define dlindependent_calloc independent_calloc
#define dlindependent_comalloc independent_comalloc
#endif /* USE_DL_PREFIX */


/*
malloc(size_t n)
Returns a pointer to a newly allocated chunk of at least n bytes, or
null if no space is available, in which case errno is set to ENOMEM
on ANSI C systems.

If n is zero, malloc returns a minimum-sized chunk. (The minimum
size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
systems.) Note that size_t is an unsigned type, so calls with
arguments that would be negative if signed are interpreted as
requests for huge amounts of space, which will often fail. The
maximum supported value of n differs across systems, but is in all
cases less than the maximum representable value of a size_t.
*/
void* dlmalloc(size_t);

/*
free(void* p)
Releases the chunk of memory pointed to by p, that had been previously
allocated using malloc or a related routine such as realloc.
It has no effect if p is null. If p was not malloced or already
freed, free(p) will by default cause the current program to abort.
*/
void dlfree(void*);

/*
calloc(size_t n_elements, size_t element_size);
Returns a pointer to n_elements * element_size bytes, with all locations
set to zero.
*/
void* dlcalloc(size_t, size_t);

/*
realloc(void* p, size_t n)
Returns a pointer to a chunk of size n that contains the same data
as does chunk p up to the minimum of (n, p's size) bytes, or null
if no space is available.

The returned pointer may or may not be the same as p. The algorithm
prefers extending p in most cases when possible, otherwise it
employs the equivalent of a malloc-copy-free sequence.

If p is null, realloc is equivalent to malloc.

If space is not available, realloc returns null, errno is set (if on
ANSI) and p is NOT freed.

If n is for fewer bytes than already held by p, the newly unused
space is lopped off and freed if possible. realloc with a size
argument of zero (re)allocates a minimum-sized chunk.

The old unix realloc convention of allowing the last-free'd chunk
to be used as an argument to realloc is not supported.
*/

void* dlrealloc(void*, size_t);

/*
memalign(size_t alignment, size_t n);
Returns a pointer to a newly allocated chunk of n bytes, aligned
in accord with the alignment argument.

The alignment argument should be a power of two. If the argument is
not a power of two, the nearest greater power is used.
8-byte alignment is guaranteed by normal malloc calls, so don't
bother calling memalign with an argument of 8 or less.

Overreliance on memalign is a sure way to fragment space.
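
A minimal usage sketch (the 64-byte alignment is illustrative):

  void* p = memalign(64, 1000); /* 1000 bytes, aligned to 64 bytes */
  assert(((size_t)p % 64) == 0);
  free(p);                      /* memalign'ed chunks are freed normally */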
*/
void* dlmemalign(size_t, size_t);

/*
valloc(size_t n);
Equivalent to memalign(pagesize, n), where pagesize is the page
size of the system. If the pagesize is unknown, 4096 is used.
*/
void* dlvalloc(size_t);

/*
mallopt(int parameter_number, int parameter_value)
Sets tunable parameters. The format is to provide a
(parameter-number, parameter-value) pair. mallopt then sets the
corresponding parameter to the argument value if it can (i.e., so
long as the value is meaningful), and returns 1 if successful else
0. To work around the fact that mallopt is specified to use int,
not size_t parameters, the value -1 is specially treated as the
maximum unsigned size_t value.

SVID/XPG/ANSI defines four standard param numbers for mallopt,
normally defined in malloc.h. None of these are used in this malloc,
so setting them has no effect. But this malloc also supports other
options in mallopt. See below for details. Briefly, supported
parameters are as follows (listed defaults are for "typical"
configurations).

Symbol            param #  default      allowed param values
M_TRIM_THRESHOLD    -1     2*1024*1024  any (-1 disables)
M_GRANULARITY       -2     page size    any power of 2 >= page size
M_MMAP_THRESHOLD    -3     256*1024     any (or 0 if no MMAP support)
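
For example (a sketch; the values are only illustrative):

  mallopt(M_GRANULARITY, 128 * 1024);      /* 128K system allocation unit */
  mallopt(M_MMAP_THRESHOLD, 1024 * 1024);  /* mmap requests >= 1MB */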
*/
int dlmallopt(int, int);

/*
malloc_footprint();
Returns the number of bytes obtained from the system. The total
number of bytes allocated by malloc, realloc etc., is less than this
value. Unlike mallinfo, this function returns only a precomputed
result, so can be called frequently to monitor memory consumption.
Even if locks are otherwise defined, this function does not use them,
so results might not be up to date.
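
For example, a sketch of a lightweight monitor (report_footprint is
an illustrative name):

  void report_footprint(void) {
    fprintf(stderr, "footprint: %zu bytes\n", malloc_footprint());
  }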
*/
size_t dlmalloc_footprint(void);

/*
malloc_max_footprint();
Returns the maximum number of bytes obtained from the system. This
value will be greater than current footprint if deallocated space
has been reclaimed by the system. The peak number of bytes allocated
by malloc, realloc etc., is less than this value. Unlike mallinfo,
this function returns only a precomputed result, so can be called
frequently to monitor memory consumption. Even if locks are
otherwise defined, this function does not use them, so results might
not be up to date.
*/
size_t dlmalloc_max_footprint(void);

#if !NO_MALLINFO
/*
mallinfo()
Returns (by copy) a struct containing various summary statistics:

arena:     current total non-mmapped bytes allocated from system
ordblks:   the number of free chunks
smblks:    always zero.
hblks:     current number of mmapped regions
hblkhd:    total bytes held in mmapped regions
usmblks:   the maximum total allocated space. This will be greater
           than current total if trimming has occurred.
fsmblks:   always zero
uordblks:  current total allocated space (normal or mmapped)
fordblks:  total free space
keepcost:  the maximum number of bytes that could ideally be released
           back to system via malloc_trim. ("ideally" means that
           it ignores page restrictions etc.)

Because these fields are ints, but internal bookkeeping may
be kept as longs, the reported values may wrap around zero and
thus be inaccurate.
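
A usage sketch (with the default MALLINFO_FIELD_TYPE of size_t):

  struct mallinfo mi = mallinfo();
  fprintf(stderr, "in use: %zu free: %zu\n",
          (size_t)mi.uordblks, (size_t)mi.fordblks);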
*/
struct mallinfo dlmallinfo(void);
#endif /* NO_MALLINFO */

/*
independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);

independent_calloc is similar to calloc, but instead of returning a
single cleared space, it returns an array of pointers to n_elements
independent elements that can hold contents of size elem_size, each
of which starts out cleared, and can be independently freed,
realloc'ed etc. The elements are guaranteed to be adjacently
allocated (this is not guaranteed to occur with multiple callocs or
mallocs), which may also improve cache locality in some
applications.

The "chunks" argument is optional (i.e., may be null, which is
probably the most typical usage). If it is null, the returned array
is itself dynamically allocated and should also be freed when it is
no longer needed. Otherwise, the chunks array must be of at least
n_elements in length. It is filled in with the pointers to the
chunks.

In either case, independent_calloc returns this pointer array, or
null if the allocation failed. If n_elements is zero and "chunks"
is null, it returns a chunk representing an array with zero elements
(which should be freed if not wanted).

Each element must be individually freed when it is no longer
needed. If you'd like to instead be able to free all at once, you
should instead use regular calloc and assign pointers into this
space to represent elements. (In this case though, you cannot
independently free elements.)

independent_calloc simplifies and speeds up implementations of many
kinds of pools. It may also be useful when constructing large data
structures that initially have a fixed number of fixed-sized nodes,
but the number is not known at compile time, and some of the nodes
may later need to be freed. For example:
  struct Node { int item; struct Node* next; };

  struct Node* build_list() {
    struct Node** pool;
    int n = read_number_of_nodes_needed();
    int i;
    if (n <= 0) return 0;
    pool = (struct Node**)independent_calloc(n, sizeof(struct Node), 0);
    if (pool == 0) die();
    // organize into a linked list...
    struct Node* first = pool[0];
    for (i = 0; i < n-1; ++i)
      pool[i]->next = pool[i+1];
    free(pool); // Can now free the array (or not, if it is needed later)
    return first;
  }
*/
void** dlindependent_calloc(size_t, size_t, void**);

/*
independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);

independent_comalloc allocates, all at once, a set of n_elements
chunks with sizes indicated in the "sizes" array. It returns
an array of pointers to these elements, each of which can be
independently freed, realloc'ed etc. The elements are guaranteed to
be adjacently allocated (this is not guaranteed to occur with
multiple callocs or mallocs), which may also improve cache locality
in some applications.

The "chunks" argument is optional (i.e., may be null). If it is null
the returned array is itself dynamically allocated and should also
be freed when it is no longer needed. Otherwise, the chunks array
must be of at least n_elements in length. It is filled in with the
pointers to the chunks.

In either case, independent_comalloc returns this pointer array, or
null if the allocation failed. If n_elements is zero and chunks is
null, it returns a chunk representing an array with zero elements
(which should be freed if not wanted).

Each element must be individually freed when it is no longer
needed. If you'd like to instead be able to free all at once, you
should instead use a single regular malloc, and assign pointers at
particular offsets in the aggregate space. (In this case though, you
cannot independently free elements.)

independent_comalloc differs from independent_calloc in that each
element may have a different size, and also that it does not
automatically clear elements.

independent_comalloc can be used to speed up allocation in cases
where several structs or objects must always be allocated at the
same time. For example:

  struct Head { ... }
  struct Foot { ... }

  void send_message(char* msg) {
    int msglen = strlen(msg);
    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
    void* chunks[3];
    if (independent_comalloc(3, sizes, chunks) == 0)
      die();
    struct Head* head = (struct Head*)(chunks[0]);
    char* body = (char*)(chunks[1]);
    struct Foot* foot = (struct Foot*)(chunks[2]);
    // ...
  }

In general though, independent_comalloc is worth using only for
larger values of n_elements. For small values, you probably won't
detect enough difference from series of malloc calls to bother.

Overuse of independent_comalloc can increase overall memory usage,
since it cannot reuse existing noncontiguous small chunks that
might be available for some of the elements.
*/
void** dlindependent_comalloc(size_t, size_t*, void**);


/*
pvalloc(size_t n);
Equivalent to valloc(minimum-page-that-holds(n)), that is,
round up n to nearest pagesize.
*/
void* dlpvalloc(size_t);

/*
malloc_trim(size_t pad);

If possible, gives memory back to the system (via negative arguments
to sbrk) if there is unused memory at the `high' end of the malloc
pool or in unused MMAP segments. You can call this after freeing
large blocks of memory to potentially reduce the system-level memory
requirements of a program. However, it cannot guarantee to reduce
memory. Under some allocation patterns, some large free blocks of
memory will be locked between two used chunks, so they cannot be
given back to the system.

The `pad' argument to malloc_trim represents the amount of free
trailing space to leave untrimmed. If this argument is zero, only
the minimum amount of memory to maintain internal data structures
will be left. Non-zero arguments can be supplied to maintain enough
trailing space to service future expected allocations without having
to re-obtain memory from the system.

Malloc_trim returns 1 if it actually released any memory, else 0.
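
For example, a sketch of trimming after a large deallocation phase
(big_buffer is illustrative):

  free(big_buffer);
  if (malloc_trim(0))
    fprintf(stderr, "released unused memory to the system\n");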
*/
int dlmalloc_trim(size_t);

/*
malloc_stats();
Prints on stderr the amount of space obtained from the system (both
via sbrk and mmap), the maximum amount (which may be more than
current if malloc_trim and/or munmap got called), and the current
number of bytes allocated via malloc (or realloc, etc) but not yet
freed. Note that this is the number of bytes allocated, not the
number requested. It will be larger than the number requested
because of alignment and bookkeeping overhead. Because it includes
alignment wastage as being in use, this figure may be greater than
zero even when no user-level chunks are allocated.

The reported current and maximum system memory can be inaccurate if
a program makes other calls to system memory allocation functions
(normally sbrk) outside of malloc.

malloc_stats prints only the most commonly interesting statistics.
More information can be obtained by calling mallinfo.
*/
void dlmalloc_stats(void);

#endif /* ONLY_MSPACES */

/*
malloc_usable_size(void* p);

Returns the number of bytes you can actually use in
an allocated chunk, which may be more than you requested (although
often not) due to alignment and minimum size constraints.
You can use this many bytes without worrying about
overwriting other allocated objects. This is not a particularly great
programming practice. malloc_usable_size can be more useful in
debugging and assertions, for example:

  p = malloc(n);
  assert(malloc_usable_size(p) >= 256);
*/
size_t dlmalloc_usable_size(void*);


#if MSPACES

/*
mspace is an opaque type representing an independent
region of space that supports mspace_malloc, etc.
*/
typedef void* mspace;

/*
create_mspace creates and returns a new independent space with the
given initial capacity, or, if 0, the default granularity size. It
returns null if there is no system memory available to create the
space. If argument locked is non-zero, the space uses a separate
lock to control access. The capacity of the space will grow
dynamically as needed to service mspace_malloc requests. You can
control the sizes of incremental increases of this space by
compiling with a different DEFAULT_GRANULARITY or dynamically
setting with mallopt(M_GRANULARITY, value).
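
A usage sketch:

  mspace ms = create_mspace(0, 0); /* default capacity, no locking */
  void* p = mspace_malloc(ms, 128);
  mspace_free(ms, p);
  destroy_mspace(ms);              /* returns the space's memory */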
*/
mspace create_mspace(size_t capacity, int locked);

/*
destroy_mspace destroys the given space, and attempts to return all
of its memory back to the system, returning the total number of
bytes freed. After destruction, the results of access to all memory
used by the space become undefined.
*/
size_t destroy_mspace(mspace msp);

/*
create_mspace_with_base uses the memory supplied as the initial base
of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
space is used for bookkeeping, so the capacity must be at least this
large. (Otherwise 0 is returned.) When this initial space is
exhausted, additional memory will be obtained from the system.
Destroying this space will deallocate all additionally allocated
space (if possible) but not the initial base.
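
A sketch using a static arena (the 1MB size is illustrative; it must
exceed the bookkeeping overhead noted above):

  static char arena[1024 * 1024];
  mspace ms = create_mspace_with_base(arena, sizeof(arena), 0);
  void* p = mspace_malloc(ms, 100);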
*/
mspace create_mspace_with_base(void* base, size_t capacity, int locked);

/*
mspace_track_large_chunks controls whether requests for large chunks
are allocated in their own untracked mmapped regions, separate from
others in this mspace. By default large chunks are not tracked,
which reduces fragmentation. However, such chunks are not
necessarily released to the system upon destroy_mspace. Enabling
tracking by setting to true may increase fragmentation, but avoids
leakage when relying on destroy_mspace to release all memory
allocated using this space. The function returns the previous
setting.
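
For example (a sketch):

  mspace ms = create_mspace(0, 0);
  mspace_track_large_chunks(ms, 1); /* returns the previous setting */
  /* ... large allocations in ms can now be released by destroy_mspace */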
*/
int mspace_track_large_chunks(mspace msp, int enable);


/*
mspace_malloc behaves as malloc, but operates within
the given space.
*/
void* mspace_malloc(mspace msp, size_t bytes);

/*
mspace_free behaves as free, but operates within
the given space.

If compiled with FOOTERS==1, mspace_free is not actually needed.
free may be called instead of mspace_free because freed chunks from
any space are handled by their originating spaces.
*/
void mspace_free(mspace msp, void* mem);

/*
mspace_realloc behaves as realloc, but operates within
the given space.

If compiled with FOOTERS==1, mspace_realloc is not actually
needed. realloc may be called instead of mspace_realloc because
realloced chunks from any space are handled by their originating
spaces.
*/
void* mspace_realloc(mspace msp, void* mem, size_t newsize);

/*
mspace_calloc behaves as calloc, but operates within
the given space.
*/
void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);

/*
mspace_memalign behaves as memalign, but operates within
the given space.
*/
void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);

/*
mspace_independent_calloc behaves as independent_calloc, but
operates within the given space.
*/
void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]);

/*
mspace_independent_comalloc behaves as independent_comalloc, but
operates within the given space.
*/
void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]);

/*
mspace_footprint() returns the number of bytes obtained from the
system for this space.
*/
size_t mspace_footprint(mspace msp);

/*
mspace_max_footprint() returns the peak number of bytes obtained from the
system for this space.
*/
size_t mspace_max_footprint(mspace msp);


#if !NO_MALLINFO
/*
mspace_mallinfo behaves as mallinfo, but reports properties of
the given space.
*/
struct mallinfo mspace_mallinfo(mspace msp);
#endif /* NO_MALLINFO */

/*
mspace_usable_size(void* mem) behaves the same as malloc_usable_size.
*/
size_t mspace_usable_size(void* mem);

/*
mspace_malloc_stats behaves as malloc_stats, but reports
properties of the given space.
*/
void mspace_malloc_stats(mspace msp);

/*
mspace_trim behaves as malloc_trim, but
operates within the given space.
*/
int mspace_trim(mspace msp, size_t pad);

/*
An alias for mallopt.
*/
int mspace_mallopt(int, int);

#endif /* MSPACES */

#ifdef __cplusplus
} /* end of extern "C" */
#endif /* __cplusplus */

/*
========================================================================
To make a fully customizable malloc.h header file, cut everything
above this line, put into file malloc.h, edit to suit, and #include it
on the next line, as well as in programs that use this malloc.
========================================================================
*/

/* #include "malloc.h" */

/*------------------------------ internal #includes ---------------------- */

#ifdef WIN32
#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
#endif /* WIN32 */

#include <stdio.h> /* for printing in malloc_stats */

#ifndef LACKS_ERRNO_H
#include <errno.h> /* for MALLOC_FAILURE_ACTION */
#endif /* LACKS_ERRNO_H */
/*#if FOOTERS || DEBUG
*/
#include <time.h> /* for magic initialization */
/*#endif*/ /* FOOTERS */
#ifndef LACKS_STDLIB_H
#include <stdlib.h> /* for abort() */
#endif /* LACKS_STDLIB_H */
#ifdef DEBUG
#if ABORT_ON_ASSERT_FAILURE
#undef assert
#define assert(x) if(!(x)) ABORT
#else /* ABORT_ON_ASSERT_FAILURE */
#include <assert.h>
#endif /* ABORT_ON_ASSERT_FAILURE */
#else /* DEBUG */
#ifndef assert
#define assert(x)
#endif
#define DEBUG 0
#endif /* DEBUG */
#ifndef LACKS_STRING_H
#include <string.h> /* for memset etc */
#endif /* LACKS_STRING_H */
#if USE_BUILTIN_FFS
#ifndef LACKS_STRINGS_H
#include <strings.h> /* for ffs */
#endif /* LACKS_STRINGS_H */
#endif /* USE_BUILTIN_FFS */
#if HAVE_MMAP
#ifndef LACKS_SYS_MMAN_H
/* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
#if (defined(linux) && !defined(__USE_GNU))
#define __USE_GNU 1
#include <sys/mman.h> /* for mmap */
#undef __USE_GNU
#else
#include <sys/mman.h> /* for mmap */
#endif /* linux */
#endif /* LACKS_SYS_MMAN_H */
#ifndef LACKS_FCNTL_H
#include <fcntl.h>
#endif /* LACKS_FCNTL_H */
#endif /* HAVE_MMAP */
#ifndef LACKS_UNISTD_H
#include <unistd.h> /* for sbrk, sysconf */
#else /* LACKS_UNISTD_H */
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
extern void* sbrk(ptrdiff_t);
#endif /* FreeBSD etc */
#endif /* LACKS_UNISTD_H */

/* Declarations for locking */
#if USE_LOCKS
#ifndef WIN32
#include <pthread.h>
#if defined (__SVR4) && defined (__sun) /* solaris */
#include <thread.h>
#endif /* solaris */
#else
#ifndef _M_AMD64
/* These are already defined on AMD64 builds */
#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */
LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
#ifdef __cplusplus
}
#endif /* __cplusplus */
#endif /* _M_AMD64 */
#pragma intrinsic (_InterlockedCompareExchange)
#pragma intrinsic (_InterlockedExchange)
#define interlockedcompareexchange _InterlockedCompareExchange
#define interlockedexchange _InterlockedExchange
#endif /* Win32 */
#endif /* USE_LOCKS */

/* Declarations for bit scanning on win32 */
#if defined(_MSC_VER) && _MSC_VER>=1300
#ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */
unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
#ifdef __cplusplus
}
#endif /* __cplusplus */

#define BitScanForward _BitScanForward
#define BitScanReverse _BitScanReverse
#pragma intrinsic(_BitScanForward)
#pragma intrinsic(_BitScanReverse)
#endif /* BitScanForward */
#endif /* defined(_MSC_VER) && _MSC_VER>=1300 */

#ifndef WIN32
#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32 /* use supplied emulation of getpagesize */
#        define malloc_getpagesize getpagesize()
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else /* just guess */
#                define malloc_getpagesize ((size_t)4096U)
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif
#endif



/* ------------------- size_t and alignment properties -------------------- */
1440
1441 /* The byte and bit size of a size_t */
1442 #define SIZE_T_SIZE (sizeof(size_t))
1443 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1444
1445 /* Some constants coerced to size_t */
1446 /* Annoying but necessary to avoid errors on some platforms */
1447 #define SIZE_T_ZERO ((size_t)0)
1448 #define SIZE_T_ONE ((size_t)1)
1449 #define SIZE_T_TWO ((size_t)2)
1450 #define SIZE_T_FOUR ((size_t)4)
1451 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1452 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1453 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1454 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1455
1456 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1457 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1458
1459 /* True if address a has acceptable alignment */
1460 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1461
1462 /* the number of bytes to offset an address to align it */
1463 #define align_offset(A)\
1464 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1465 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
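
/*
  Worked example (illustrative, default MALLOC_ALIGNMENT == 8):
  align_offset((void*)0x1004) == 4, since 0x1004 + 4 == 0x1008 is the
  next 8-byte boundary, while align_offset((void*)0x1008) == 0 and
  is_aligned((void*)0x1008) holds.
*/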
1466
1467 /* -------------------------- MMAP preliminaries ------------------------- */
1468
1469 /*
1470 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1471 checks to fail so compiler optimizer can delete code rather than
1472 using so many "#if"s.
1473 */
1474
1475
1476 /* MORECORE and MMAP must return MFAIL on failure */
1477 #define MFAIL ((void*)(MAX_SIZE_T))
1478 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1479
1480 #if HAVE_MMAP
1481
1482 #ifndef WIN32
1483 #define MUNMAP_DEFAULT(a, s) munmap((a), (s))
1484 #define MMAP_PROT (PROT_READ|PROT_WRITE)
1485 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1486 #define MAP_ANONYMOUS MAP_ANON
1487 #endif /* MAP_ANON */
1488 #ifdef MAP_ANONYMOUS
1489 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1490 #define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1491 #else /* MAP_ANONYMOUS */
1492 /*
1493 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1494 is unlikely to be needed, but is supplied just in case.
1495 */
1496 #define MMAP_FLAGS (MAP_PRIVATE)
1497 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1498 #define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
1499 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1500 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1501 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1502 #endif /* MAP_ANONYMOUS */
1503
1504 #define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
1505
1506 #else /* WIN32 */
1507
1508 /* Win32 MMAP via VirtualAlloc */
1509 static FORCEINLINE void* win32mmap(size_t size) {
1510 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1511 return (ptr != 0)? ptr: MFAIL;
1512 }
1513
1514 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1515 static FORCEINLINE void* win32direct_mmap(size_t size) {
1516 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1517 PAGE_READWRITE);
1518 return (ptr != 0)? ptr: MFAIL;
1519 }
1520
1521 /* This function supports releasing coalesced segments */
1522 static FORCEINLINE int win32munmap(void* ptr, size_t size) {
1523 MEMORY_BASIC_INFORMATION minfo;
1524 char* cptr = (char*)ptr;
1525 while (size) {
1526 if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1527 return -1;
1528 if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1529 minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1530 return -1;
1531 if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1532 return -1;
1533 cptr += minfo.RegionSize;
1534 size -= minfo.RegionSize;
1535 }
1536 return 0;
1537 }
1538
1539 #define MMAP_DEFAULT(s) win32mmap(s)
1540 #define MUNMAP_DEFAULT(a, s) win32munmap((a), (s))
1541 #define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s)
1542 #endif /* WIN32 */
1543 #endif /* HAVE_MMAP */
1544
1545 #if HAVE_MREMAP
1546 #ifndef WIN32
1547 #define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1548 #endif /* WIN32 */
1549 #endif /* HAVE_MREMAP */
1550
1551
1552 /**
1553 * Define CALL_MORECORE
1554 */
1555 #if HAVE_MORECORE
1556 #ifdef MORECORE
1557 #define CALL_MORECORE(S) MORECORE(S)
1558 #else /* MORECORE */
1559 #define CALL_MORECORE(S) MORECORE_DEFAULT(S)
1560 #endif /* MORECORE */
1561 #else /* HAVE_MORECORE */
1562 #define CALL_MORECORE(S) MFAIL
1563 #endif /* HAVE_MORECORE */
1564
1565 /**
1566 * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
1567 */
1568 #if HAVE_MMAP
1569 #define USE_MMAP_BIT (SIZE_T_ONE)
1570
1571 #ifdef MMAP
1572 #define CALL_MMAP(s) MMAP(s)
1573 #else /* MMAP */
1574 #define CALL_MMAP(s) MMAP_DEFAULT(s)
1575 #endif /* MMAP */
1576 #ifdef MUNMAP
1577 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1578 #else /* MUNMAP */
1579 #define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s))
1580 #endif /* MUNMAP */
1581 #ifdef DIRECT_MMAP
1582 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1583 #else /* DIRECT_MMAP */
1584 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
1585 #endif /* DIRECT_MMAP */
1586 #else /* HAVE_MMAP */
1587 #define USE_MMAP_BIT (SIZE_T_ZERO)
1588
1589 #define MMAP(s) MFAIL
1590 #define MUNMAP(a, s) (-1)
1591 #define DIRECT_MMAP(s) MFAIL
1592 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1593 #define CALL_MMAP(s) MMAP(s)
1594 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1595 #endif /* HAVE_MMAP */
1596
1597 /**
1598 * Define CALL_MREMAP
1599 */
1600 #if HAVE_MMAP && HAVE_MREMAP
1601 #ifdef MREMAP
1602 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
1603 #else /* MREMAP */
1604 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
1605 #endif /* MREMAP */
1606 #else /* HAVE_MMAP && HAVE_MREMAP */
1607 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1608 #endif /* HAVE_MMAP && HAVE_MREMAP */
1609
1610 /* mstate bit set if contiguous morecore disabled or failed */
1611 #define USE_NONCONTIGUOUS_BIT (4U)
1612
1613 /* segment bit set in create_mspace_with_base */
1614 #define EXTERN_BIT (8U)
1615
1616
1617 /* --------------------------- Lock preliminaries ------------------------ */
1618
1619 /*
1620 When locks are defined, there is one global lock, plus
1621 one per-mspace lock.
1622
1623   The global lock ensures that mparams.magic and other unique
1624   mparams values are initialized only once. It also protects
1625   sequences of calls to MORECORE. In many cases sys_alloc requires
1626   two calls that should not be interleaved with calls by other
1627   threads. This does not protect against direct calls to MORECORE
1628   by other threads not using this lock, so there is still code to
1629   cope as best we can with interference.
1630
1631 Per-mspace locks surround calls to malloc, free, etc. To enable use
1632 in layered extensions, per-mspace locks are reentrant.
1633
1634 Because lock-protected regions generally have bounded times, it is
1635 OK to use the supplied simple spinlocks in the custom versions for
1636 x86. Spinlocks are likely to improve performance for lightly
1637 contended applications, but worsen performance under heavy
1638 contention.
1639
1640 If USE_LOCKS is > 1, the definitions of lock routines here are
1641 bypassed, in which case you will need to define the type MLOCK_T,
1642 and at least INITIAL_LOCK, ACQUIRE_LOCK, RELEASE_LOCK and possibly
1643 TRY_LOCK (which is not used in this malloc, but commonly needed in
1644 extensions.) You must also declare a
1645 static MLOCK_T malloc_global_mutex = { initialization values };.
1646
1647 */
1648
1649 #if USE_LOCKS == 1
1650
1651 #if USE_SPIN_LOCKS && SPIN_LOCKS_AVAILABLE
1652 #ifndef WIN32
1653
1654 /* Custom pthread-style spin locks on x86 and x64 for gcc */
1655 struct pthread_mlock_t {
1656 volatile unsigned int l;
1657 unsigned int c;
1658 pthread_t threadid;
1659 };
1660 #define MLOCK_T struct pthread_mlock_t
1661 #define CURRENT_THREAD pthread_self()
1662 #define INITIAL_LOCK(sl) ((sl)->threadid = 0, (sl)->l = (sl)->c = 0, 0)
1663 #define ACQUIRE_LOCK(sl) pthread_acquire_lock(sl)
1664 #define RELEASE_LOCK(sl) pthread_release_lock(sl)
1665 #define TRY_LOCK(sl) pthread_try_lock(sl)
1666 #define SPINS_PER_YIELD 63
1667
1668 static MLOCK_T malloc_global_mutex = { 0, 0, 0};
1669
1670 static FORCEINLINE int pthread_acquire_lock (MLOCK_T *sl) {
1671 int spins = 0;
1672 volatile unsigned int* lp = &sl->l;
1673 for (;;) {
1674 if (*lp != 0) {
1675 if (sl->threadid == CURRENT_THREAD) {
1676 ++sl->c;
1677 return 0;
1678 }
1679 }
1680 else {
1681 /* place args to cmpxchgl in locals to evade oddities in some gccs */
1682 int cmp = 0;
1683 int val = 1;
1684 int ret;
1685 __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
1686 : "=a" (ret)
1687 : "r" (val), "m" (*(lp)), "0"(cmp)
1688 : "memory", "cc");
1689 if (!ret) {
1690 assert(!sl->threadid);
1691 sl->threadid = CURRENT_THREAD;
1692 sl->c = 1;
1693 return 0;
1694 }
1695 }
1696 if ((++spins & SPINS_PER_YIELD) == 0) {
1697 #if defined (__SVR4) && defined (__sun) /* solaris */
1698 thr_yield();
1699 #else
1700 #if defined(__linux__) || defined(__FreeBSD__) || defined(__APPLE__)
1701 sched_yield();
1702 #else /* no-op yield on unknown systems */
1703 ;
1704 #endif /* __linux__ || __FreeBSD__ || __APPLE__ */
1705 #endif /* solaris */
1706 }
1707 }
1708 }
1709
1710 static FORCEINLINE void pthread_release_lock (MLOCK_T *sl) {
1711 volatile unsigned int* lp = &sl->l;
1712 assert(*lp != 0);
1713 assert(sl->threadid == CURRENT_THREAD);
1714 if (--sl->c == 0) {
1715 sl->threadid = 0;
1716 int prev = 0;
1717 int ret;
1718 __asm__ __volatile__ ("lock; xchgl %0, %1"
1719 : "=r" (ret)
1720 : "m" (*(lp)), "0"(prev)
1721 : "memory");
1722 }
1723 }
1724
1725 static FORCEINLINE int pthread_try_lock (MLOCK_T *sl) {
1726 volatile unsigned int* lp = &sl->l;
1727 if (*lp != 0) {
1728 if (sl->threadid == CURRENT_THREAD) {
1729 ++sl->c;
1730 return 1;
1731 }
1732 }
1733 else {
1734 int cmp = 0;
1735 int val = 1;
1736 int ret;
1737 __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
1738 : "=a" (ret)
1739 : "r" (val), "m" (*(lp)), "0"(cmp)
1740 : "memory", "cc");
1741 if (!ret) {
1742 assert(!sl->threadid);
1743 sl->threadid = CURRENT_THREAD;
1744 sl->c = 1;
1745 return 1;
1746 }
1747 }
1748 return 0;
1749 }
1750
1751
1752 #else /* WIN32 */
1753 /* Custom win32-style spin locks on x86 and x64 for MSC */
1754 struct win32_mlock_t {
1755 volatile long l;
1756 unsigned int c;
1757 long threadid;
1758 };
1759
1760 #define MLOCK_T struct win32_mlock_t
1761 #define CURRENT_THREAD GetCurrentThreadId()
1762 #define INITIAL_LOCK(sl) ((sl)->threadid = 0, (sl)->l = (sl)->c = 0, 0)
1763 #define ACQUIRE_LOCK(sl) win32_acquire_lock(sl)
1764 #define RELEASE_LOCK(sl) win32_release_lock(sl)
1765 #define TRY_LOCK(sl) win32_try_lock(sl)
1766 #define SPINS_PER_YIELD 63
1767
1768 static MLOCK_T malloc_global_mutex = { 0, 0, 0};
1769
1770 static FORCEINLINE int win32_acquire_lock (MLOCK_T *sl) {
1771 int spins = 0;
1772 for (;;) {
1773 if (sl->l != 0) {
1774 if (sl->threadid == CURRENT_THREAD) {
1775 ++sl->c;
1776 return 0;
1777 }
1778 }
1779 else {
1780 if (!interlockedexchange(&sl->l, 1)) {
1781 assert(!sl->threadid);
1782 sl->threadid = CURRENT_THREAD;
1783 sl->c = 1;
1784 return 0;
1785 }
1786 }
1787 if ((++spins & SPINS_PER_YIELD) == 0)
1788 SleepEx(0, FALSE);
1789 }
1790 }
1791
1792 static FORCEINLINE void win32_release_lock (MLOCK_T *sl) {
1793 assert(sl->threadid == CURRENT_THREAD);
1794 assert(sl->l != 0);
1795 if (--sl->c == 0) {
1796 sl->threadid = 0;
1797 interlockedexchange (&sl->l, 0);
1798 }
1799 }
1800
1801 static FORCEINLINE int win32_try_lock (MLOCK_T *sl) {
1802 if (sl->l != 0) {
1803 if (sl->threadid == CURRENT_THREAD) {
1804 ++sl->c;
1805 return 1;
1806 }
1807 }
1808 else {
1809 if (!interlockedexchange(&sl->l, 1)){
1810 assert(!sl->threadid);
1811 sl->threadid = CURRENT_THREAD;
1812 sl->c = 1;
1813 return 1;
1814 }
1815 }
1816 return 0;
1817 }
1818
1819 #endif /* WIN32 */
1820 #else /* USE_SPIN_LOCKS */
1821
1822 #ifndef WIN32
1823 /* pthreads-based locks */
1824
1825 #define MLOCK_T pthread_mutex_t
1826 #define CURRENT_THREAD pthread_self()
1827 #define INITIAL_LOCK(sl) pthread_init_lock(sl)
1828 #define ACQUIRE_LOCK(sl) pthread_mutex_lock(sl)
1829 #define RELEASE_LOCK(sl) pthread_mutex_unlock(sl)
1830 #define TRY_LOCK(sl) (!pthread_mutex_trylock(sl))
1831
1832 static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
1833
1834 /* Cope with old-style linux recursive lock initialization by adding */
1835 /* skipped internal declaration from pthread.h */
1836 #ifdef linux
1837 #ifndef PTHREAD_MUTEX_RECURSIVE
1838 extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
1839 int __kind));
1840 #define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
1841 #define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
1842 #endif
1843 #endif
1844
1845 static int pthread_init_lock (MLOCK_T *sl) {
1846 pthread_mutexattr_t attr;
1847 if (pthread_mutexattr_init(&attr)) return 1;
1848 if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
1849 if (pthread_mutex_init(sl, &attr)) return 1;
1850 if (pthread_mutexattr_destroy(&attr)) return 1;
1851 return 0;
1852 }
1853
1854 #else /* WIN32 */
1855 /* Win32 critical sections */
1856 #define MLOCK_T CRITICAL_SECTION
1857 #define CURRENT_THREAD GetCurrentThreadId()
1858 #define INITIAL_LOCK(sl)      (!InitializeCriticalSectionAndSpinCount((sl), 0x80000000|4000))
1859 #define ACQUIRE_LOCK(sl)      (EnterCriticalSection(sl), 0)
1860 #define RELEASE_LOCK(sl)      LeaveCriticalSection(sl)
1861 #define TRY_LOCK(sl)          TryEnterCriticalSection(sl)
1862 #define NEED_GLOBAL_LOCK_INIT
1863
1864 static MLOCK_T malloc_global_mutex;
1865 static volatile long malloc_global_mutex_status;
1866
1867 /* Use spin loop to initialize global lock */
1868 static void init_malloc_global_mutex() {
1869 for (;;) {
1870 long stat = malloc_global_mutex_status;
1871 if (stat > 0)
1872 return;
1873     /* transition to < 0 while initializing, then to > 0 */
1874 if (stat == 0 &&
1875 interlockedcompareexchange(&malloc_global_mutex_status, -1, 0) == 0) {
1876 InitializeCriticalSection(&malloc_global_mutex);
1877 interlockedexchange(&malloc_global_mutex_status,1);
1878 return;
1879 }
1880 SleepEx(0, FALSE);
1881 }
1882 }
1883
1884 #endif /* WIN32 */
1885 #endif /* USE_SPIN_LOCKS */
1886 #endif /* USE_LOCKS == 1 */
1887
1888 /* ----------------------- User-defined locks ------------------------ */
1889
1890 #if USE_LOCKS > 1
1891 /* Define your own lock implementation here */
1892 /* #define INITIAL_LOCK(sl) ... */
1893 /* #define ACQUIRE_LOCK(sl) ... */
1894 /* #define RELEASE_LOCK(sl) ... */
1895 /* #define TRY_LOCK(sl) ... */
1896 /* static MLOCK_T malloc_global_mutex = ... */
1897 #endif /* USE_LOCKS > 1 */
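
/*
  For exposition only: a minimal, untested sketch of the USE_LOCKS > 1
  contract above using pthreads. Per-mspace locks must be reentrant, so
  a recursive-mutex initializer (such as pthread_init_lock earlier in
  this file) would be needed in practice.

  #define MLOCK_T          pthread_mutex_t
  #define INITIAL_LOCK(sl) pthread_init_lock(sl)
  #define ACQUIRE_LOCK(sl) pthread_mutex_lock(sl)
  #define RELEASE_LOCK(sl) pthread_mutex_unlock(sl)
  #define TRY_LOCK(sl)     (!pthread_mutex_trylock(sl))
  static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
*/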
1898
1899 /* ----------------------- Lock-based state ------------------------ */
1900
1901 #if USE_LOCKS
1902 #define USE_LOCK_BIT (2U)
1903 #else /* USE_LOCKS */
1904 #define USE_LOCK_BIT (0U)
1905 #define INITIAL_LOCK(l)
1906 #endif /* USE_LOCKS */
1907
1908 #if USE_LOCKS
1909 #ifndef ACQUIRE_MALLOC_GLOBAL_LOCK
1910 #define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex);
1911 #endif
1912 #ifndef RELEASE_MALLOC_GLOBAL_LOCK
1913 #define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex);
1914 #endif
1915 #else /* USE_LOCKS */
1916 #define ACQUIRE_MALLOC_GLOBAL_LOCK()
1917 #define RELEASE_MALLOC_GLOBAL_LOCK()
1918 #endif /* USE_LOCKS */
1919
1920
1921 /* ----------------------- Chunk representations ------------------------ */
1922
1923 /*
1924 (The following includes lightly edited explanations by Colin Plumb.)
1925
1926 The malloc_chunk declaration below is misleading (but accurate and
1927 necessary). It declares a "view" into memory allowing access to
1928 necessary fields at known offsets from a given base.
1929
1930 Chunks of memory are maintained using a `boundary tag' method as
1931 originally described by Knuth. (See the paper by Paul Wilson
1932 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1933 techniques.) Sizes of free chunks are stored both in the front of
1934 each chunk and at the end. This makes consolidating fragmented
1935 chunks into bigger chunks fast. The head fields also hold bits
1936 representing whether chunks are free or in use.
1937
1938 Here are some pictures to make it clearer. They are "exploded" to
1939 show that the state of a chunk can be thought of as extending from
1940 the high 31 bits of the head field of its header through the
1941 prev_foot and PINUSE_BIT bit of the following chunk header.
1942
1943 A chunk that's in use looks like:
1944
1945 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1946 | Size of previous chunk (if P = 0) |
1947 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1948 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1949 | Size of this chunk 1| +-+
1950 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1951 | |
1952 +- -+
1953 | |
1954 +- -+
1955 | :
1956 +- size - sizeof(size_t) available payload bytes -+
1957 : |
1958 chunk-> +- -+
1959 | |
1960 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1961 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
1962 | Size of next chunk (may or may not be in use) | +-+
1963 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1964
1965 And if it's free, it looks like this:
1966
1967 chunk-> +- -+
1968 | User payload (must be in use, or we would have merged!) |
1969 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1970 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1971 | Size of this chunk 0| +-+
1972 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1973 | Next pointer |
1974 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1975 | Prev pointer |
1976 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1977 | :
1978 +- size - sizeof(struct chunk) unused bytes -+
1979 : |
1980 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1981 | Size of this chunk |
1982 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1983 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
1984 | Size of next chunk (must be in use, or we would have merged)| +-+
1985 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1986 | :
1987 +- User payload -+
1988 : |
1989 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1990 |0|
1991 +-+
1992 Note that since we always merge adjacent free chunks, the chunks
1993 adjacent to a free chunk must be in use.
1994
1995 Given a pointer to a chunk (which can be derived trivially from the
1996 payload pointer) we can, in O(1) time, find out whether the adjacent
1997 chunks are free, and if so, unlink them from the lists that they
1998 are on and merge them with the current chunk.
1999
2000 Chunks always begin on even word boundaries, so the mem portion
2001 (which is returned to the user) is also on an even word boundary, and
2002 thus at least double-word aligned.
2003
2004 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
2005 chunk size (which is always a multiple of two words), is an in-use
2006 bit for the *previous* chunk. If that bit is *clear*, then the
2007 word before the current chunk size contains the previous chunk
2008 size, and can be used to find the front of the previous chunk.
2009 The very first chunk allocated always has this bit set, preventing
2010 access to non-existent (or non-owned) memory. If pinuse is set for
2011 any given chunk, then you CANNOT determine the size of the
2012 previous chunk, and might even get a memory addressing fault when
2013 trying to do so.
2014
2015 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
2016 the chunk size redundantly records whether the current chunk is
2017 inuse (unless the chunk is mmapped). This redundancy enables usage
2018 checks within free and realloc, and reduces indirection when freeing
2019 and consolidating chunks.
2020
2021 Each freshly allocated chunk must have both cinuse and pinuse set.
2022 That is, each allocated chunk borders either a previously allocated
2023 and still in-use chunk, or the base of its memory arena. This is
2024   ensured by making all allocations from the `lowest' part of any
2025 found chunk. Further, no free chunk physically borders another one,
2026 so each free chunk is known to be preceded and followed by either
2027 inuse chunks or the ends of memory.
2028
2029 Note that the `foot' of the current chunk is actually represented
2030 as the prev_foot of the NEXT chunk. This makes it easier to
2031 deal with alignments etc but can be very confusing when trying
2032 to extend or adapt this code.
2033
2034 The exceptions to all this are
2035
2036 1. The special chunk `top' is the top-most available chunk (i.e.,
2037 the one bordering the end of available memory). It is treated
2038 specially. Top is never included in any bin, is used only if
2039 no other chunk is available, and is released back to the
2040 system if it is very large (see M_TRIM_THRESHOLD). In effect,
2041 the top chunk is treated as larger (and thus less well
2042 fitting) than any other available chunk. The top chunk
2043 doesn't update its trailing size field since there is no next
2044 contiguous chunk that would have to index off it. However,
2045 space is still allocated for it (TOP_FOOT_SIZE) to enable
2046 separation or merging when space is extended.
2047
2048      2. Chunks allocated via mmap have both cinuse and pinuse bits
2049 cleared in their head fields. Because they are allocated
2050 one-by-one, each must carry its own prev_foot field, which is
2051 also used to hold the offset this chunk has within its mmapped
2052 region, which is needed to preserve alignment. Each mmapped
2053 chunk is trailed by the first two fields of a fake next-chunk
2054 for sake of usage checks.
2055
2056 */
2057
2058 struct malloc_chunk {
2059 size_t prev_foot; /* Size of previous chunk (if free). */
2060 size_t head; /* Size and inuse bits. */
2061 struct malloc_chunk* fd; /* double links -- used only if free. */
2062 struct malloc_chunk* bk;
2063 };
2064
2065 typedef struct malloc_chunk mchunk;
2066 typedef struct malloc_chunk* mchunkptr;
2067 typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
2068 typedef unsigned int bindex_t; /* Described below */
2069 typedef unsigned int binmap_t; /* Described below */
2070 typedef unsigned int flag_t; /* The type of various bit flag sets */
2071
2072 /* ------------------- Chunks sizes and alignments ----------------------- */
2073
2074 #define MCHUNK_SIZE (sizeof(mchunk))
2075
2076 #if FOOTERS
2077 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2078 #else /* FOOTERS */
2079 #define CHUNK_OVERHEAD (SIZE_T_SIZE)
2080 #endif /* FOOTERS */
2081
2082 /* MMapped chunks need a second word of overhead ... */
2083 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2084 /* ... and additional padding for fake next-chunk at foot */
2085 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
2086
2087 /* The smallest size we can malloc is an aligned minimal chunk */
2088 #define MIN_CHUNK_SIZE\
2089 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2090
2091 /* conversion from malloc headers to user pointers, and back */
2092 #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
2093 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
2094 /* chunk associated with aligned address A */
2095 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
2096
2097 /* Bounds on request (not chunk) sizes. */
2098 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
2099 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
2100
2101 /* pad request bytes into a usable size */
2102 #define pad_request(req) \
2103 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2104
2105 /* pad request, checking for minimum (but not maximum) */
2106 #define request2size(req) \
2107 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
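
/*
  Worked example (illustrative; 4-byte size_t, no FOOTERS, default
  8-byte alignment, so CHUNK_OVERHEAD == 4 and MIN_CHUNK_SIZE == 16):
    request2size(0)  == 16   (every request yields at least a minimal chunk)
    request2size(12) == 16   (12 + 4 overhead rounds up to 16)
    request2size(13) == 24   (13 + 4 == 17 rounds up to the next multiple of 8)
*/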
2108
2109
2110 /* ------------------ Operations on head and foot fields ----------------- */
2111
2112 /*
2113   The head field of a chunk is or'ed with PINUSE_BIT when the previous
2114   adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in
2115 use, unless mmapped, in which case both bits are cleared.
2116
2117 FLAG4_BIT is not used by this malloc, but might be useful in extensions.
2118 */
2119
2120 #define PINUSE_BIT (SIZE_T_ONE)
2121 #define CINUSE_BIT (SIZE_T_TWO)
2122 #define FLAG4_BIT (SIZE_T_FOUR)
2123 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
2124 #define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
2125
2126 /* Head value for fenceposts */
2127 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
2128
2129 /* extraction of fields from head words */
2130 #define cinuse(p) ((p)->head & CINUSE_BIT)
2131 #define pinuse(p) ((p)->head & PINUSE_BIT)
2132 #define is_inuse(p) (((p)->head & INUSE_BITS) != PINUSE_BIT)
2133 #define is_mmapped(p) (((p)->head & INUSE_BITS) == 0)
2134
2135 #define chunksize(p) ((p)->head & ~(FLAG_BITS))
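
/*
  Worked example (illustrative): a 32-byte in-use chunk whose physical
  predecessor is also in use has head == 32|CINUSE_BIT|PINUSE_BIT == 35,
  so chunksize(p) == 32 and cinuse(p), pinuse(p), is_inuse(p) are all
  nonzero. An mmapped chunk has both bits clear: cinuse(p) == 0, yet
  is_inuse(p) still holds (0 != PINUSE_BIT), since mmapped chunks are in
  use even though their flag bits are cleared.
*/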
2136
2137 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
2138
2139 /* Treat space at ptr +/- offset as a chunk */
2140 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
2141 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
2142
2143 /* Ptr to next or previous physical malloc_chunk. */
2144 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
2145 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
2146
2147 /* extract next chunk's pinuse bit */
2148 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
2149
2150 /* Get/set size at footer */
2151 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
2152 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
2153
2154 /* Set size, pinuse bit, and foot */
2155 #define set_size_and_pinuse_of_free_chunk(p, s)\
2156 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
2157
2158 /* Set size, pinuse bit, foot, and clear next pinuse */
2159 #define set_free_with_pinuse(p, s, n)\
2160 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
2161
2162 /* Get the internal overhead associated with chunk p */
2163 #define overhead_for(p)\
2164 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
2165
2166 /* Return true if malloced space is not necessarily cleared */
2167 #if MMAP_CLEARS
2168 #define calloc_must_clear(p) (!is_mmapped(p))
2169 #else /* MMAP_CLEARS */
2170 #define calloc_must_clear(p) (1)
2171 #endif /* MMAP_CLEARS */
2172
2173 /* ---------------------- Overlaid data structures ----------------------- */
2174
2175 /*
2176 When chunks are not in use, they are treated as nodes of either
2177 lists or trees.
2178
2179 "Small" chunks are stored in circular doubly-linked lists, and look
2180 like this:
2181
2182 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2183 | Size of previous chunk |
2184 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2185 `head:' | Size of chunk, in bytes |P|
2186 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2187 | Forward pointer to next chunk in list |
2188 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2189 | Back pointer to previous chunk in list |
2190 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2191 | Unused space (may be 0 bytes long) .
2192 . .
2193 . |
2194 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2195 `foot:' | Size of chunk, in bytes |
2196 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2197
2198 Larger chunks are kept in a form of bitwise digital trees (aka
2199 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
2200 free chunks greater than 256 bytes, their size doesn't impose any
2201 constraints on user chunk sizes. Each node looks like:
2202
2203 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2204 | Size of previous chunk |
2205 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2206 `head:' | Size of chunk, in bytes |P|
2207 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2208 | Forward pointer to next chunk of same size |
2209 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2210 | Back pointer to previous chunk of same size |
2211 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2212 | Pointer to left child (child[0]) |
2213 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2214 | Pointer to right child (child[1]) |
2215 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2216 | Pointer to parent |
2217 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2218 | bin index of this chunk |
2219 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2220 | Unused space .
2221 . |
2222 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2223 `foot:' | Size of chunk, in bytes |
2224 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2225
2226 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
2227 of the same size are arranged in a circularly-linked list, with only
2228 the oldest chunk (the next to be used, in our FIFO ordering)
2229 actually in the tree. (Tree members are distinguished by a non-null
2230   parent pointer.) If a chunk with the same size as an existing node
2231 is inserted, it is linked off the existing node using pointers that
2232 work in the same way as fd/bk pointers of small chunks.
2233
2234 Each tree contains a power of 2 sized range of chunk sizes (the
2235   smallest is 0x100 <= x < 0x180), which is divided in half at each
2236   tree level, with the chunks in the smaller half of the range (0x100
2237   <= x < 0x140 for the top node) in the left subtree and the larger
2238 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
2239 done by inspecting individual bits.
2240
2241 Using these rules, each node's left subtree contains all smaller
2242 sizes than its right subtree. However, the node at the root of each
2243 subtree has no particular ordering relationship to either. (The
2244 dividing line between the subtree sizes is based on trie relation.)
2245 If we remove the last chunk of a given size from the interior of the
2246 tree, we need to replace it with a leaf node. The tree ordering
2247 rules permit a node to be replaced by any leaf below it.
2248
2249 The smallest chunk in a tree (a common operation in a best-fit
2250 allocator) can be found by walking a path to the leftmost leaf in
2251 the tree. Unlike a usual binary tree, where we follow left child
2252 pointers until we reach a null, here we follow the right child
2253 pointer any time the left one is null, until we reach a leaf with
2254 both child pointers null. The smallest chunk in the tree will be
2255 somewhere along that path.
2256
2257 The worst case number of steps to add, find, or remove a node is
2258 bounded by the number of bits differentiating chunks within
2259 bins. Under current bin calculations, this ranges from 6 up to 21
2260 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
2261 is of course much better.
2262 */
2263
2264 struct malloc_tree_chunk {
2265 /* The first four fields must be compatible with malloc_chunk */
2266 size_t prev_foot;
2267 size_t head;
2268 struct malloc_tree_chunk* fd;
2269 struct malloc_tree_chunk* bk;
2270
2271 struct malloc_tree_chunk* child[2];
2272 struct malloc_tree_chunk* parent;
2273 bindex_t index;
2274 };
2275
2276 typedef struct malloc_tree_chunk tchunk;
2277 typedef struct malloc_tree_chunk* tchunkptr;
2278 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
2279
2280 /* A little helper macro for trees */
2281 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
2282
2283 /* ----------------------------- Segments -------------------------------- */
2284
2285 /*
2286 Each malloc space may include non-contiguous segments, held in a
2287 list headed by an embedded malloc_segment record representing the
2288 top-most space. Segments also include flags holding properties of
2289 the space. Large chunks that are directly allocated by mmap are not
2290 included in this list. They are instead independently created and
2291 destroyed without otherwise keeping track of them.
2292
2293 Segment management mainly comes into play for spaces allocated by
2294 MMAP. Any call to MMAP might or might not return memory that is
2295 adjacent to an existing segment. MORECORE normally contiguously
2296 extends the current space, so this space is almost always adjacent,
2297 which is simpler and faster to deal with. (This is why MORECORE is
2298 used preferentially to MMAP when both are available -- see
2299 sys_alloc.) When allocating using MMAP, we don't use any of the
2300 hinting mechanisms (inconsistently) supported in various
2301 implementations of unix mmap, or distinguish reserving from
2302 committing memory. Instead, we just ask for space, and exploit
2303 contiguity when we get it. It is probably possible to do
2304 better than this on some systems, but no general scheme seems
2305 to be significantly better.
2306
2307 Management entails a simpler variant of the consolidation scheme
2308 used for chunks to reduce fragmentation -- new adjacent memory is
2309 normally prepended or appended to an existing segment. However,
2310 there are limitations compared to chunk consolidation that mostly
2311 reflect the fact that segment processing is relatively infrequent
2312 (occurring only when getting memory from system) and that we
2313 don't expect to have huge numbers of segments:
2314
2315 * Segments are not indexed, so traversal requires linear scans. (It
2316 would be possible to index these, but is not worth the extra
2317 overhead and complexity for most programs on most platforms.)
2318 * New segments are only appended to old ones when holding top-most
2319 memory; if they cannot be prepended to others, they are held in
2320 different segments.
2321
2322 Except for the top-most segment of an mstate, each segment record
2323 is kept at the tail of its segment. Segments are added by pushing
2324 segment records onto the list headed by &mstate.seg for the
2325 containing mstate.
2326
2327 Segment flags control allocation/merge/deallocation policies:
2328 * If EXTERN_BIT set, then we did not allocate this segment,
2329 and so should not try to deallocate or merge with others.
2330 (This currently holds only for the initial segment passed
2331 into create_mspace_with_base.)
2332 * If USE_MMAP_BIT set, the segment may be merged with
2333 other surrounding mmapped segments and trimmed/de-allocated
2334 using munmap.
2335 * If neither bit is set, then the segment was obtained using
2336 MORECORE so can be merged with surrounding MORECORE'd segments
2337 and deallocated/trimmed using MORECORE with negative arguments.
2338 */
2339
2340 struct malloc_segment {
2341 char* base; /* base address */
2342 size_t size; /* allocated size */
2343 struct malloc_segment* next; /* ptr to next segment */
2344 flag_t sflags; /* mmap and extern flag */
2345 };
2346
2347 #define is_mmapped_segment(S) ((S)->sflags & USE_MMAP_BIT)
2348 #define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
2349
2350 typedef struct malloc_segment msegment;
2351 typedef struct malloc_segment* msegmentptr;
2352
2353 /* ---------------------------- malloc_state ----------------------------- */
2354
2355 /*
2356 A malloc_state holds all of the bookkeeping for a space.
2357 The main fields are:
2358
2359 Top
2360 The topmost chunk of the currently active segment. Its size is
2361 cached in topsize. The actual size of topmost space is
2362 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2363 fenceposts and segment records if necessary when getting more
2364 space from the system. The size at which to autotrim top is
2365 cached from mparams in trim_check, except that it is disabled if
2366 an autotrim fails.
2367
2368 Designated victim (dv)
2369 This is the preferred chunk for servicing small requests that
2370 don't have exact fits. It is normally the chunk split off most
2371 recently to service another small request. Its size is cached in
2372 dvsize. The link fields of this chunk are not maintained since it
2373 is not kept in a bin.
2374
2375 SmallBins
2376 An array of bin headers for free chunks. These bins hold chunks
2377 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2378 chunks of all the same size, spaced 8 bytes apart. To simplify
2379 use in double-linked lists, each bin header acts as a malloc_chunk
2380 pointing to the real first node, if it exists (else pointing to
2381 itself). This avoids special-casing for headers. But to avoid
2382 waste, we allocate only the fd/bk pointers of bins, and then use
2383 repositioning tricks to treat these as the fields of a chunk.
2384
2385 TreeBins
2386 Treebins are pointers to the roots of trees holding a range of
2387 sizes. There are 2 equally spaced treebins for each power of two
2388     from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
2389 larger.
2390
2391 Bin maps
2392 There is one bit map for small bins ("smallmap") and one for
2393 treebins ("treemap). Each bin sets its bit when non-empty, and
2394 clears the bit when empty. Bit operations are then used to avoid
2395 bin-by-bin searching -- nearly all "search" is done without ever
2396 looking at bins that won't be selected. The bit maps
2397     conservatively use 32 bits per map word, even on 64-bit systems.
2398 For a good description of some of the bit-based techniques used
2399 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2400 supplement at http://hackersdelight.org/). Many of these are
2401 intended to reduce the branchiness of paths through malloc etc, as
2402 well as to reduce the number of memory locations read or written.
2403
2404 Segments
2405 A list of segments headed by an embedded malloc_segment record
2406 representing the initial space.
2407
2408 Address check support
2409 The least_addr field is the least address ever obtained from
2410 MORECORE or MMAP. Attempted frees and reallocs of any address less
2411 than this are trapped (unless INSECURE is defined).
2412
2413 Magic tag
2414 A cross-check field that should always hold same value as mparams.magic.
2415
2416 Flags
2417 Bits recording whether to use MMAP, locks, or contiguous MORECORE
2418
2419 Statistics
2420 Each space keeps track of current and maximum system memory
2421 obtained via MORECORE or MMAP.
2422
2423 Trim support
2424 Fields holding the amount of unused topmost memory that should trigger
2425     trimming, and a counter to force periodic scanning to release unused
2426 non-topmost segments.
2427
2428 Locking
2429 If USE_LOCKS is defined, the "mutex" lock is acquired and released
2430 around every public call using this mspace.
2431
2432 Extension support
2433 A void* pointer and a size_t field that can be used to help implement
2434 extensions to this malloc.
2435 */
2436
2437 /* Bin types, widths and sizes */
2438 #define NSMALLBINS (32U)
2439 #define NTREEBINS (32U)
2440 #define SMALLBIN_SHIFT (3U)
2441 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
2442 #define TREEBIN_SHIFT (8U)
2443 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
2444 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
2445 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2446
2447 struct malloc_state {
2448 binmap_t smallmap;
2449 binmap_t treemap;
2450 size_t dvsize;
2451 size_t topsize;
2452 char* least_addr;
2453 mchunkptr dv;
2454 mchunkptr top;
2455 size_t trim_check;
2456 size_t release_checks;
2457 size_t magic;
2458 mchunkptr smallbins[(NSMALLBINS+1)*2];
2459 tbinptr treebins[NTREEBINS];
2460 size_t footprint;
2461 size_t max_footprint;
2462 flag_t mflags;
2463 #if USE_LOCKS
2464 MLOCK_T mutex; /* locate lock among fields that rarely change */
2465 #endif /* USE_LOCKS */
2466 msegment seg;
2467 void* extp; /* Unused but available for extensions */
2468 size_t exts;
2469 };
2470
2471 typedef struct malloc_state* mstate;
2472
2473 /* ------------- Global malloc_state and malloc_params ------------------- */
2474
2475 /*
2476 malloc_params holds global properties, including those that can be
2477 dynamically set using mallopt. There is a single instance, mparams,
2478 initialized in init_mparams. Note that the non-zeroness of "magic"
2479 also serves as an initialization flag.
2480 */
2481
2482 struct malloc_params {
2483 volatile size_t magic;
2484 size_t page_size;
2485 size_t granularity;
2486 size_t mmap_threshold;
2487 size_t trim_threshold;
2488 flag_t default_mflags;
2489 };
2490
2491 static struct malloc_params mparams;
2492
2493 /* Ensure mparams initialized */
2494 #define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())
2495
2496 #if !ONLY_MSPACES
2497
2498 /* The global malloc_state used for all non-"mspace" calls */
2499 static struct malloc_state _gm_;
2500 #define gm (&_gm_)
2501 #define is_global(M) ((M) == &_gm_)
2502
2503 #endif /* !ONLY_MSPACES */
2504
2505 #define is_initialized(M) ((M)->top != 0)
2506
2507 /* -------------------------- system alloc setup ------------------------- */
2508
2509 /* Operations on mflags */
2510
2511 #define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2512 #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2513 #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2514
2515 #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2516 #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2517 #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2518
2519 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2520 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2521
2522 #define set_lock(M,L)\
2523 ((M)->mflags = (L)?\
2524 ((M)->mflags | USE_LOCK_BIT) :\
2525 ((M)->mflags & ~USE_LOCK_BIT))
2526
2527 /* page-align a size */
2528 #define page_align(S)\
2529 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
2530
2531 /* granularity-align a size */
2532 #define granularity_align(S)\
2533 (((S) + (mparams.granularity - SIZE_T_ONE))\
2534 & ~(mparams.granularity - SIZE_T_ONE))
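
/*
  Worked example (illustrative, 4096-byte pages): page_align(5000) ==
  8192 and page_align(4096) == 4096. granularity_align rounds the same
  way to mparams.granularity (on Windows, typically the 64KB
  VirtualAlloc allocation granularity).
*/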
2535
2536
2537 /* For mmap, use granularity alignment on windows, else page-align */
2538 #ifdef WIN32
2539 #define mmap_align(S) granularity_align(S)
2540 #else
2541 #define mmap_align(S) page_align(S)
2542 #endif
2543
2544 /* For sys_alloc, enough padding to ensure the request can be malloc'ed on success */
2545 #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
2546
2547 #define is_page_aligned(S)\
2548 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2549 #define is_granularity_aligned(S)\
2550 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2551
2552 /* True if segment S holds address A */
2553 #define segment_holds(S, A)\
2554 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2555
2556 /* Return segment holding given address */
2557 static msegmentptr segment_holding(mstate m, char* addr) {
2558 msegmentptr sp = &m->seg;
2559 for (;;) {
2560 if (addr >= sp->base && addr < sp->base + sp->size)
2561 return sp;
2562 if ((sp = sp->next) == 0)
2563 return 0;
2564 }
2565 }
2566
2567 /* Return true if segment contains a segment link */
2568 static int has_segment_link(mstate m, msegmentptr ss) {
2569 msegmentptr sp = &m->seg;
2570 for (;;) {
2571 if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2572 return 1;
2573 if ((sp = sp->next) == 0)
2574 return 0;
2575 }
2576 }
2577
2578 #ifndef MORECORE_CANNOT_TRIM
2579 #define should_trim(M,s) ((s) > (M)->trim_check)
2580 #else /* MORECORE_CANNOT_TRIM */
2581 #define should_trim(M,s) (0)
2582 #endif /* MORECORE_CANNOT_TRIM */
2583
2584 /*
2585 TOP_FOOT_SIZE is padding at the end of a segment, including space
2586 that may be needed to place segment records and fenceposts when new
2587 noncontiguous segments are added.
2588 */
2589 #define TOP_FOOT_SIZE\
2590 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2591
2592
2593 /* ------------------------------- Hooks -------------------------------- */
2594
2595 /*
2596 PREACTION should be defined to return 0 on success, and nonzero on
2597 failure. If you are not using locking, you can redefine these to do
2598 anything you like.
2599 */
2600
2601 #if USE_LOCKS
2602
2603 #define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2604 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2605 #else /* USE_LOCKS */
2606
2607 #ifndef PREACTION
2608 #define PREACTION(M) (0)
2609 #endif /* PREACTION */
2610
2611 #ifndef POSTACTION
2612 #define POSTACTION(M)
2613 #endif /* POSTACTION */
2614
2615 #endif /* USE_LOCKS */
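
/*
  Illustrative sketch: when locks are not used, PREACTION/POSTACTION can
  be redefined to wrap every top-level call, e.g. for tracing. Here
  my_trace_enter/my_trace_exit are hypothetical user functions:

  #define PREACTION(M)  (my_trace_enter(M), 0)  // must evaluate to 0 on success
  #define POSTACTION(M) my_trace_exit(M)
*/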
2616
2617 /*
2618 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2619 USAGE_ERROR_ACTION is triggered on detected bad frees and
2620 reallocs. The argument p is an address that might have triggered the
2621 fault. It is ignored by the two predefined actions, but might be
2622 useful in custom actions that try to help diagnose errors.
2623 */
2624
2625 #if PROCEED_ON_ERROR
2626
2627 /* A count of the number of corruption errors causing resets */
2628 int malloc_corruption_error_count;
2629
2630 /* default corruption action */
2631 static void reset_on_error(mstate m);
2632
2633 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2634 #define USAGE_ERROR_ACTION(m, p)
2635
2636 #else /* PROCEED_ON_ERROR */
2637
2638 #ifndef CORRUPTION_ERROR_ACTION
2639 #define CORRUPTION_ERROR_ACTION(m) ABORT
2640 #endif /* CORRUPTION_ERROR_ACTION */
2641
2642 #ifndef USAGE_ERROR_ACTION
2643 #define USAGE_ERROR_ACTION(m,p) ABORT
2644 #endif /* USAGE_ERROR_ACTION */
2645
2646 #endif /* PROCEED_ON_ERROR */
2647
2648 /* -------------------------- Debugging setup ---------------------------- */
2649
2650 #if ! DEBUG
2651
2652 #define check_free_chunk(M,P)
2653 #define check_inuse_chunk(M,P)
2654 #define check_malloced_chunk(M,P,N)
2655 #define check_mmapped_chunk(M,P)
2656 #define check_malloc_state(M)
2657 #define check_top_chunk(M,P)
2658
2659 #else /* DEBUG */
2660 #define check_free_chunk(M,P) do_check_free_chunk(M,P)
2661 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2662 #define check_top_chunk(M,P) do_check_top_chunk(M,P)
2663 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2664 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2665 #define check_malloc_state(M) do_check_malloc_state(M)
2666
2667 static void do_check_any_chunk(mstate m, mchunkptr p);
2668 static void do_check_top_chunk(mstate m, mchunkptr p);
2669 static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2670 static void do_check_inuse_chunk(mstate m, mchunkptr p);
2671 static void do_check_free_chunk(mstate m, mchunkptr p);
2672 static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
2673 static void do_check_tree(mstate m, tchunkptr t);
2674 static void do_check_treebin(mstate m, bindex_t i);
2675 static void do_check_smallbin(mstate m, bindex_t i);
2676 static void do_check_malloc_state(mstate m);
2677 static int bin_find(mstate m, mchunkptr x);
2678 static size_t traverse_and_check(mstate m);
2679 #endif /* DEBUG */
2680
2681 /* ---------------------------- Indexing Bins ---------------------------- */
2682
2683 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2684 #define small_index(s) ((s) >> SMALLBIN_SHIFT)
2685 #define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2686 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
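
/*
  Worked example (illustrative, default SMALLBIN_SHIFT == 3): a 64-byte
  chunk lives in smallbin 8, since small_index(64) == 64 >> 3 == 8 and
  small_index2size(8) == 64. is_small(248) is true, but is_small(256) is
  false: 256-byte chunks go to the treebins instead.
*/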
2687
2688 /* addressing by index. See above about smallbin repositioning */
2689 #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2690 #define treebin_at(M,i) (&((M)->treebins[i]))
2691
2692 /* assign tree index for size S to variable I. Use x86 asm if possible */
2693 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2694 #define compute_tree_index(S, I)\
2695 {\
2696 unsigned int X = S >> TREEBIN_SHIFT;\
2697 if (X == 0)\
2698 I = 0;\
2699 else if (X > 0xFFFF)\
2700 I = NTREEBINS-1;\
2701 else {\
2702 unsigned int K;\
2703 __asm__("bsrl\t%1, %0\n\t" : "=r" (K) : "g" (X));\
2704 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2705 }\
2706 }
2707
2708 #elif defined (__INTEL_COMPILER)
2709 #define compute_tree_index(S, I)\
2710 {\
2711 size_t X = S >> TREEBIN_SHIFT;\
2712 if (X == 0)\
2713 I = 0;\
2714 else if (X > 0xFFFF)\
2715 I = NTREEBINS-1;\
2716 else {\
2717 unsigned int K = _bit_scan_reverse (X); \
2718 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2719 }\
2720 }
2721
2722 #elif defined(_MSC_VER) && _MSC_VER>=1300
2723 #define compute_tree_index(S, I)\
2724 {\
2725 size_t X = S >> TREEBIN_SHIFT;\
2726 if (X == 0)\
2727 I = 0;\
2728 else if (X > 0xFFFF)\
2729 I = NTREEBINS-1;\
2730 else {\
2731 unsigned int K;\
2732 _BitScanReverse((DWORD *) &K, X);\
2733 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2734 }\
2735 }
2736
2737 #else /* GNUC */
2738 #define compute_tree_index(S, I)\
2739 {\
2740 size_t X = S >> TREEBIN_SHIFT;\
2741 if (X == 0)\
2742 I = 0;\
2743 else if (X > 0xFFFF)\
2744 I = NTREEBINS-1;\
2745 else {\
2746 unsigned int Y = (unsigned int)X;\
2747 unsigned int N = ((Y - 0x100) >> 16) & 8;\
2748 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2749 N += K;\
2750 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2751 K = 14 - N + ((Y <<= K) >> 15);\
2752 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2753 }\
2754 }
2755 #endif /* GNUC */
2756
2757 /* Bit representing maximum resolved size in a treebin at i */
2758 #define bit_for_tree_index(i) \
2759 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2760
2761 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2762 #define leftshift_for_tree_index(i) \
2763 ((i == NTREEBINS-1)? 0 : \
2764 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2765
2766 /* The size of the smallest chunk held in bin with index i */
2767 #define minsize_for_tree_index(i) \
2768 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2769 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
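
/*
  Worked example (illustrative): for S == 768, X == 768 >> TREEBIN_SHIFT
  == 3, whose highest set bit gives K == 1, so
  I == (K << 1) + ((S >> (K + TREEBIN_SHIFT - 1)) & 1) == 2 + 1 == 3.
  Consistently, minsize_for_tree_index(3) == (1 << 9) | (1 << 8) == 768;
  treebin 3 holds sizes 768..1023 while treebin 2 holds 512..767.
*/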
2770
2771
2772 /* ------------------------ Operations on bin maps ----------------------- */
2773
2774 /* bit corresponding to given index */
2775 #define idx2bit(i) ((binmap_t)(1) << (i))
2776
2777 /* Mark/Clear bits with given index */
2778 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2779 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2780 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2781
2782 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2783 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2784 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2785
2786 /* isolate the least set bit of a bitmap */
2787 #define least_bit(x) ((x) & -(x))
2788
2789 /* mask with all bits to left of least bit of x on */
2790 #define left_bits(x) ((x<<1) | -(x<<1))
2791
2792 /* mask with all bits to left of or equal to least bit of x on */
2793 #define same_or_left_bits(x) ((x) | -(x))
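
/*
  Worked example (illustrative, 32-bit maps): for x == 0x68 (bits 3, 5,
  and 6 set),
    least_bit(x)            == 0x08        (lowest set bit isolated)
    left_bits(0x08)         == 0xFFFFFFF0  (all bits strictly above bit 3)
    same_or_left_bits(0x08) == 0xFFFFFFF8
  These let the allocator jump to the nearest non-empty bin at or above
  a target index in a few instructions instead of scanning bin by bin.
*/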
2794
2795 /* index corresponding to given bit. Use x86 asm if possible */
2796
2797 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2798 #define compute_bit2idx(X, I)\
2799 {\
2800 unsigned int J;\
2801 __asm__("bsfl\t%1, %0\n\t" : "=r" (J) : "g" (X));\
2802 I = (bindex_t)J;\
2803 }
2804
2805 #elif defined (__INTEL_COMPILER)
2806 #define compute_bit2idx(X, I)\
2807 {\
2808 unsigned int J;\
2809 J = _bit_scan_forward (X); \
2810 I = (bindex_t)J;\
2811 }
2812
2813 #elif defined(_MSC_VER) && _MSC_VER>=1300
2814 #define compute_bit2idx(X, I)\
2815 {\
2816 unsigned int J;\
2817 _BitScanForward((DWORD *) &J, X);\
2818 I = (bindex_t)J;\
2819 }
2820
2821 #elif USE_BUILTIN_FFS
2822 #define compute_bit2idx(X, I) I = ffs(X)-1
2823
2824 #else
2825 #define compute_bit2idx(X, I)\
2826 {\
2827 unsigned int Y = X - 1;\
2828 unsigned int K = Y >> (16-4) & 16;\
2829 unsigned int N = K; Y >>= K;\
2830 N += K = Y >> (8-3) & 8; Y >>= K;\
2831 N += K = Y >> (4-2) & 4; Y >>= K;\
2832 N += K = Y >> (2-1) & 2; Y >>= K;\
2833 N += K = Y >> (1-0) & 1; Y >>= K;\
2834 I = (bindex_t)(N + Y);\
2835 }
2836 #endif /* GNUC */
2837
2838
2839 /* ----------------------- Runtime Check Support ------------------------- */
2840
2841 /*
2842 For security, the main invariant is that malloc/free/etc never
2843 writes to a static address other than malloc_state, unless static
2844 malloc_state itself has been corrupted, which cannot occur via
2845 malloc (because of these checks). In essence this means that we
2846 believe all pointers, sizes, maps etc held in malloc_state, but
2847   check all of those linked or offset from other embedded data
2848 structures. These checks are interspersed with main code in a way
2849 that tends to minimize their run-time cost.
2850
2851 When FOOTERS is defined, in addition to range checking, we also
2852   verify footer fields of inuse chunks, which can be used to guarantee
2853 that the mstate controlling malloc/free is intact. This is a
2854 streamlined version of the approach described by William Robertson
2855 et al in "Run-time Detection of Heap-based Overflows" LISA'03
2856   http://www.usenix.org/events/lisa03/tech/robertson.html. The footer
2857 of an inuse chunk holds the xor of its mstate and a random seed,
2858 that is checked upon calls to free() and realloc(). This is
2859   (probabilistically) unguessable from outside the program, but can be
2860 computed by any code successfully malloc'ing any chunk, so does not
2861 itself provide protection against code that has already broken
2862 security through some other means. Unlike Robertson et al, we
2863 always dynamically check addresses of all offset chunks (previous,
2864 next, etc). This turns out to be cheaper than relying on hashes.
2865 */
2866
2867 #if !INSECURE
2868 /* Check if address a is at least as high as any from MORECORE or MMAP */
2869 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2870 /* Check if address of next chunk n is higher than base chunk p */
2871 #define ok_next(p, n) ((char*)(p) < (char*)(n))
2872 /* Check if p has inuse status */
2873 #define ok_inuse(p) is_inuse(p)
2874 /* Check if p has its pinuse bit on */
2875 #define ok_pinuse(p) pinuse(p)
2876
2877 #else /* !INSECURE */
2878 #define ok_address(M, a) (1)
2879 #define ok_next(b, n) (1)
2880 #define ok_inuse(p) (1)
2881 #define ok_pinuse(p) (1)
2882 #endif /* !INSECURE */
2883
2884 #if (FOOTERS && !INSECURE)
2885 /* Check if (alleged) mstate m has expected magic field */
2886 #define ok_magic(M) ((M)->magic == mparams.magic)
2887 #else /* (FOOTERS && !INSECURE) */
2888 #define ok_magic(M) (1)
2889 #endif /* (FOOTERS && !INSECURE) */
2890
2891
2892 /* In gcc, use __builtin_expect to minimize impact of checks */
2893 #if !INSECURE
2894 #if defined(__GNUC__) && __GNUC__ >= 3
2895 #define RTCHECK(e) __builtin_expect(e, 1)
2896 #else /* GNUC */
2897 #define RTCHECK(e) (e)
2898 #endif /* GNUC */
2899 #else /* !INSECURE */
2900 #define RTCHECK(e) (1)
2901 #endif /* !INSECURE */
2902
2903 /* macros to set up inuse chunks with or without footers */
2904
2905 #if !FOOTERS
2906
2907 #define mark_inuse_foot(M,p,s)
2908
2909 /* Macros for setting head/foot of non-mmapped chunks */
2910
2911 /* Set cinuse bit and pinuse bit of next chunk */
2912 #define set_inuse(M,p,s)\
2913 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2914 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2915
2916 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2917 #define set_inuse_and_pinuse(M,p,s)\
2918 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2919 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2920
2921 /* Set size, cinuse and pinuse bit of this chunk */
2922 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2923 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2924
2925 #else /* FOOTERS */
2926
2927 /* Set foot of inuse chunk to be xor of mstate and seed */
2928 #define mark_inuse_foot(M,p,s)\
2929 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2930
2931 #define get_mstate_for(p)\
2932 ((mstate)(((mchunkptr)((char*)(p) +\
2933 (chunksize(p))))->prev_foot ^ mparams.magic))
2934
2935 #define set_inuse(M,p,s)\
2936 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2937 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2938 mark_inuse_foot(M,p,s))
2939
2940 #define set_inuse_and_pinuse(M,p,s)\
2941 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2942 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
2943 mark_inuse_foot(M,p,s))
2944
2945 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2946 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2947 mark_inuse_foot(M, p, s))
2948
2949 #endif /* !FOOTERS */
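/*
  Editorial sketch of the FOOTERS round trip described above: the
  footer of an inuse chunk stores (mstate ^ magic), so a later call
  can recover the owning mstate and validate it before trusting it,
  which is exactly what free() and realloc() do.
*/
#if 0
  /* given an inuse chunk p previously stamped by mark_inuse_foot(m, p, s): */
  mstate fm = get_mstate_for(p);        /* recovers m if footer intact */
  if (!ok_magic(fm)) {                  /* magic mismatch => corruption */
    USAGE_ERROR_ACTION(fm, chunk2mem(p));
  }
#endif /* example */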
2950
2951 /* ---------------------------- setting mparams -------------------------- */
2952
2953 /* Initialize mparams */
2954 static int init_mparams(void) {
2955 #ifdef NEED_GLOBAL_LOCK_INIT
2956 if (malloc_global_mutex_status <= 0)
2957 init_malloc_global_mutex();
2958 #endif
2959
2960 ACQUIRE_MALLOC_GLOBAL_LOCK();
2961 if (mparams.magic == 0) {
2962 size_t magic;
2963 size_t psize;
2964 size_t gsize;
2965
2966 #ifndef WIN32
2967 psize = malloc_getpagesize;
2968 gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
2969 #else /* WIN32 */
2970 {
2971 SYSTEM_INFO system_info;
2972 GetSystemInfo(&system_info);
2973 psize = system_info.dwPageSize;
2974 gsize = ((DEFAULT_GRANULARITY != 0)?
2975 DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
2976 }
2977 #endif /* WIN32 */
2978
2979 /* Sanity-check configuration:
2980 size_t must be unsigned and as wide as pointer type.
2981 ints must be at least 4 bytes.
2982 alignment must be at least 8.
2983 Alignment, min chunk size, and page size must all be powers of 2.
2984 */
2985 if ((sizeof(size_t) != sizeof(char*)) ||
2986 (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
2987 (sizeof(int) < 4) ||
2988 (MALLOC_ALIGNMENT < (size_t)8U) ||
2989 ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
2990 ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
2991 ((gsize & (gsize-SIZE_T_ONE)) != 0) ||
2992 ((psize & (psize-SIZE_T_ONE)) != 0))
2993 ABORT;
2994
2995 mparams.granularity = gsize;
2996 mparams.page_size = psize;
2997 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2998 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
2999 #if MORECORE_CONTIGUOUS
3000 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
3001 #else /* MORECORE_CONTIGUOUS */
3002 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
3003 #endif /* MORECORE_CONTIGUOUS */
3004
3005 #if !ONLY_MSPACES
3006 /* Set up lock for main malloc area */
3007 gm->mflags = mparams.default_mflags;
3008 INITIAL_LOCK(&gm->mutex);
3009 #endif
3010
3011 {
3012 #if USE_DEV_RANDOM
3013 int fd;
3014 unsigned char buf[sizeof(size_t)];
3015 /* Try to use /dev/urandom, else fall back on using time */
3016 if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
3017 read(fd, buf, sizeof(buf)) == sizeof(buf)) {
3018 magic = *((size_t *) buf);
3019 close(fd);
3020 }
3021 else
3022 #endif /* USE_DEV_RANDOM */
3023 #ifdef WIN32
3024 magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
3025 #else
3026 magic = (size_t)(time(0) ^ (size_t)0x55555555U);
3027 #endif
3028 magic |= (size_t)8U; /* ensure nonzero */
3029 magic &= ~(size_t)7U; /* improve chances of fault for bad values */
3030 mparams.magic = magic;
3031 }
3032 }
3033
3034 RELEASE_MALLOC_GLOBAL_LOCK();
3035 return 1;
3036 }
3037
3038 /* support for mallopt */
3039 static int change_mparam(int param_number, int value) {
3040 size_t val;
3041 ensure_initialization();
3042 val = (value == -1)? MAX_SIZE_T : (size_t)value;
3043 switch(param_number) {
3044 case M_TRIM_THRESHOLD:
3045 mparams.trim_threshold = val;
3046 return 1;
3047 case M_GRANULARITY:
3048 if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
3049 mparams.granularity = val;
3050 return 1;
3051 }
3052 else
3053 return 0;
3054 case M_MMAP_THRESHOLD:
3055 mparams.mmap_threshold = val;
3056 return 1;
3057 default:
3058 return 0;
3059 }
3060 }
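/*
  Editorial usage sketch (assuming the default dl-prefixed public
  names defined later in this file): dlmallopt relays here, so the
  checks above determine what is accepted.
*/
#if 0
  dlmallopt(M_GRANULARITY,   128 * 1024); /* accepted: power of 2, >= page size */
  dlmallopt(M_GRANULARITY,   100 * 1024); /* rejected (returns 0): not a power of 2 */
  dlmallopt(M_TRIM_THRESHOLD, -1);        /* -1 maps to MAX_SIZE_T: never trim */
#endif /* example */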
3061
3062 #if DEBUG
3063 /* ------------------------- Debugging Support --------------------------- */
3064
3065 /* Check properties of any chunk, whether free, inuse, mmapped etc */
3066 static void do_check_any_chunk(mstate m, mchunkptr p) {
3067 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3068 assert(ok_address(m, p));
3069 }
3070
3071 /* Check properties of top chunk */
3072 static void do_check_top_chunk(mstate m, mchunkptr p) {
3073 msegmentptr sp = segment_holding(m, (char*)p);
3074 size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
3075 assert(sp != 0);
3076 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3077 assert(ok_address(m, p));
3078 assert(sz == m->topsize);
3079 assert(sz > 0);
3080 assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
3081 assert(pinuse(p));
3082 assert(!pinuse(chunk_plus_offset(p, sz)));
3083 }
3084
3085 /* Check properties of (inuse) mmapped chunks */
3086 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
3087 size_t sz = chunksize(p);
3088 size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
3089 assert(is_mmapped(p));
3090 assert(use_mmap(m));
3091 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3092 assert(ok_address(m, p));
3093 assert(!is_small(sz));
3094 assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
3095 assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
3096 assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
3097 }
3098
3099 /* Check properties of inuse chunks */
3100 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
3101 do_check_any_chunk(m, p);
3102 assert(is_inuse(p));
3103 assert(next_pinuse(p));
3104 /* If not pinuse and not mmapped, previous chunk has OK offset */
3105 assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
3106 if (is_mmapped(p))
3107 do_check_mmapped_chunk(m, p);
3108 }
3109
3110 /* Check properties of free chunks */
3111 static void do_check_free_chunk(mstate m, mchunkptr p) {
3112 size_t sz = chunksize(p);
3113 mchunkptr next = chunk_plus_offset(p, sz);
3114 do_check_any_chunk(m, p);
3115 assert(!is_inuse(p));
3116 assert(!next_pinuse(p));
3117 assert (!is_mmapped(p));
3118 if (p != m->dv && p != m->top) {
3119 if (sz >= MIN_CHUNK_SIZE) {
3120 assert((sz & CHUNK_ALIGN_MASK) == 0);
3121 assert(is_aligned(chunk2mem(p)));
3122 assert(next->prev_foot == sz);
3123 assert(pinuse(p));
3124 assert (next == m->top || is_inuse(next));
3125 assert(p->fd->bk == p);
3126 assert(p->bk->fd == p);
3127 }
3128 else /* markers are always of size SIZE_T_SIZE */
3129 assert(sz == SIZE_T_SIZE);
3130 }
3131 }
3132
3133 /* Check properties of malloced chunks at the point they are malloced */
3134 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
3135 if (mem != 0) {
3136 mchunkptr p = mem2chunk(mem);
3137 size_t sz = p->head & ~INUSE_BITS;
3138 do_check_inuse_chunk(m, p);
3139 assert((sz & CHUNK_ALIGN_MASK) == 0);
3140 assert(sz >= MIN_CHUNK_SIZE);
3141 assert(sz >= s);
3142 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
3143 assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
3144 }
3145 }
3146
3147 /* Check a tree and its subtrees. */
3148 static void do_check_tree(mstate m, tchunkptr t) {
3149 tchunkptr head = 0;
3150 tchunkptr u = t;
3151 bindex_t tindex = t->index;
3152 size_t tsize = chunksize(t);
3153 bindex_t idx;
3154 compute_tree_index(tsize, idx);
3155 assert(tindex == idx);
3156 assert(tsize >= MIN_LARGE_SIZE);
3157 assert(tsize >= minsize_for_tree_index(idx));
3158 assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
3159
3160 do { /* traverse through chain of same-sized nodes */
3161 do_check_any_chunk(m, ((mchunkptr)u));
3162 assert(u->index == tindex);
3163 assert(chunksize(u) == tsize);
3164 assert(!is_inuse(u));
3165 assert(!next_pinuse(u));
3166 assert(u->fd->bk == u);
3167 assert(u->bk->fd == u);
3168 if (u->parent == 0) {
3169 assert(u->child[0] == 0);
3170 assert(u->child[1] == 0);
3171 }
3172 else {
3173 assert(head == 0); /* only one node on chain has parent */
3174 head = u;
3175 assert(u->parent != u);
3176 assert (u->parent->child[0] == u ||
3177 u->parent->child[1] == u ||
3178 *((tbinptr*)(u->parent)) == u);
3179 if (u->child[0] != 0) {
3180 assert(u->child[0]->parent == u);
3181 assert(u->child[0] != u);
3182 do_check_tree(m, u->child[0]);
3183 }
3184 if (u->child[1] != 0) {
3185 assert(u->child[1]->parent == u);
3186 assert(u->child[1] != u);
3187 do_check_tree(m, u->child[1]);
3188 }
3189 if (u->child[0] != 0 && u->child[1] != 0) {
3190 assert(chunksize(u->child[0]) < chunksize(u->child[1]));
3191 }
3192 }
3193 u = u->fd;
3194 } while (u != t);
3195 assert(head != 0);
3196 }
3197
3198 /* Check all the chunks in a treebin. */
3199 static void do_check_treebin(mstate m, bindex_t i) {
3200 tbinptr* tb = treebin_at(m, i);
3201 tchunkptr t = *tb;
3202 int empty = (m->treemap & (1U << i)) == 0;
3203 if (t == 0)
3204 assert(empty);
3205 if (!empty)
3206 do_check_tree(m, t);
3207 }
3208
3209 /* Check all the chunks in a smallbin. */
3210 static void do_check_smallbin(mstate m, bindex_t i) {
3211 sbinptr b = smallbin_at(m, i);
3212 mchunkptr p = b->bk;
3213 unsigned int empty = (m->smallmap & (1U << i)) == 0;
3214 if (p == b)
3215 assert(empty);
3216 if (!empty) {
3217 for (; p != b; p = p->bk) {
3218 size_t size = chunksize(p);
3219 mchunkptr q;
3220 /* each chunk claims to be free */
3221 do_check_free_chunk(m, p);
3222 /* chunk belongs in bin */
3223 assert(small_index(size) == i);
3224 assert(p->bk == b || chunksize(p->bk) == chunksize(p));
3225 /* chunk is followed by an inuse chunk */
3226 q = next_chunk(p);
3227 if (q->head != FENCEPOST_HEAD)
3228 do_check_inuse_chunk(m, q);
3229 }
3230 }
3231 }
3232
3233 /* Find x in a bin. Used in other check functions. */
3234 static int bin_find(mstate m, mchunkptr x) {
3235 size_t size = chunksize(x);
3236 if (is_small(size)) {
3237 bindex_t sidx = small_index(size);
3238 sbinptr b = smallbin_at(m, sidx);
3239 if (smallmap_is_marked(m, sidx)) {
3240 mchunkptr p = b;
3241 do {
3242 if (p == x)
3243 return 1;
3244 } while ((p = p->fd) != b);
3245 }
3246 }
3247 else {
3248 bindex_t tidx;
3249 compute_tree_index(size, tidx);
3250 if (treemap_is_marked(m, tidx)) {
3251 tchunkptr t = *treebin_at(m, tidx);
3252 size_t sizebits = size << leftshift_for_tree_index(tidx);
3253 while (t != 0 && chunksize(t) != size) {
3254 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3255 sizebits <<= 1;
3256 }
3257 if (t != 0) {
3258 tchunkptr u = t;
3259 do {
3260 if (u == (tchunkptr)x)
3261 return 1;
3262 } while ((u = u->fd) != t);
3263 }
3264 }
3265 }
3266 return 0;
3267 }
3268
3269 /* Traverse each chunk and check it; return total */
3270 static size_t traverse_and_check(mstate m) {
3271 size_t sum = 0;
3272 if (is_initialized(m)) {
3273 msegmentptr s = &m->seg;
3274 sum += m->topsize + TOP_FOOT_SIZE;
3275 while (s != 0) {
3276 mchunkptr q = align_as_chunk(s->base);
3277 mchunkptr lastq = 0;
3278 assert(pinuse(q));
3279 while (segment_holds(s, q) &&
3280 q != m->top && q->head != FENCEPOST_HEAD) {
3281 sum += chunksize(q);
3282 if (is_inuse(q)) {
3283 assert(!bin_find(m, q));
3284 do_check_inuse_chunk(m, q);
3285 }
3286 else {
3287 assert(q == m->dv || bin_find(m, q));
3288 assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */
3289 do_check_free_chunk(m, q);
3290 }
3291 lastq = q;
3292 q = next_chunk(q);
3293 }
3294 s = s->next;
3295 }
3296 }
3297 return sum;
3298 }
3299
3300 /* Check all properties of malloc_state. */
3301 static void do_check_malloc_state(mstate m) {
3302 bindex_t i;
3303 size_t total;
3304 /* check bins */
3305 for (i = 0; i < NSMALLBINS; ++i)
3306 do_check_smallbin(m, i);
3307 for (i = 0; i < NTREEBINS; ++i)
3308 do_check_treebin(m, i);
3309
3310 if (m->dvsize != 0) { /* check dv chunk */
3311 do_check_any_chunk(m, m->dv);
3312 assert(m->dvsize == chunksize(m->dv));
3313 assert(m->dvsize >= MIN_CHUNK_SIZE);
3314 assert(bin_find(m, m->dv) == 0);
3315 }
3316
3317 if (m->top != 0) { /* check top chunk */
3318 do_check_top_chunk(m, m->top);
3319 /*assert(m->topsize == chunksize(m->top)); redundant */
3320 assert(m->topsize > 0);
3321 assert(bin_find(m, m->top) == 0);
3322 }
3323
3324 total = traverse_and_check(m);
3325 assert(total <= m->footprint);
3326 assert(m->footprint <= m->max_footprint);
3327 }
3328 #endif /* DEBUG */
3329
3330 /* ----------------------------- statistics ------------------------------ */
3331
3332 #if !NO_MALLINFO
3333 static struct mallinfo internal_mallinfo(mstate m) {
3334 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
3335 ensure_initialization();
3336 if (!PREACTION(m)) {
3337 check_malloc_state(m);
3338 if (is_initialized(m)) {
3339 size_t nfree = SIZE_T_ONE; /* top always free */
3340 size_t mfree = m->topsize + TOP_FOOT_SIZE;
3341 size_t sum = mfree;
3342 msegmentptr s = &m->seg;
3343 while (s != 0) {
3344 mchunkptr q = align_as_chunk(s->base);
3345 while (segment_holds(s, q) &&
3346 q != m->top && q->head != FENCEPOST_HEAD) {
3347 size_t sz = chunksize(q);
3348 sum += sz;
3349 if (!is_inuse(q)) {
3350 mfree += sz;
3351 ++nfree;
3352 }
3353 q = next_chunk(q);
3354 }
3355 s = s->next;
3356 }
3357
3358 nm.arena = sum;
3359 nm.ordblks = nfree;
3360 nm.hblkhd = m->footprint - sum;
3361 nm.usmblks = m->max_footprint;
3362 nm.uordblks = m->footprint - mfree;
3363 nm.fordblks = mfree;
3364 nm.keepcost = m->topsize;
3365 }
3366
3367 POSTACTION(m);
3368 }
3369 return nm;
3370 }
3371 #endif /* !NO_MALLINFO */
3372
3373 static void internal_malloc_stats(mstate m) {
3374 ensure_initialization();
3375 if (!PREACTION(m)) {
3376 size_t maxfp = 0;
3377 size_t fp = 0;
3378 size_t used = 0;
3379 check_malloc_state(m);
3380 if (is_initialized(m)) {
3381 msegmentptr s = &m->seg;
3382 maxfp = m->max_footprint;
3383 fp = m->footprint;
3384 used = fp - (m->topsize + TOP_FOOT_SIZE);
3385
3386 while (s != 0) {
3387 mchunkptr q = align_as_chunk(s->base);
3388 while (segment_holds(s, q) &&
3389 q != m->top && q->head != FENCEPOST_HEAD) {
3390 if (!is_inuse(q))
3391 used -= chunksize(q);
3392 q = next_chunk(q);
3393 }
3394 s = s->next;
3395 }
3396 }
3397
3398 fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
3399 fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
3400 fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
3401
3402 POSTACTION(m);
3403 }
3404 }
3405
3406 /* ----------------------- Operations on smallbins ----------------------- */
3407
3408 /*
3409 Various forms of linking and unlinking are defined as macros. Even
3410 the ones for trees, which are very long but have very short typical
3411 paths. This is ugly but reduces reliance on inlining support of
3412 compilers.
3413 */
3414
3415 /* Link a free chunk into a smallbin */
3416 #define insert_small_chunk(M, P, S) {\
3417 bindex_t I = small_index(S);\
3418 mchunkptr B = smallbin_at(M, I);\
3419 mchunkptr F = B;\
3420 assert(S >= MIN_CHUNK_SIZE);\
3421 if (!smallmap_is_marked(M, I))\
3422 mark_smallmap(M, I);\
3423 else if (RTCHECK(ok_address(M, B->fd)))\
3424 F = B->fd;\
3425 else {\
3426 CORRUPTION_ERROR_ACTION(M);\
3427 }\
3428 B->fd = P;\
3429 F->bk = P;\
3430 P->fd = F;\
3431 P->bk = B;\
3432 }
3433
3434 /* Unlink a chunk from a smallbin */
3435 #define unlink_small_chunk(M, P, S) {\
3436 mchunkptr F = P->fd;\
3437 mchunkptr B = P->bk;\
3438 bindex_t I = small_index(S);\
3439 assert(P != B);\
3440 assert(P != F);\
3441 assert(chunksize(P) == small_index2size(I));\
3442 if (F == B)\
3443 clear_smallmap(M, I);\
3444 else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
3445 (B == smallbin_at(M,I) || ok_address(M, B)))) {\
3446 F->bk = B;\
3447 B->fd = F;\
3448 }\
3449 else {\
3450 CORRUPTION_ERROR_ACTION(M);\
3451 }\
3452 }
3453
3454 /* Unlink the first chunk from a smallbin */
3455 #define unlink_first_small_chunk(M, B, P, I) {\
3456 mchunkptr F = P->fd;\
3457 assert(P != B);\
3458 assert(P != F);\
3459 assert(chunksize(P) == small_index2size(I));\
3460 if (B == F)\
3461 clear_smallmap(M, I);\
3462 else if (RTCHECK(ok_address(M, F))) {\
3463 B->fd = F;\
3464 F->bk = B;\
3465 }\
3466 else {\
3467 CORRUPTION_ERROR_ACTION(M);\
3468 }\
3469 }
3470
3471
3472
3473 /* Replace dv node, binning the old one */
3474 /* Used only when dvsize known to be small */
3475 #define replace_dv(M, P, S) {\
3476 size_t DVS = M->dvsize;\
3477 if (DVS != 0) {\
3478 mchunkptr DV = M->dv;\
3479 assert(is_small(DVS));\
3480 insert_small_chunk(M, DV, DVS);\
3481 }\
3482 M->dvsize = S;\
3483 M->dv = P;\
3484 }
3485
3486 /* ------------------------- Operations on trees ------------------------- */
3487
3488 /* Insert chunk into tree */
3489 #define insert_large_chunk(M, X, S) {\
3490 tbinptr* H;\
3491 bindex_t I;\
3492 compute_tree_index(S, I);\
3493 H = treebin_at(M, I);\
3494 X->index = I;\
3495 X->child[0] = X->child[1] = 0;\
3496 if (!treemap_is_marked(M, I)) {\
3497 mark_treemap(M, I);\
3498 *H = X;\
3499 X->parent = (tchunkptr)H;\
3500 X->fd = X->bk = X;\
3501 }\
3502 else {\
3503 tchunkptr T = *H;\
3504 size_t K = S << leftshift_for_tree_index(I);\
3505 for (;;) {\
3506 if (chunksize(T) != S) {\
3507 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3508 K <<= 1;\
3509 if (*C != 0)\
3510 T = *C;\
3511 else if (RTCHECK(ok_address(M, C))) {\
3512 *C = X;\
3513 X->parent = T;\
3514 X->fd = X->bk = X;\
3515 break;\
3516 }\
3517 else {\
3518 CORRUPTION_ERROR_ACTION(M);\
3519 break;\
3520 }\
3521 }\
3522 else {\
3523 tchunkptr F = T->fd;\
3524 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3525 T->fd = F->bk = X;\
3526 X->fd = F;\
3527 X->bk = T;\
3528 X->parent = 0;\
3529 break;\
3530 }\
3531 else {\
3532 CORRUPTION_ERROR_ACTION(M);\
3533 break;\
3534 }\
3535 }\
3536 }\
3537 }\
3538 }
3539
3540 /*
3541 Unlink steps:
3542
3543 1. If x is a chained node, unlink it from its same-sized fd/bk links
3544 and choose its bk node as its replacement.
3545 2. If x was the last node of its size, but not a leaf node, it must
3546 be replaced with a leaf node (not merely one with an open left or
3547      right), to make sure that lefts and rights of descendants
3548      correspond properly to bit masks. We use the rightmost descendant
3549 of x. We could use any other leaf, but this is easy to locate and
3550 tends to counteract removal of leftmosts elsewhere, and so keeps
3551 paths shorter than minimally guaranteed. This doesn't loop much
3552 because on average a node in a tree is near the bottom.
3553 3. If x is the base of a chain (i.e., has parent links) relink
3554 x's parent and children to x's replacement (or null if none).
3555 */
3556
3557 #define unlink_large_chunk(M, X) {\
3558 tchunkptr XP = X->parent;\
3559 tchunkptr R;\
3560 if (X->bk != X) {\
3561 tchunkptr F = X->fd;\
3562 R = X->bk;\
3563 if (RTCHECK(ok_address(M, F))) {\
3564 F->bk = R;\
3565 R->fd = F;\
3566 }\
3567 else {\
3568 CORRUPTION_ERROR_ACTION(M);\
3569 }\
3570 }\
3571 else {\
3572 tchunkptr* RP;\
3573 if (((R = *(RP = &(X->child[1]))) != 0) ||\
3574 ((R = *(RP = &(X->child[0]))) != 0)) {\
3575 tchunkptr* CP;\
3576 while ((*(CP = &(R->child[1])) != 0) ||\
3577 (*(CP = &(R->child[0])) != 0)) {\
3578 R = *(RP = CP);\
3579 }\
3580 if (RTCHECK(ok_address(M, RP)))\
3581 *RP = 0;\
3582 else {\
3583 CORRUPTION_ERROR_ACTION(M);\
3584 }\
3585 }\
3586 }\
3587 if (XP != 0) {\
3588 tbinptr* H = treebin_at(M, X->index);\
3589 if (X == *H) {\
3590 if ((*H = R) == 0) \
3591 clear_treemap(M, X->index);\
3592 }\
3593 else if (RTCHECK(ok_address(M, XP))) {\
3594 if (XP->child[0] == X) \
3595 XP->child[0] = R;\
3596 else \
3597 XP->child[1] = R;\
3598 }\
3599 else\
3600 CORRUPTION_ERROR_ACTION(M);\
3601 if (R != 0) {\
3602 if (RTCHECK(ok_address(M, R))) {\
3603 tchunkptr C0, C1;\
3604 R->parent = XP;\
3605 if ((C0 = X->child[0]) != 0) {\
3606 if (RTCHECK(ok_address(M, C0))) {\
3607 R->child[0] = C0;\
3608 C0->parent = R;\
3609 }\
3610 else\
3611 CORRUPTION_ERROR_ACTION(M);\
3612 }\
3613 if ((C1 = X->child[1]) != 0) {\
3614 if (RTCHECK(ok_address(M, C1))) {\
3615 R->child[1] = C1;\
3616 C1->parent = R;\
3617 }\
3618 else\
3619 CORRUPTION_ERROR_ACTION(M);\
3620 }\
3621 }\
3622 else\
3623 CORRUPTION_ERROR_ACTION(M);\
3624 }\
3625 }\
3626 }
3627
3628 /* Relays to large vs small bin operations */
3629
3630 #define insert_chunk(M, P, S)\
3631 if (is_small(S)) insert_small_chunk(M, P, S)\
3632 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3633
3634 #define unlink_chunk(M, P, S)\
3635 if (is_small(S)) unlink_small_chunk(M, P, S)\
3636 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3637
3638
3639 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3640
3641 #if ONLY_MSPACES
3642 #define internal_malloc(m, b) mspace_malloc(m, b)
3643 #define internal_free(m, mem) mspace_free(m,mem);
3644 #else /* ONLY_MSPACES */
3645 #if MSPACES
3646 #define internal_malloc(m, b)\
3647 (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3648 #define internal_free(m, mem)\
3649 if (m == gm) dlfree(mem); else mspace_free(m,mem);
3650 #else /* MSPACES */
3651 #define internal_malloc(m, b) dlmalloc(b)
3652 #define internal_free(m, mem) dlfree(mem)
3653 #endif /* MSPACES */
3654 #endif /* ONLY_MSPACES */
3655
3656 /* ----------------------- Direct-mmapping chunks ----------------------- */
3657
3658 /*
3659 Directly mmapped chunks are set up with an offset to the start of
3660 the mmapped region stored in the prev_foot field of the chunk. This
3661 allows reconstruction of the required argument to MUNMAP when freed,
3662 and also allows adjustment of the returned chunk to meet alignment
3663 requirements (especially in memalign).
3664 */
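/*
  Editorial sketch: given a directly mmapped chunk p, the stored
  offset lets the original mapping be reconstructed, exactly as the
  free path and mmap_resize below do.
*/
#if 0
  char*  base = (char*)p - p->prev_foot;                      /* mapping start */
  size_t len  = chunksize(p) + p->prev_foot + MMAP_FOOT_PAD;  /* mapping length */
  CALL_MUNMAP(base, len);
#endif /* example */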
3665
3666 /* Malloc using mmap */
3667 static void* mmap_alloc(mstate m, size_t nb) {
3668 size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3669 if (mmsize > nb) { /* Check for wrap around 0 */
3670 char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
3671 if (mm != CMFAIL) {
3672 size_t offset = align_offset(chunk2mem(mm));
3673 size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3674 mchunkptr p = (mchunkptr)(mm + offset);
3675 p->prev_foot = offset;
3676 p->head = psize;
3677 mark_inuse_foot(m, p, psize);
3678 chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3679 chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3680
3681 if (m->least_addr == 0 || mm < m->least_addr)
3682 m->least_addr = mm;
3683 if ((m->footprint += mmsize) > m->max_footprint)
3684 m->max_footprint = m->footprint;
3685 assert(is_aligned(chunk2mem(p)));
3686 check_mmapped_chunk(m, p);
3687 return chunk2mem(p);
3688 }
3689 }
3690 return 0;
3691 }
3692
3693 /* Realloc using mmap */
3694 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3695 size_t oldsize = chunksize(oldp);
3696 if (is_small(nb)) /* Can't shrink mmap regions below small size */
3697 return 0;
3698 /* Keep old chunk if big enough but not too big */
3699 if (oldsize >= nb + SIZE_T_SIZE &&
3700 (oldsize - nb) <= (mparams.granularity << 1))
3701 return oldp;
3702 else {
3703 size_t offset = oldp->prev_foot;
3704 size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3705 size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3706 char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3707 oldmmsize, newmmsize, 1);
3708 if (cp != CMFAIL) {
3709 mchunkptr newp = (mchunkptr)(cp + offset);
3710 size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3711 newp->head = psize;
3712 mark_inuse_foot(m, newp, psize);
3713 chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3714 chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3715
3716 if (cp < m->least_addr)
3717 m->least_addr = cp;
3718 if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3719 m->max_footprint = m->footprint;
3720 check_mmapped_chunk(m, newp);
3721 return newp;
3722 }
3723 }
3724 return 0;
3725 }
3726
3727 /* -------------------------- mspace management -------------------------- */
3728
3729 /* Initialize top chunk and its size */
3730 static void init_top(mstate m, mchunkptr p, size_t psize) {
3731 /* Ensure alignment */
3732 size_t offset = align_offset(chunk2mem(p));
3733 p = (mchunkptr)((char*)p + offset);
3734 psize -= offset;
3735
3736 m->top = p;
3737 m->topsize = psize;
3738 p->head = psize | PINUSE_BIT;
3739 /* set size of fake trailing chunk holding overhead space only once */
3740 chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3741 m->trim_check = mparams.trim_threshold; /* reset on each update */
3742 }
3743
3744 /* Initialize bins for a new mstate that is otherwise zeroed out */
3745 static void init_bins(mstate m) {
3746 /* Establish circular links for smallbins */
3747 bindex_t i;
3748 for (i = 0; i < NSMALLBINS; ++i) {
3749 sbinptr bin = smallbin_at(m,i);
3750 bin->fd = bin->bk = bin;
3751 }
3752 }
3753
3754 #if PROCEED_ON_ERROR
3755
3756 /* default corruption action */
3757 static void reset_on_error(mstate m) {
3758 int i;
3759 ++malloc_corruption_error_count;
3760 /* Reinitialize fields to forget about all memory */
3761 m->smallbins = m->treebins = 0;
3762 m->dvsize = m->topsize = 0;
3763 m->seg.base = 0;
3764 m->seg.size = 0;
3765 m->seg.next = 0;
3766 m->top = m->dv = 0;
3767 for (i = 0; i < NTREEBINS; ++i)
3768 *treebin_at(m, i) = 0;
3769 init_bins(m);
3770 }
3771 #endif /* PROCEED_ON_ERROR */
3772
3773 /* Allocate chunk and prepend remainder with chunk in successor base. */
3774 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3775 size_t nb) {
3776 mchunkptr p = align_as_chunk(newbase);
3777 mchunkptr oldfirst = align_as_chunk(oldbase);
3778 size_t psize = (char*)oldfirst - (char*)p;
3779 mchunkptr q = chunk_plus_offset(p, nb);
3780 size_t qsize = psize - nb;
3781 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3782
3783 assert((char*)oldfirst > (char*)q);
3784 assert(pinuse(oldfirst));
3785 assert(qsize >= MIN_CHUNK_SIZE);
3786
3787 /* consolidate remainder with first chunk of old base */
3788 if (oldfirst == m->top) {
3789 size_t tsize = m->topsize += qsize;
3790 m->top = q;
3791 q->head = tsize | PINUSE_BIT;
3792 check_top_chunk(m, q);
3793 }
3794 else if (oldfirst == m->dv) {
3795 size_t dsize = m->dvsize += qsize;
3796 m->dv = q;
3797 set_size_and_pinuse_of_free_chunk(q, dsize);
3798 }
3799 else {
3800 if (!is_inuse(oldfirst)) {
3801 size_t nsize = chunksize(oldfirst);
3802 unlink_chunk(m, oldfirst, nsize);
3803 oldfirst = chunk_plus_offset(oldfirst, nsize);
3804 qsize += nsize;
3805 }
3806 set_free_with_pinuse(q, qsize, oldfirst);
3807 insert_chunk(m, q, qsize);
3808 check_free_chunk(m, q);
3809 }
3810
3811 check_malloced_chunk(m, chunk2mem(p), nb);
3812 return chunk2mem(p);
3813 }
3814
3815 /* Add a segment to hold a new noncontiguous region */
3816 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3817 /* Determine locations and sizes of segment, fenceposts, old top */
3818 char* old_top = (char*)m->top;
3819 msegmentptr oldsp = segment_holding(m, old_top);
3820 char* old_end = oldsp->base + oldsp->size;
3821 size_t ssize = pad_request(sizeof(struct malloc_segment));
3822 char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3823 size_t offset = align_offset(chunk2mem(rawsp));
3824 char* asp = rawsp + offset;
3825 char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3826 mchunkptr sp = (mchunkptr)csp;
3827 msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3828 mchunkptr tnext = chunk_plus_offset(sp, ssize);
3829 mchunkptr p = tnext;
3830 int nfences = 0;
3831
3832 /* reset top to new space */
3833 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3834
3835 /* Set up segment record */
3836 assert(is_aligned(ss));
3837 set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3838 *ss = m->seg; /* Push current record */
3839 m->seg.base = tbase;
3840 m->seg.size = tsize;
3841 m->seg.sflags = mmapped;
3842 m->seg.next = ss;
3843
3844 /* Insert trailing fenceposts */
3845 for (;;) {
3846 mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3847 p->head = FENCEPOST_HEAD;
3848 ++nfences;
3849 if ((char*)(&(nextp->head)) < old_end)
3850 p = nextp;
3851 else
3852 break;
3853 }
3854 assert(nfences >= 2);
3855
3856 /* Insert the rest of old top into a bin as an ordinary free chunk */
3857 if (csp != old_top) {
3858 mchunkptr q = (mchunkptr)old_top;
3859 size_t psize = csp - old_top;
3860 mchunkptr tn = chunk_plus_offset(q, psize);
3861 set_free_with_pinuse(q, psize, tn);
3862 insert_chunk(m, q, psize);
3863 }
3864
3865 check_top_chunk(m, m->top);
3866 }
3867
3868 /* -------------------------- System allocation -------------------------- */
3869
3870 /* Get memory from system using MORECORE or MMAP */
3871 static void* sys_alloc(mstate m, size_t nb) {
3872 char* tbase = CMFAIL;
3873 size_t tsize = 0;
3874 flag_t mmap_flag = 0;
3875
3876 ensure_initialization();
3877
3878 /* Directly map large chunks, but only if already initialized */
3879 if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) {
3880 void* mem = mmap_alloc(m, nb);
3881 if (mem != 0)
3882 return mem;
3883 }
3884
3885 /*
3886 Try getting memory in any of three ways (in most-preferred to
3887 least-preferred order):
3888 1. A call to MORECORE that can normally contiguously extend memory.
3889      (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE
3890 or main space is mmapped or a previous contiguous call failed)
3891 2. A call to MMAP new space (disabled if not HAVE_MMAP).
3892 Note that under the default settings, if MORECORE is unable to
3893 fulfill a request, and HAVE_MMAP is true, then mmap is
3894 used as a noncontiguous system allocator. This is a useful backup
3895 strategy for systems with holes in address spaces -- in this case
3896 sbrk cannot contiguously expand the heap, but mmap may be able to
3897 find space.
3898 3. A call to MORECORE that cannot usually contiguously extend memory.
3899 (disabled if not HAVE_MORECORE)
3900
3901 In all cases, we need to request enough bytes from system to ensure
3902 we can malloc nb bytes upon success, so pad with enough space for
3903 top_foot, plus alignment-pad to make sure we don't lose bytes if
3904 not on boundary, and round this up to a granularity unit.
3905 */
3906
3907 if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
3908 char* br = CMFAIL;
3909 msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
3910 size_t asize = 0;
3911 ACQUIRE_MALLOC_GLOBAL_LOCK();
3912
3913 if (ss == 0) { /* First time through or recovery */
3914 char* base = (char*)CALL_MORECORE(0);
3915 if (base != CMFAIL) {
3916 asize = granularity_align(nb + SYS_ALLOC_PADDING);
3917 /* Adjust to end on a page boundary */
3918 if (!is_page_aligned(base))
3919 asize += (page_align((size_t)base) - (size_t)base);
3920 /* Can't call MORECORE if size is negative when treated as signed */
3921 if (asize < HALF_MAX_SIZE_T &&
3922 (br = (char*)(CALL_MORECORE(asize))) == base) {
3923 tbase = base;
3924 tsize = asize;
3925 }
3926 }
3927 }
3928 else {
3929 /* Subtract out existing available top space from MORECORE request. */
3930 asize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
3931       /* Use mem here only if it did contiguously extend old space */
3932 if (asize < HALF_MAX_SIZE_T &&
3933 (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
3934 tbase = br;
3935 tsize = asize;
3936 }
3937 }
3938
3939 if (tbase == CMFAIL) { /* Cope with partial failure */
3940 if (br != CMFAIL) { /* Try to use/extend the space we did get */
3941 if (asize < HALF_MAX_SIZE_T &&
3942 asize < nb + SYS_ALLOC_PADDING) {
3943 size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - asize);
3944 if (esize < HALF_MAX_SIZE_T) {
3945 char* end = (char*)CALL_MORECORE(esize);
3946 if (end != CMFAIL)
3947 asize += esize;
3948 else { /* Can't use; try to release */
3949 (void) CALL_MORECORE(-asize);
3950 br = CMFAIL;
3951 }
3952 }
3953 }
3954 }
3955 if (br != CMFAIL) { /* Use the space we did get */
3956 tbase = br;
3957 tsize = asize;
3958 }
3959 else
3960 disable_contiguous(m); /* Don't try contiguous path in the future */
3961 }
3962
3963 RELEASE_MALLOC_GLOBAL_LOCK();
3964 }
3965
3966 if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
3967 size_t rsize = granularity_align(nb + SYS_ALLOC_PADDING);
3968 if (rsize > nb) { /* Fail if wraps around zero */
3969 char* mp = (char*)(CALL_MMAP(rsize));
3970 if (mp != CMFAIL) {
3971 tbase = mp;
3972 tsize = rsize;
3973 mmap_flag = USE_MMAP_BIT;
3974 }
3975 }
3976 }
3977
3978 if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
3979 size_t asize = granularity_align(nb + SYS_ALLOC_PADDING);
3980 if (asize < HALF_MAX_SIZE_T) {
3981 char* br = CMFAIL;
3982 char* end = CMFAIL;
3983 ACQUIRE_MALLOC_GLOBAL_LOCK();
3984 br = (char*)(CALL_MORECORE(asize));
3985 end = (char*)(CALL_MORECORE(0));
3986 RELEASE_MALLOC_GLOBAL_LOCK();
3987 if (br != CMFAIL && end != CMFAIL && br < end) {
3988 size_t ssize = end - br;
3989 if (ssize > nb + TOP_FOOT_SIZE) {
3990 tbase = br;
3991 tsize = ssize;
3992 }
3993 }
3994 }
3995 }
3996
3997 if (tbase != CMFAIL) {
3998
3999 if ((m->footprint += tsize) > m->max_footprint)
4000 m->max_footprint = m->footprint;
4001
4002 if (!is_initialized(m)) { /* first-time initialization */
4003 if (m->least_addr == 0 || tbase < m->least_addr)
4004 m->least_addr = tbase;
4005 m->seg.base = tbase;
4006 m->seg.size = tsize;
4007 m->seg.sflags = mmap_flag;
4008 m->magic = mparams.magic;
4009 m->release_checks = MAX_RELEASE_CHECK_RATE;
4010 init_bins(m);
4011 #if !ONLY_MSPACES
4012 if (is_global(m))
4013 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4014 else
4015 #endif
4016 {
4017 /* Offset top by embedded malloc_state */
4018 mchunkptr mn = next_chunk(mem2chunk(m));
4019 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
4020 }
4021 }
4022
4023 else {
4024 /* Try to merge with an existing segment */
4025 msegmentptr sp = &m->seg;
4026 /* Only consider most recent segment if traversal suppressed */
4027 while (sp != 0 && tbase != sp->base + sp->size)
4028 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4029 if (sp != 0 &&
4030 !is_extern_segment(sp) &&
4031 (sp->sflags & USE_MMAP_BIT) == mmap_flag &&
4032 segment_holds(sp, m->top)) { /* append */
4033 sp->size += tsize;
4034 init_top(m, m->top, m->topsize + tsize);
4035 }
4036 else {
4037 if (tbase < m->least_addr)
4038 m->least_addr = tbase;
4039 sp = &m->seg;
4040 while (sp != 0 && sp->base != tbase + tsize)
4041 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4042 if (sp != 0 &&
4043 !is_extern_segment(sp) &&
4044 (sp->sflags & USE_MMAP_BIT) == mmap_flag) {
4045 char* oldbase = sp->base;
4046 sp->base = tbase;
4047 sp->size += tsize;
4048 return prepend_alloc(m, tbase, oldbase, nb);
4049 }
4050 else
4051 add_segment(m, tbase, tsize, mmap_flag);
4052 }
4053 }
4054
4055 if (nb < m->topsize) { /* Allocate from new or extended top space */
4056 size_t rsize = m->topsize -= nb;
4057 mchunkptr p = m->top;
4058 mchunkptr r = m->top = chunk_plus_offset(p, nb);
4059 r->head = rsize | PINUSE_BIT;
4060 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
4061 check_top_chunk(m, m->top);
4062 check_malloced_chunk(m, chunk2mem(p), nb);
4063 return chunk2mem(p);
4064 }
4065 }
4066
4067 MALLOC_FAILURE_ACTION;
4068 return 0;
4069 }
4070
4071 /* ----------------------- system deallocation -------------------------- */
4072
4073 /* Unmap and unlink any mmapped segments that don't contain used chunks */
4074 static size_t release_unused_segments(mstate m) {
4075 size_t released = 0;
4076 int nsegs = 0;
4077 msegmentptr pred = &m->seg;
4078 msegmentptr sp = pred->next;
4079 while (sp != 0) {
4080 char* base = sp->base;
4081 size_t size = sp->size;
4082 msegmentptr next = sp->next;
4083 ++nsegs;
4084 if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
4085 mchunkptr p = align_as_chunk(base);
4086 size_t psize = chunksize(p);
4087 /* Can unmap if first chunk holds entire segment and not pinned */
4088 if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
4089 tchunkptr tp = (tchunkptr)p;
4090 assert(segment_holds(sp, (char*)sp));
4091 if (p == m->dv) {
4092 m->dv = 0;
4093 m->dvsize = 0;
4094 }
4095 else {
4096 unlink_large_chunk(m, tp);
4097 }
4098 if (CALL_MUNMAP(base, size) == 0) {
4099 released += size;
4100 m->footprint -= size;
4101 /* unlink obsoleted record */
4102 sp = pred;
4103 sp->next = next;
4104 }
4105 else { /* back out if cannot unmap */
4106 insert_large_chunk(m, tp, psize);
4107 }
4108 }
4109 }
4110 if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
4111 break;
4112 pred = sp;
4113 sp = next;
4114 }
4115 /* Reset check counter */
4116 m->release_checks = ((nsegs > MAX_RELEASE_CHECK_RATE)?
4117 nsegs : MAX_RELEASE_CHECK_RATE);
4118 return released;
4119 }
4120
4121 static int sys_trim(mstate m, size_t pad) {
4122 size_t released = 0;
4123 ensure_initialization();
4124 if (pad < MAX_REQUEST && is_initialized(m)) {
4125 pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
4126
4127 if (m->topsize > pad) {
4128 /* Shrink top space in granularity-size units, keeping at least one */
4129 size_t unit = mparams.granularity;
4130 size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
4131 SIZE_T_ONE) * unit;
4132 msegmentptr sp = segment_holding(m, (char*)m->top);
4133
4134 if (!is_extern_segment(sp)) {
4135 if (is_mmapped_segment(sp)) {
4136 if (HAVE_MMAP &&
4137 sp->size >= extra &&
4138 !has_segment_link(m, sp)) { /* can't shrink if pinned */
4139 size_t newsize = sp->size - extra;
4140 /* Prefer mremap, fall back to munmap */
4141 if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
4142 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
4143 released = extra;
4144 }
4145 }
4146 }
4147 else if (HAVE_MORECORE) {
4148 if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
4149 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
4150 ACQUIRE_MALLOC_GLOBAL_LOCK();
4151 {
4152 /* Make sure end of memory is where we last set it. */
4153 char* old_br = (char*)(CALL_MORECORE(0));
4154 if (old_br == sp->base + sp->size) {
4155 char* rel_br = (char*)(CALL_MORECORE(-extra));
4156 char* new_br = (char*)(CALL_MORECORE(0));
4157 if (rel_br != CMFAIL && new_br < old_br)
4158 released = old_br - new_br;
4159 }
4160 }
4161 RELEASE_MALLOC_GLOBAL_LOCK();
4162 }
4163 }
4164
4165 if (released != 0) {
4166 sp->size -= released;
4167 m->footprint -= released;
4168 init_top(m, m->top, m->topsize - released);
4169 check_top_chunk(m, m->top);
4170 }
4171 }
4172
4173 /* Unmap any unused mmapped segments */
4174 if (HAVE_MMAP)
4175 released += release_unused_segments(m);
4176
4177 /* On failure, disable autotrim to avoid repeated failed future calls */
4178 if (released == 0 && m->topsize > m->trim_check)
4179 m->trim_check = MAX_SIZE_T;
4180 }
4181
4182 return (released != 0)? 1 : 0;
4183 }
4184
4185
4186 /* ---------------------------- malloc support --------------------------- */
4187
4188 /* allocate a large request from the best fitting chunk in a treebin */
4189 static void* tmalloc_large(mstate m, size_t nb) {
4190 tchunkptr v = 0;
4191 size_t rsize = -nb; /* Unsigned negation */
4192 tchunkptr t;
4193 bindex_t idx;
4194 compute_tree_index(nb, idx);
4195 if ((t = *treebin_at(m, idx)) != 0) {
4196 /* Traverse tree for this bin looking for node with size == nb */
4197 size_t sizebits = nb << leftshift_for_tree_index(idx);
4198 tchunkptr rst = 0; /* The deepest untaken right subtree */
4199 for (;;) {
4200 tchunkptr rt;
4201 size_t trem = chunksize(t) - nb;
4202 if (trem < rsize) {
4203 v = t;
4204 if ((rsize = trem) == 0)
4205 break;
4206 }
4207 rt = t->child[1];
4208 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
4209 if (rt != 0 && rt != t)
4210 rst = rt;
4211 if (t == 0) {
4212 t = rst; /* set t to least subtree holding sizes > nb */
4213 break;
4214 }
4215 sizebits <<= 1;
4216 }
4217 }
4218 if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
4219 binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
4220 if (leftbits != 0) {
4221 bindex_t i;
4222 binmap_t leastbit = least_bit(leftbits);
4223 compute_bit2idx(leastbit, i);
4224 t = *treebin_at(m, i);
4225 }
4226 }
4227
4228 while (t != 0) { /* find smallest of tree or subtree */
4229 size_t trem = chunksize(t) - nb;
4230 if (trem < rsize) {
4231 rsize = trem;
4232 v = t;
4233 }
4234 t = leftmost_child(t);
4235 }
4236
4237 /* If dv is a better fit, return 0 so malloc will use it */
4238 if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
4239 if (RTCHECK(ok_address(m, v))) { /* split */
4240 mchunkptr r = chunk_plus_offset(v, nb);
4241 assert(chunksize(v) == rsize + nb);
4242 if (RTCHECK(ok_next(v, r))) {
4243 unlink_large_chunk(m, v);
4244 if (rsize < MIN_CHUNK_SIZE)
4245 set_inuse_and_pinuse(m, v, (rsize + nb));
4246 else {
4247 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4248 set_size_and_pinuse_of_free_chunk(r, rsize);
4249 insert_chunk(m, r, rsize);
4250 }
4251 return chunk2mem(v);
4252 }
4253 }
4254 CORRUPTION_ERROR_ACTION(m);
4255 }
4256 return 0;
4257 }
4258
4259 /* allocate a small request from the best fitting chunk in a treebin */
4260 static void* tmalloc_small(mstate m, size_t nb) {
4261 tchunkptr t, v;
4262 size_t rsize;
4263 bindex_t i;
4264 binmap_t leastbit = least_bit(m->treemap);
4265 compute_bit2idx(leastbit, i);
4266 v = t = *treebin_at(m, i);
4267 rsize = chunksize(t) - nb;
4268
4269 while ((t = leftmost_child(t)) != 0) {
4270 size_t trem = chunksize(t) - nb;
4271 if (trem < rsize) {
4272 rsize = trem;
4273 v = t;
4274 }
4275 }
4276
4277 if (RTCHECK(ok_address(m, v))) {
4278 mchunkptr r = chunk_plus_offset(v, nb);
4279 assert(chunksize(v) == rsize + nb);
4280 if (RTCHECK(ok_next(v, r))) {
4281 unlink_large_chunk(m, v);
4282 if (rsize < MIN_CHUNK_SIZE)
4283 set_inuse_and_pinuse(m, v, (rsize + nb));
4284 else {
4285 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4286 set_size_and_pinuse_of_free_chunk(r, rsize);
4287 replace_dv(m, r, rsize);
4288 }
4289 return chunk2mem(v);
4290 }
4291 }
4292
4293 CORRUPTION_ERROR_ACTION(m);
4294 return 0;
4295 }
4296
4297 /* --------------------------- realloc support --------------------------- */
4298
4299 static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
4300 if (bytes >= MAX_REQUEST) {
4301 MALLOC_FAILURE_ACTION;
4302 return 0;
4303 }
4304 if (!PREACTION(m)) {
4305 mchunkptr oldp = mem2chunk(oldmem);
4306 size_t oldsize = chunksize(oldp);
4307 mchunkptr next = chunk_plus_offset(oldp, oldsize);
4308 mchunkptr newp = 0;
4309 void* extra = 0;
4310
4311 /* Try to either shrink or extend into top. Else malloc-copy-free */
4312
4313 if (RTCHECK(ok_address(m, oldp) && ok_inuse(oldp) &&
4314 ok_next(oldp, next) && ok_pinuse(next))) {
4315 size_t nb = request2size(bytes);
4316 if (is_mmapped(oldp))
4317 newp = mmap_resize(m, oldp, nb);
4318 else if (oldsize >= nb) { /* already big enough */
4319 size_t rsize = oldsize - nb;
4320 newp = oldp;
4321 if (rsize >= MIN_CHUNK_SIZE) {
4322 mchunkptr remainder = chunk_plus_offset(newp, nb);
4323 set_inuse(m, newp, nb);
4324 set_inuse_and_pinuse(m, remainder, rsize);
4325 extra = chunk2mem(remainder);
4326 }
4327 }
4328 else if (next == m->top && oldsize + m->topsize > nb) {
4329 /* Expand into top */
4330 size_t newsize = oldsize + m->topsize;
4331 size_t newtopsize = newsize - nb;
4332 mchunkptr newtop = chunk_plus_offset(oldp, nb);
4333 set_inuse(m, oldp, nb);
4334 newtop->head = newtopsize |PINUSE_BIT;
4335 m->top = newtop;
4336 m->topsize = newtopsize;
4337 newp = oldp;
4338 }
4339 }
4340 else {
4341 USAGE_ERROR_ACTION(m, oldmem);
4342 POSTACTION(m);
4343 return 0;
4344 }
4345 #if DEBUG
4346 if (newp != 0) {
4347 check_inuse_chunk(m, newp); /* Check requires lock */
4348 }
4349 #endif
4350
4351 POSTACTION(m);
4352
4353 if (newp != 0) {
4354 if (extra != 0) {
4355 internal_free(m, extra);
4356 }
4357 return chunk2mem(newp);
4358 }
4359 else {
4360 void* newmem = internal_malloc(m, bytes);
4361 if (newmem != 0) {
4362 size_t oc = oldsize - overhead_for(oldp);
4363 memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
4364 internal_free(m, oldmem);
4365 }
4366 return newmem;
4367 }
4368 }
4369 return 0;
4370 }
4371
4372 /* --------------------------- memalign support -------------------------- */
4373
4374 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
4375 if (alignment <= MALLOC_ALIGNMENT) /* Can just use malloc */
4376 return internal_malloc(m, bytes);
4377 if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
4378 alignment = MIN_CHUNK_SIZE;
4379 if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
4380 size_t a = MALLOC_ALIGNMENT << 1;
4381 while (a < alignment) a <<= 1;
4382 alignment = a;
4383 }
4384
4385 if (bytes >= MAX_REQUEST - alignment) {
4386 if (m != 0) { /* Test isn't needed but avoids compiler warning */
4387 MALLOC_FAILURE_ACTION;
4388 }
4389 }
4390 else {
4391 size_t nb = request2size(bytes);
4392 size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
4393 char* mem = (char*)internal_malloc(m, req);
4394 if (mem != 0) {
4395 void* leader = 0;
4396 void* trailer = 0;
4397 mchunkptr p = mem2chunk(mem);
4398
4399 if (PREACTION(m)) return 0;
4400 if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
4401 /*
4402 Find an aligned spot inside chunk. Since we need to give
4403 back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4404 the first calculation places us at a spot with less than
4405 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4406 We've allocated enough total room so that this is always
4407 possible.
4408 */
4409 char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
4410 alignment -
4411 SIZE_T_ONE)) &
4412 -alignment));
4413 char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
4414 br : br+alignment;
4415 mchunkptr newp = (mchunkptr)pos;
4416 size_t leadsize = pos - (char*)(p);
4417 size_t newsize = chunksize(p) - leadsize;
4418
4419 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4420 newp->prev_foot = p->prev_foot + leadsize;
4421 newp->head = newsize;
4422 }
4423 else { /* Otherwise, give back leader, use the rest */
4424 set_inuse(m, newp, newsize);
4425 set_inuse(m, p, leadsize);
4426 leader = chunk2mem(p);
4427 }
4428 p = newp;
4429 }
4430
4431 /* Give back spare room at the end */
4432 if (!is_mmapped(p)) {
4433 size_t size = chunksize(p);
4434 if (size > nb + MIN_CHUNK_SIZE) {
4435 size_t remainder_size = size - nb;
4436 mchunkptr remainder = chunk_plus_offset(p, nb);
4437 set_inuse(m, p, nb);
4438 set_inuse(m, remainder, remainder_size);
4439 trailer = chunk2mem(remainder);
4440 }
4441 }
4442
4443 assert (chunksize(p) >= nb);
4444 assert((((size_t)(chunk2mem(p))) % alignment) == 0);
4445 check_inuse_chunk(m, p);
4446 POSTACTION(m);
4447 if (leader != 0) {
4448 internal_free(m, leader);
4449 }
4450 if (trailer != 0) {
4451 internal_free(m, trailer);
4452 }
4453 return chunk2mem(p);
4454 }
4455 }
4456 return 0;
4457 }
4458
4459 /* ------------------------ comalloc/coalloc support --------------------- */
4460
4461 static void** ialloc(mstate m,
4462 size_t n_elements,
4463 size_t* sizes,
4464 int opts,
4465 void* chunks[]) {
4466 /*
4467 This provides common support for independent_X routines, handling
4468 all of the combinations that can result.
4469
4470 The opts arg has:
4471 bit 0 set if all elements are same size (using sizes[0])
4472 bit 1 set if elements should be zeroed
4473 */
4474
4475 size_t element_size; /* chunksize of each element, if all same */
4476 size_t contents_size; /* total size of elements */
4477 size_t array_size; /* request size of pointer array */
4478 void* mem; /* malloced aggregate space */
4479 mchunkptr p; /* corresponding chunk */
4480 size_t remainder_size; /* remaining bytes while splitting */
4481 void** marray; /* either "chunks" or malloced ptr array */
4482 mchunkptr array_chunk; /* chunk for malloced ptr array */
4483 flag_t was_enabled; /* to disable mmap */
4484 size_t size;
4485 size_t i;
4486
4487 ensure_initialization();
4488 /* compute array length, if needed */
4489 if (chunks != 0) {
4490 if (n_elements == 0)
4491 return chunks; /* nothing to do */
4492 marray = chunks;
4493 array_size = 0;
4494 }
4495 else {
4496 /* if empty req, must still return chunk representing empty array */
4497 if (n_elements == 0)
4498 return (void**)internal_malloc(m, 0);
4499 marray = 0;
4500 array_size = request2size(n_elements * (sizeof(void*)));
4501 }
4502
4503 /* compute total element size */
4504 if (opts & 0x1) { /* all-same-size */
4505 element_size = request2size(*sizes);
4506 contents_size = n_elements * element_size;
4507 }
4508 else { /* add up all the sizes */
4509 element_size = 0;
4510 contents_size = 0;
4511 for (i = 0; i != n_elements; ++i)
4512 contents_size += request2size(sizes[i]);
4513 }
4514
4515 size = contents_size + array_size;
4516
4517 /*
4518 Allocate the aggregate chunk. First disable direct-mmapping so
4519 malloc won't use it, since we would not be able to later
4520 free/realloc space internal to a segregated mmap region.
4521 */
4522 was_enabled = use_mmap(m);
4523 disable_mmap(m);
4524 mem = internal_malloc(m, size - CHUNK_OVERHEAD);
4525 if (was_enabled)
4526 enable_mmap(m);
4527 if (mem == 0)
4528 return 0;
4529
4530 if (PREACTION(m)) return 0;
4531 p = mem2chunk(mem);
4532 remainder_size = chunksize(p);
4533
4534 assert(!is_mmapped(p));
4535
4536 if (opts & 0x2) { /* optionally clear the elements */
4537 memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
4538 }
4539
4540 /* If not provided, allocate the pointer array as final part of chunk */
4541 if (marray == 0) {
4542 size_t array_chunk_size;
4543 array_chunk = chunk_plus_offset(p, contents_size);
4544 array_chunk_size = remainder_size - contents_size;
4545 marray = (void**) (chunk2mem(array_chunk));
4546 set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
4547 remainder_size = contents_size;
4548 }
4549
4550 /* split out elements */
4551 for (i = 0; ; ++i) {
4552 marray[i] = chunk2mem(p);
4553 if (i != n_elements-1) {
4554 if (element_size != 0)
4555 size = element_size;
4556 else
4557 size = request2size(sizes[i]);
4558 remainder_size -= size;
4559 set_size_and_pinuse_of_inuse_chunk(m, p, size);
4560 p = chunk_plus_offset(p, size);
4561 }
4562 else { /* the final element absorbs any overallocation slop */
4563 set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
4564 break;
4565 }
4566 }
4567
4568 #if DEBUG
4569 if (marray != chunks) {
4570 /* final element must have exactly exhausted chunk */
4571 if (element_size != 0) {
4572 assert(remainder_size == element_size);
4573 }
4574 else {
4575 assert(remainder_size == request2size(sizes[i]));
4576 }
4577 check_inuse_chunk(m, mem2chunk(marray));
4578 }
4579 for (i = 0; i != n_elements; ++i)
4580 check_inuse_chunk(m, mem2chunk(marray[i]));
4581
4582 #endif /* DEBUG */
4583
4584 POSTACTION(m);
4585 return marray;
4586 }
4587
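/*
  Editorial usage sketch: ialloc backs the public independent_X
  routines defined later in this file. With default naming,
  dlindependent_comalloc carves n differently sized elements out of
  one aggregate chunk; each pointer may still be freed individually.
*/
#if 0
  size_t sizes[3] = { 16, 48, 112 };    /* arbitrary example sizes */
  void*  ptrs[3];
  if (dlindependent_comalloc(3, sizes, ptrs) != 0) {
    /* ptrs[0..2] now point into a single contiguous allocation */
  }
#endif /* example */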
4588
4589 /* -------------------------- public routines ---------------------------- */
4590
4591 #if !ONLY_MSPACES
4592
4593 void* dlmalloc(size_t bytes) {
4594 /*
4595 Basic algorithm:
4596 If a small request (< 256 bytes minus per-chunk overhead):
4597 1. If one exists, use a remainderless chunk in associated smallbin.
4598 (Remainderless means that there are too few excess bytes to
4599 represent as a chunk.)
4600 2. If it is big enough, use the dv chunk, which is normally the
4601 chunk adjacent to the one used for the most recent small request.
4602 3. If one exists, split the smallest available chunk in a bin,
4603 saving remainder in dv.
4604 4. If it is big enough, use the top chunk.
4605 5. If available, get memory from system and use it
4606 Otherwise, for a large request:
4607 1. Find the smallest available binned chunk that fits, and use it
4608 if it is better fitting than dv chunk, splitting if necessary.
4609 2. If better fitting than any binned chunk, use the dv chunk.
4610 3. If it is big enough, use the top chunk.
4611 4. If request size >= mmap threshold, try to directly mmap this chunk.
4612 5. If available, get memory from system and use it
4613
4614 The ugly goto's here ensure that postaction occurs along all paths.
4615 */
4616
4617 #if USE_LOCKS
4618 ensure_initialization(); /* initialize in sys_alloc if not using locks */
4619 #endif
4620
4621 if (!PREACTION(gm)) {
4622 void* mem;
4623 size_t nb;
4624 if (bytes <= MAX_SMALL_REQUEST) {
4625 bindex_t idx;
4626 binmap_t smallbits;
4627 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4628 idx = small_index(nb);
4629 smallbits = gm->smallmap >> idx;
4630
4631 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4632 mchunkptr b, p;
4633 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4634 b = smallbin_at(gm, idx);
4635 p = b->fd;
4636 assert(chunksize(p) == small_index2size(idx));
4637 unlink_first_small_chunk(gm, b, p, idx);
4638 set_inuse_and_pinuse(gm, p, small_index2size(idx));
4639 mem = chunk2mem(p);
4640 check_malloced_chunk(gm, mem, nb);
4641 goto postaction;
4642 }
4643
4644 else if (nb > gm->dvsize) {
4645 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4646 mchunkptr b, p, r;
4647 size_t rsize;
4648 bindex_t i;
4649 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4650 binmap_t leastbit = least_bit(leftbits);
4651 compute_bit2idx(leastbit, i);
4652 b = smallbin_at(gm, i);
4653 p = b->fd;
4654 assert(chunksize(p) == small_index2size(i));
4655 unlink_first_small_chunk(gm, b, p, i);
4656 rsize = small_index2size(i) - nb;
4657 /* Fit here cannot be remainderless if 4byte sizes */
4658 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4659 set_inuse_and_pinuse(gm, p, small_index2size(i));
4660 else {
4661 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4662 r = chunk_plus_offset(p, nb);
4663 set_size_and_pinuse_of_free_chunk(r, rsize);
4664 replace_dv(gm, r, rsize);
4665 }
4666 mem = chunk2mem(p);
4667 check_malloced_chunk(gm, mem, nb);
4668 goto postaction;
4669 }
4670
4671 else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4672 check_malloced_chunk(gm, mem, nb);
4673 goto postaction;
4674 }
4675 }
4676 }
4677 else if (bytes >= MAX_REQUEST)
4678 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4679 else {
4680 nb = pad_request(bytes);
4681 if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4682 check_malloced_chunk(gm, mem, nb);
4683 goto postaction;
4684 }
4685 }
4686
4687 if (nb <= gm->dvsize) {
4688 size_t rsize = gm->dvsize - nb;
4689 mchunkptr p = gm->dv;
4690 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4691 mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4692 gm->dvsize = rsize;
4693 set_size_and_pinuse_of_free_chunk(r, rsize);
4694 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4695 }
4696 else { /* exhaust dv */
4697 size_t dvs = gm->dvsize;
4698 gm->dvsize = 0;
4699 gm->dv = 0;
4700 set_inuse_and_pinuse(gm, p, dvs);
4701 }
4702 mem = chunk2mem(p);
4703 check_malloced_chunk(gm, mem, nb);
4704 goto postaction;
4705 }
4706
4707 else if (nb < gm->topsize) { /* Split top */
4708 size_t rsize = gm->topsize -= nb;
4709 mchunkptr p = gm->top;
4710 mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4711 r->head = rsize | PINUSE_BIT;
4712 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4713 mem = chunk2mem(p);
4714 check_top_chunk(gm, gm->top);
4715 check_malloced_chunk(gm, mem, nb);
4716 goto postaction;
4717 }
4718
4719 mem = sys_alloc(gm, nb);
4720
4721 postaction:
4722 POSTACTION(gm);
4723 return mem;
4724 }
4725
4726 return 0;
4727 }
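/*
  A worked example of the small-request path above, assuming 4-byte
  size_t, 8-byte alignment, and no FOOTERS (one 4-byte word of
  per-chunk overhead): dlmalloc(13) pads the request to
  nb = (13 + 4 + 7) & ~7 = 24 bytes, giving smallbin index
  24 >> 3 = 3 with the default 8-byte bin granularity. If that bin or
  the next holds a chunk, it is returned whole (remainderless);
  otherwise the dv chunk, a split of a larger binned chunk, the top
  chunk, or sys_alloc services the request, in that order.
*/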
4728
4729 void dlfree(void* mem) {
4730 /*
4731      Consolidate freed chunks with preceding or succeeding bordering
4732 free chunks, if they exist, and then place in a bin. Intermixed
4733 with special cases for top, dv, mmapped chunks, and usage errors.
4734 */
4735
4736 if (mem != 0) {
4737 mchunkptr p = mem2chunk(mem);
4738 #if FOOTERS
4739 mstate fm = get_mstate_for(p);
4740 if (!ok_magic(fm)) {
4741 USAGE_ERROR_ACTION(fm, p);
4742 return;
4743 }
4744 #else /* FOOTERS */
4745 #define fm gm
4746 #endif /* FOOTERS */
4747 if (!PREACTION(fm)) {
4748 check_inuse_chunk(fm, p);
4749 if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
4750 size_t psize = chunksize(p);
4751 mchunkptr next = chunk_plus_offset(p, psize);
4752 if (!pinuse(p)) {
4753 size_t prevsize = p->prev_foot;
4754 if (is_mmapped(p)) {
4755 psize += prevsize + MMAP_FOOT_PAD;
4756 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4757 fm->footprint -= psize;
4758 goto postaction;
4759 }
4760 else {
4761 mchunkptr prev = chunk_minus_offset(p, prevsize);
4762 psize += prevsize;
4763 p = prev;
4764 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4765 if (p != fm->dv) {
4766 unlink_chunk(fm, p, prevsize);
4767 }
4768 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4769 fm->dvsize = psize;
4770 set_free_with_pinuse(p, psize, next);
4771 goto postaction;
4772 }
4773 }
4774 else
4775 goto erroraction;
4776 }
4777 }
4778
4779 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4780 if (!cinuse(next)) { /* consolidate forward */
4781 if (next == fm->top) {
4782 size_t tsize = fm->topsize += psize;
4783 fm->top = p;
4784 p->head = tsize | PINUSE_BIT;
4785 if (p == fm->dv) {
4786 fm->dv = 0;
4787 fm->dvsize = 0;
4788 }
4789 if (should_trim(fm, tsize))
4790 sys_trim(fm, 0);
4791 goto postaction;
4792 }
4793 else if (next == fm->dv) {
4794 size_t dsize = fm->dvsize += psize;
4795 fm->dv = p;
4796 set_size_and_pinuse_of_free_chunk(p, dsize);
4797 goto postaction;
4798 }
4799 else {
4800 size_t nsize = chunksize(next);
4801 psize += nsize;
4802 unlink_chunk(fm, next, nsize);
4803 set_size_and_pinuse_of_free_chunk(p, psize);
4804 if (p == fm->dv) {
4805 fm->dvsize = psize;
4806 goto postaction;
4807 }
4808 }
4809 }
4810 else
4811 set_free_with_pinuse(p, psize, next);
4812
4813 if (is_small(psize)) {
4814 insert_small_chunk(fm, p, psize);
4815 check_free_chunk(fm, p);
4816 }
4817 else {
4818 tchunkptr tp = (tchunkptr)p;
4819 insert_large_chunk(fm, tp, psize);
4820 check_free_chunk(fm, p);
4821 if (--fm->release_checks == 0)
4822 release_unused_segments(fm);
4823 }
4824 goto postaction;
4825 }
4826 }
4827 erroraction:
4828 USAGE_ERROR_ACTION(fm, p);
4829 postaction:
4830 POSTACTION(fm);
4831 }
4832 }
4833 #if !FOOTERS
4834 #undef fm
4835 #endif /* FOOTERS */
4836 }
4837
4838 void* dlcalloc(size_t n_elements, size_t elem_size) {
4839 void* mem;
4840 size_t req = 0;
4841 if (n_elements != 0) {
4842 req = n_elements * elem_size;
4843 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4844 (req / n_elements != elem_size))
4845 req = MAX_SIZE_T; /* force downstream failure on overflow */
4846 }
4847 mem = dlmalloc(req);
4848 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4849 memset(mem, 0, req);
4850 return mem;
4851 }
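/*
  The overflow test above works in two stages: if both operands fit in
  16 bits, their product fits in 32 bits and cannot wrap a size_t of
  at least that width, so the division is skipped entirely. Otherwise
  the multiplication is verified by dividing back. For example, with a
  32-bit size_t, n_elements = 0x10000 and elem_size = 0x10001 wrap to
  req = 0x10000, and req / n_elements == 1 != elem_size, so req is
  forced to MAX_SIZE_T and the allocation fails downstream.
*/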
4852
4853 void* dlrealloc(void* oldmem, size_t bytes) {
4854 if (oldmem == 0)
4855 return dlmalloc(bytes);
4856 #ifdef REALLOC_ZERO_BYTES_FREES
4857 if (bytes == 0) {
4858 dlfree(oldmem);
4859 return 0;
4860 }
4861 #endif /* REALLOC_ZERO_BYTES_FREES */
4862 else {
4863 #if ! FOOTERS
4864 mstate m = gm;
4865 #else /* FOOTERS */
4866 mstate m = get_mstate_for(mem2chunk(oldmem));
4867 if (!ok_magic(m)) {
4868 USAGE_ERROR_ACTION(m, oldmem);
4869 return 0;
4870 }
4871 #endif /* FOOTERS */
4872 return internal_realloc(m, oldmem, bytes);
4873 }
4874 }
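/*
  Note: with REALLOC_ZERO_BYTES_FREES defined, dlrealloc(p, 0) frees p
  and returns 0. Without it, a zero-byte request falls through to
  internal_realloc and, like malloc(0), the chunk is kept at the
  minimum allocatable size.
*/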
4875
4876 void* dlmemalign(size_t alignment, size_t bytes) {
4877 return internal_memalign(gm, alignment, bytes);
4878 }
4879
4880 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
4881 void* chunks[]) {
4882 size_t sz = elem_size; /* serves as 1-element array */
4883 return ialloc(gm, n_elements, &sz, 3, chunks);
4884 }
4885
4886 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
4887 void* chunks[]) {
4888 return ialloc(gm, n_elements, sizes, 0, chunks);
4889 }
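/*
  A usage sketch for dlindependent_comalloc (hypothetical Head/Foot
  types and msglen; error handling abbreviated). The three chunks are
  carved from one allocation but may be freed independently:

    struct Head { int tag; };
    struct Foot { int checksum; };

    size_t msglen = 100;                 // hypothetical payload size
    size_t sizes[3];
    void*  chunks[3];
    sizes[0] = sizeof(struct Head);
    sizes[1] = msglen;
    sizes[2] = sizeof(struct Foot);
    if (dlindependent_comalloc(3, sizes, chunks) != 0) {
      struct Head* head = (struct Head*) chunks[0];
      char*        body = (char*)        chunks[1];
      struct Foot* foot = (struct Foot*) chunks[2];
      // ... use head, body, foot; each may be dlfree'd separately
    }
*/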
4890
4891 void* dlvalloc(size_t bytes) {
4892 size_t pagesz;
4893 ensure_initialization();
4894 pagesz = mparams.page_size;
4895 return dlmemalign(pagesz, bytes);
4896 }
4897
4898 void* dlpvalloc(size_t bytes) {
4899 size_t pagesz;
4900 ensure_initialization();
4901 pagesz = mparams.page_size;
4902 return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
4903 }
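/*
  The expression above rounds bytes up to a page multiple: with
  pagesz = 4096, bytes = 5000 gives (5000 + 4095) & ~4095 = 8192,
  while bytes = 4096 stays 4096. dlvalloc, by contrast, page-aligns
  the start of the block but allocates only the requested size.
*/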
4904
4905 int dlmalloc_trim(size_t pad) {
4906 int result = 0;
4907 ensure_initialization();
4908 if (!PREACTION(gm)) {
4909 result = sys_trim(gm, pad);
4910 POSTACTION(gm);
4911 }
4912 return result;
4913 }
4914
4915 size_t dlmalloc_footprint(void) {
4916 return gm->footprint;
4917 }
4918
4919 size_t dlmalloc_max_footprint(void) {
4920 return gm->max_footprint;
4921 }
4922
4923 #if !NO_MALLINFO
4924 struct mallinfo dlmallinfo(void) {
4925 return internal_mallinfo(gm);
4926 }
4927 #endif /* NO_MALLINFO */
4928
4929 void dlmalloc_stats() {
4930 internal_malloc_stats(gm);
4931 }
4932
4933 int dlmallopt(int param_number, int value) {
4934 return change_mparam(param_number, value);
4935 }
4936
4937 #endif /* !ONLY_MSPACES */
4938
4939 size_t dlmalloc_usable_size(void* mem) {
4940 if (mem != 0) {
4941 mchunkptr p = mem2chunk(mem);
4942 if (is_inuse(p))
4943 return chunksize(p) - overhead_for(p);
4944 }
4945 return 0;
4946 }
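/*
  Example, under the same assumptions as the dlmalloc note above
  (4-byte size_t, 8-byte alignment, no FOOTERS): dlmalloc(13) yields
  a 24-byte chunk carrying 4 bytes of overhead, so
  dlmalloc_usable_size reports 20; callers may use those extra bytes
  without reallocating.
*/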
4947
4948 /* ----------------------------- user mspaces ---------------------------- */
4949
4950 #if MSPACES
4951
4952 static mstate init_user_mstate(char* tbase, size_t tsize) {
4953 size_t msize = pad_request(sizeof(struct malloc_state));
4954 mchunkptr mn;
4955 mchunkptr msp = align_as_chunk(tbase);
4956 mstate m = (mstate)(chunk2mem(msp));
4957 memset(m, 0, msize);
4958 INITIAL_LOCK(&m->mutex);
4959 msp->head = (msize|INUSE_BITS);
4960 m->seg.base = m->least_addr = tbase;
4961 m->seg.size = m->footprint = m->max_footprint = tsize;
4962 m->magic = mparams.magic;
4963 m->release_checks = MAX_RELEASE_CHECK_RATE;
4964 m->mflags = mparams.default_mflags;
4965 m->extp = 0;
4966 m->exts = 0;
4967 disable_contiguous(m);
4968 init_bins(m);
4969 mn = next_chunk(mem2chunk(m));
4970 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
4971 check_top_chunk(m, m->top);
4972 return m;
4973 }
4974
4975 mspace create_mspace(size_t capacity, int locked) {
4976 mstate m = 0;
4977 size_t msize;
4978 ensure_initialization();
4979 msize = pad_request(sizeof(struct malloc_state));
4980 if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4981 size_t rs = ((capacity == 0)? mparams.granularity :
4982 (capacity + TOP_FOOT_SIZE + msize));
4983 size_t tsize = granularity_align(rs);
4984 char* tbase = (char*)(CALL_MMAP(tsize));
4985 if (tbase != CMFAIL) {
4986 m = init_user_mstate(tbase, tsize);
4987 m->seg.sflags = USE_MMAP_BIT;
4988 set_lock(m, locked);
4989 }
4990 }
4991 return (mspace)m;
4992 }
4993
4994 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
4995 mstate m = 0;
4996 size_t msize;
4997 ensure_initialization();
4998 msize = pad_request(sizeof(struct malloc_state));
4999 if (capacity > msize + TOP_FOOT_SIZE &&
5000 capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5001 m = init_user_mstate((char*)base, capacity);
5002 m->seg.sflags = EXTERN_BIT;
5003 set_lock(m, locked);
5004 }
5005 return (mspace)m;
5006 }
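/*
  Sketch of supplying your own arena (hypothetical size; the capacity
  checks above require room for the mstate plus TOP_FOOT_SIZE):

    static char arena[1 << 20];
    mspace msp = create_mspace_with_base(arena, sizeof(arena), 0);
    void*  p   = mspace_malloc(msp, 100);

  Since the segment carries EXTERN_BIT, destroy_mspace will not unmap
  it; the caller retains ownership of arena.
*/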
5007
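/*
  Note on the inverted-looking logic below: "tracking" large chunks
  means keeping them inside the mspace's own segments instead of
  direct-mmapping them, so enabling tracking disables mmap and vice
  versa. The return value is the previous tracking state (nonzero if
  mmap was already disabled).
*/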
5008 int mspace_track_large_chunks(mspace msp, int enable) {
5009 int ret = 0;
5010 mstate ms = (mstate)msp;
5011 if (!PREACTION(ms)) {
5012 if (!use_mmap(ms))
5013 ret = 1;
5014 if (!enable)
5015 enable_mmap(ms);
5016 else
5017 disable_mmap(ms);
5018 POSTACTION(ms);
5019 }
5020 return ret;
5021 }
5022
5023 size_t destroy_mspace(mspace msp) {
5024 size_t freed = 0;
5025 mstate ms = (mstate)msp;
5026 if (ok_magic(ms)) {
5027 msegmentptr sp = &ms->seg;
5028 while (sp != 0) {
5029 char* base = sp->base;
5030 size_t size = sp->size;
5031 flag_t flag = sp->sflags;
5032 sp = sp->next;
5033 if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) &&
5034 CALL_MUNMAP(base, size) == 0)
5035 freed += size;
5036 }
5037 }
5038 else {
5039 USAGE_ERROR_ACTION(ms,ms);
5040 }
5041 return freed;
5042 }
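/*
  A minimal mspace lifecycle, as a sketch (default initial capacity,
  no locking):

    mspace msp = create_mspace(0, 0);
    void*  p   = mspace_malloc(msp, 128);
    mspace_free(msp, p);                 // optional; destroy frees all
    size_t freed = destroy_mspace(msp);  // unmaps the mspace's segments
*/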
5043
5044 /*
5045 mspace versions of routines are near-clones of the global
5046 versions. This is not so nice but better than the alternatives.
5047 */
5048
5049
5050 void* mspace_malloc(mspace msp, size_t bytes) {
5051 mstate ms = (mstate)msp;
5052 if (!ok_magic(ms)) {
5053 USAGE_ERROR_ACTION(ms,ms);
5054 return 0;
5055 }
5056 if (!PREACTION(ms)) {
5057 void* mem;
5058 size_t nb;
5059 if (bytes <= MAX_SMALL_REQUEST) {
5060 bindex_t idx;
5061 binmap_t smallbits;
5062 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
5063 idx = small_index(nb);
5064 smallbits = ms->smallmap >> idx;
5065
5066 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
5067 mchunkptr b, p;
5068 idx += ~smallbits & 1; /* Uses next bin if idx empty */
5069 b = smallbin_at(ms, idx);
5070 p = b->fd;
5071 assert(chunksize(p) == small_index2size(idx));
5072 unlink_first_small_chunk(ms, b, p, idx);
5073 set_inuse_and_pinuse(ms, p, small_index2size(idx));
5074 mem = chunk2mem(p);
5075 check_malloced_chunk(ms, mem, nb);
5076 goto postaction;
5077 }
5078
5079 else if (nb > ms->dvsize) {
5080 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
5081 mchunkptr b, p, r;
5082 size_t rsize;
5083 bindex_t i;
5084 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
5085 binmap_t leastbit = least_bit(leftbits);
5086 compute_bit2idx(leastbit, i);
5087 b = smallbin_at(ms, i);
5088 p = b->fd;
5089 assert(chunksize(p) == small_index2size(i));
5090 unlink_first_small_chunk(ms, b, p, i);
5091 rsize = small_index2size(i) - nb;
5092 /* Fit here cannot be remainderless if 4byte sizes */
5093 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
5094 set_inuse_and_pinuse(ms, p, small_index2size(i));
5095 else {
5096 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5097 r = chunk_plus_offset(p, nb);
5098 set_size_and_pinuse_of_free_chunk(r, rsize);
5099 replace_dv(ms, r, rsize);
5100 }
5101 mem = chunk2mem(p);
5102 check_malloced_chunk(ms, mem, nb);
5103 goto postaction;
5104 }
5105
5106 else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
5107 check_malloced_chunk(ms, mem, nb);
5108 goto postaction;
5109 }
5110 }
5111 }
5112 else if (bytes >= MAX_REQUEST)
5113 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
5114 else {
5115 nb = pad_request(bytes);
5116 if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
5117 check_malloced_chunk(ms, mem, nb);
5118 goto postaction;
5119 }
5120 }
5121
5122 if (nb <= ms->dvsize) {
5123 size_t rsize = ms->dvsize - nb;
5124 mchunkptr p = ms->dv;
5125 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
5126 mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
5127 ms->dvsize = rsize;
5128 set_size_and_pinuse_of_free_chunk(r, rsize);
5129 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5130 }
5131 else { /* exhaust dv */
5132 size_t dvs = ms->dvsize;
5133 ms->dvsize = 0;
5134 ms->dv = 0;
5135 set_inuse_and_pinuse(ms, p, dvs);
5136 }
5137 mem = chunk2mem(p);
5138 check_malloced_chunk(ms, mem, nb);
5139 goto postaction;
5140 }
5141
5142 else if (nb < ms->topsize) { /* Split top */
5143 size_t rsize = ms->topsize -= nb;
5144 mchunkptr p = ms->top;
5145 mchunkptr r = ms->top = chunk_plus_offset(p, nb);
5146 r->head = rsize | PINUSE_BIT;
5147 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5148 mem = chunk2mem(p);
5149 check_top_chunk(ms, ms->top);
5150 check_malloced_chunk(ms, mem, nb);
5151 goto postaction;
5152 }
5153
5154 mem = sys_alloc(ms, nb);
5155
5156 postaction:
5157 POSTACTION(ms);
5158 return mem;
5159 }
5160
5161 return 0;
5162 }
5163
5164 void mspace_free(mspace msp, void* mem) {
5165 if (mem != 0) {
5166 mchunkptr p = mem2chunk(mem);
5167 #if FOOTERS
5168 mstate fm = get_mstate_for(p);
5169 msp = msp; /* placate people compiling -Wunused */
5170 #else /* FOOTERS */
5171 mstate fm = (mstate)msp;
5172 #endif /* FOOTERS */
5173 if (!ok_magic(fm)) {
5174 USAGE_ERROR_ACTION(fm, p);
5175 return;
5176 }
5177 if (!PREACTION(fm)) {
5178 check_inuse_chunk(fm, p);
5179 if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
5180 size_t psize = chunksize(p);
5181 mchunkptr next = chunk_plus_offset(p, psize);
5182 if (!pinuse(p)) {
5183 size_t prevsize = p->prev_foot;
5184 if (is_mmapped(p)) {
5185 psize += prevsize + MMAP_FOOT_PAD;
5186 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
5187 fm->footprint -= psize;
5188 goto postaction;
5189 }
5190 else {
5191 mchunkptr prev = chunk_minus_offset(p, prevsize);
5192 psize += prevsize;
5193 p = prev;
5194 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
5195 if (p != fm->dv) {
5196 unlink_chunk(fm, p, prevsize);
5197 }
5198 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
5199 fm->dvsize = psize;
5200 set_free_with_pinuse(p, psize, next);
5201 goto postaction;
5202 }
5203 }
5204 else
5205 goto erroraction;
5206 }
5207 }
5208
5209 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
5210 if (!cinuse(next)) { /* consolidate forward */
5211 if (next == fm->top) {
5212 size_t tsize = fm->topsize += psize;
5213 fm->top = p;
5214 p->head = tsize | PINUSE_BIT;
5215 if (p == fm->dv) {
5216 fm->dv = 0;
5217 fm->dvsize = 0;
5218 }
5219 if (should_trim(fm, tsize))
5220 sys_trim(fm, 0);
5221 goto postaction;
5222 }
5223 else if (next == fm->dv) {
5224 size_t dsize = fm->dvsize += psize;
5225 fm->dv = p;
5226 set_size_and_pinuse_of_free_chunk(p, dsize);
5227 goto postaction;
5228 }
5229 else {
5230 size_t nsize = chunksize(next);
5231 psize += nsize;
5232 unlink_chunk(fm, next, nsize);
5233 set_size_and_pinuse_of_free_chunk(p, psize);
5234 if (p == fm->dv) {
5235 fm->dvsize = psize;
5236 goto postaction;
5237 }
5238 }
5239 }
5240 else
5241 set_free_with_pinuse(p, psize, next);
5242
5243 if (is_small(psize)) {
5244 insert_small_chunk(fm, p, psize);
5245 check_free_chunk(fm, p);
5246 }
5247 else {
5248 tchunkptr tp = (tchunkptr)p;
5249 insert_large_chunk(fm, tp, psize);
5250 check_free_chunk(fm, p);
5251 if (--fm->release_checks == 0)
5252 release_unused_segments(fm);
5253 }
5254 goto postaction;
5255 }
5256 }
5257 erroraction:
5258 USAGE_ERROR_ACTION(fm, p);
5259 postaction:
5260 POSTACTION(fm);
5261 }
5262 }
5263 }
5264
5265 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
5266 void* mem;
5267 size_t req = 0;
5268 mstate ms = (mstate)msp;
5269 if (!ok_magic(ms)) {
5270 USAGE_ERROR_ACTION(ms,ms);
5271 return 0;
5272 }
5273 if (n_elements != 0) {
5274 req = n_elements * elem_size;
5275 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
5276 (req / n_elements != elem_size))
5277 req = MAX_SIZE_T; /* force downstream failure on overflow */
5278 }
5279 mem = internal_malloc(ms, req);
5280 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
5281 memset(mem, 0, req);
5282 return mem;
5283 }
5284
5285 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
5286 if (oldmem == 0)
5287 return mspace_malloc(msp, bytes);
5288 #ifdef REALLOC_ZERO_BYTES_FREES
5289 if (bytes == 0) {
5290 mspace_free(msp, oldmem);
5291 return 0;
5292 }
5293 #endif /* REALLOC_ZERO_BYTES_FREES */
5294 else {
5295 #if FOOTERS
5296 mchunkptr p = mem2chunk(oldmem);
5297 mstate ms = get_mstate_for(p);
5298 #else /* FOOTERS */
5299 mstate ms = (mstate)msp;
5300 #endif /* FOOTERS */
5301 if (!ok_magic(ms)) {
5302 USAGE_ERROR_ACTION(ms,ms);
5303 return 0;
5304 }
5305 return internal_realloc(ms, oldmem, bytes);
5306 }
5307 }
5308
5309 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
5310 mstate ms = (mstate)msp;
5311 if (!ok_magic(ms)) {
5312 USAGE_ERROR_ACTION(ms,ms);
5313 return 0;
5314 }
5315 return internal_memalign(ms, alignment, bytes);
5316 }
5317
5318 void** mspace_independent_calloc(mspace msp, size_t n_elements,
5319 size_t elem_size, void* chunks[]) {
5320 size_t sz = elem_size; /* serves as 1-element array */
5321 mstate ms = (mstate)msp;
5322 if (!ok_magic(ms)) {
5323 USAGE_ERROR_ACTION(ms,ms);
5324 return 0;
5325 }
5326 return ialloc(ms, n_elements, &sz, 3, chunks);
5327 }
5328
5329 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
5330 size_t sizes[], void* chunks[]) {
5331 mstate ms = (mstate)msp;
5332 if (!ok_magic(ms)) {
5333 USAGE_ERROR_ACTION(ms,ms);
5334 return 0;
5335 }
5336 return ialloc(ms, n_elements, sizes, 0, chunks);
5337 }
5338
5339 int mspace_trim(mspace msp, size_t pad) {
5340 int result = 0;
5341 mstate ms = (mstate)msp;
5342 if (ok_magic(ms)) {
5343 if (!PREACTION(ms)) {
5344 result = sys_trim(ms, pad);
5345 POSTACTION(ms);
5346 }
5347 }
5348 else {
5349 USAGE_ERROR_ACTION(ms,ms);
5350 }
5351 return result;
5352 }
5353
5354 void mspace_malloc_stats(mspace msp) {
5355 mstate ms = (mstate)msp;
5356 if (ok_magic(ms)) {
5357 internal_malloc_stats(ms);
5358 }
5359 else {
5360 USAGE_ERROR_ACTION(ms,ms);
5361 }
5362 }
5363
5364 size_t mspace_footprint(mspace msp) {
5365 size_t result = 0;
5366 mstate ms = (mstate)msp;
5367 if (ok_magic(ms)) {
5368 result = ms->footprint;
5369 }
5370 else {
5371 USAGE_ERROR_ACTION(ms,ms);
5372 }
5373 return result;
5374 }
5375
5376
5377 size_t mspace_max_footprint(mspace msp) {
5378 size_t result = 0;
5379 mstate ms = (mstate)msp;
5380 if (ok_magic(ms)) {
5381 result = ms->max_footprint;
5382 }
5383 else {
5384 USAGE_ERROR_ACTION(ms,ms);
5385 }
5386 return result;
5387 }
5388
5389
5390 #if !NO_MALLINFO
5391 struct mallinfo mspace_mallinfo(mspace msp) {
5392 mstate ms = (mstate)msp;
5393 if (!ok_magic(ms)) {
5394 USAGE_ERROR_ACTION(ms,ms);
5395 }
5396 return internal_mallinfo(ms);
5397 }
5398 #endif /* NO_MALLINFO */
5399
5400 size_t mspace_usable_size(void* mem) {
5401 if (mem != 0) {
5402 mchunkptr p = mem2chunk(mem);
5403 if (is_inuse(p))
5404 return chunksize(p) - overhead_for(p);
5405 }
5406 return 0;
5407 }
5408
5409 int mspace_mallopt(int param_number, int value) {
5410 return change_mparam(param_number, value);
5411 }
5412
5413 #endif /* MSPACES */
5414
5415
5416 /* -------------------- Alternative MORECORE functions ------------------- */
5417
5418 /*
5419 Guidelines for creating a custom version of MORECORE:
5420
5421 * For best performance, MORECORE should allocate in multiples of pagesize.
5422 * MORECORE may allocate more memory than requested. (Or even less,
5423 but this will usually result in a malloc failure.)
5424 * MORECORE must not allocate memory when given argument zero, but
5425 instead return one past the end address of memory from previous
5426 nonzero call.
5427 * For best performance, consecutive calls to MORECORE with positive
5428 arguments should return increasing addresses, indicating that
5429 space has been contiguously extended.
5430 * Even though consecutive calls to MORECORE need not return contiguous
5431 addresses, it must be OK for malloc'ed chunks to span multiple
5432 regions in those cases where they do happen to be contiguous.
5433 * MORECORE need not handle negative arguments -- it may instead
5434 just return MFAIL when given negative arguments.
5435 Negative arguments are always multiples of pagesize. MORECORE
5436 must not misinterpret negative args as large positive unsigned
5437 args. You can suppress all such calls from even occurring by defining
5438     MORECORE_CANNOT_TRIM.
5439
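  As a minimal conforming example, on Unix-like systems MORECORE can
  simply be sbrk (essentially the default when HAVE_MORECORE). A
  sketch, assuming <unistd.h> declares sbrk:

    void* myMoreCore(int size)
    {
      // sbrk(0) returns the current break, satisfying the
      // zero-argument contract above
      void* result = sbrk(size);
      // sbrk reports failure as (void*)-1; map it to MFAIL
      return (result == (void*)-1) ? MFAIL : result;
    }
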
5440 As an example alternative MORECORE, here is a custom allocator
5441 kindly contributed for pre-OSX macOS. It uses virtually but not
5442 necessarily physically contiguous non-paged memory (locked in,
5443 present and won't get swapped out). You can use it by uncommenting
5444 this section, adding some #includes, and setting up the appropriate
5445 defines above:
5446
5447 #define MORECORE osMoreCore
5448
5449 There is also a shutdown routine that should somehow be called for
5450 cleanup upon program exit.
5451
5452 #define MAX_POOL_ENTRIES 100
5453 #define MINIMUM_MORECORE_SIZE (64 * 1024U)
5454 static int next_os_pool;
5455 void *our_os_pools[MAX_POOL_ENTRIES];
5456
5457 void *osMoreCore(int size)
5458 {
5459 void *ptr = 0;
5460 static void *sbrk_top = 0;
5461
5462 if (size > 0)
5463 {
5464 if (size < MINIMUM_MORECORE_SIZE)
5465 size = MINIMUM_MORECORE_SIZE;
5466 if (CurrentExecutionLevel() == kTaskLevel)
5467 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
5468 if (ptr == 0)
5469 {
5470 return (void *) MFAIL;
5471 }
5472 // save ptrs so they can be freed during cleanup
5473 our_os_pools[next_os_pool] = ptr;
5474 next_os_pool++;
5475 ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
5476 sbrk_top = (char *) ptr + size;
5477 return ptr;
5478 }
5479 else if (size < 0)
5480 {
5481 // we don't currently support shrink behavior
5482 return (void *) MFAIL;
5483 }
5484 else
5485 {
5486 return sbrk_top;
5487 }
5488 }
5489
5490 // cleanup any allocated memory pools
5491 // called as last thing before shutting down driver
5492
5493 void osCleanupMem(void)
5494 {
5495 void **ptr;
5496
5497 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
5498 if (*ptr)
5499 {
5500 PoolDeallocate(*ptr);
5501 *ptr = 0;
5502 }
5503 }
5504
5505 */
5506
5507
5508 /* -----------------------------------------------------------------------
5509 History:
5510 V2.8.4 Wed May 27 09:56:23 2009 Doug Lea (dl at gee)
5511 * Use zeros instead of prev foot for is_mmapped
5512 * Add mspace_track_large_chunks; thanks to Jean Brouwers
5513 * Fix set_inuse in internal_realloc; thanks to Jean Brouwers
5514 * Fix insufficient sys_alloc padding when using 16byte alignment
5515 * Fix bad error check in mspace_footprint
5516 * Adaptations for ptmalloc; thanks to Wolfram Gloger.
5517 * Reentrant spin locks; thanks to Earl Chew and others
5518 * Win32 improvements; thanks to Niall Douglas and Earl Chew
5519 * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
5520 * Extension hook in malloc_state
5521 * Various small adjustments to reduce warnings on some compilers
5522 * Various configuration extensions/changes for more platforms. Thanks
5523 to all who contributed these.
5524
5525 V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
5526 * Add max_footprint functions
5527 * Ensure all appropriate literals are size_t
5528 * Fix conditional compilation problem for some #define settings
5529 * Avoid concatenating segments with the one provided
5530 in create_mspace_with_base
5531 * Rename some variables to avoid compiler shadowing warnings
5532 * Use explicit lock initialization.
5533 * Better handling of sbrk interference.
5534 * Simplify and fix segment insertion, trimming and mspace_destroy
5535 * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
5536 * Thanks especially to Dennis Flanagan for help on these.
5537
5538 V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
5539 * Fix memalign brace error.
5540
5541 V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
5542 * Fix improper #endif nesting in C++
5543 * Add explicit casts needed for C++
5544
5545 V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
5546 * Use trees for large bins
5547 * Support mspaces
5548 * Use segments to unify sbrk-based and mmap-based system allocation,
5549 removing need for emulation on most platforms without sbrk.
5550 * Default safety checks
5551 * Optional footer checks. Thanks to William Robertson for the idea.
5552 * Internal code refactoring
5553 * Incorporate suggestions and platform-specific changes.
5554 Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
5555 Aaron Bachmann, Emery Berger, and others.
5556 * Speed up non-fastbin processing enough to remove fastbins.
5557 * Remove useless cfree() to avoid conflicts with other apps.
5558 * Remove internal memcpy, memset. Compilers handle builtins better.
5559 * Remove some options that no one ever used and rename others.
5560
5561 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
5562 * Fix malloc_state bitmap array misdeclaration
5563
5564 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
5565 * Allow tuning of FIRST_SORTED_BIN_SIZE
5566 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
5567 * Better detection and support for non-contiguousness of MORECORE.
5568 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
5569       * Bypass most of malloc if no frees. Thanks to Emery Berger.
5570       * Fix freeing of old top non-contiguous chunk in sysmalloc.
5571 * Raised default trim and map thresholds to 256K.
5572 * Fix mmap-related #defines. Thanks to Lubos Lunak.
5573 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
5574 * Branch-free bin calculation
5575 * Default trim and mmap thresholds now 256K.
5576
5577 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
5578 * Introduce independent_comalloc and independent_calloc.
5579 Thanks to Michael Pachos for motivation and help.
5580 * Make optional .h file available
5581 * Allow > 2GB requests on 32bit systems.
5582 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
5583 Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
5584 and Anonymous.
5585 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
5586 helping test this.)
5587 * memalign: check alignment arg
5588 * realloc: don't try to shift chunks backwards, since this
5589 leads to more fragmentation in some programs and doesn't
5590 seem to help in any others.
5591 * Collect all cases in malloc requiring system memory into sysmalloc
5592 * Use mmap as backup to sbrk
5593 * Place all internal state in malloc_state
5594 * Introduce fastbins (although similar to 2.5.1)
5595 * Many minor tunings and cosmetic improvements
5596 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
5597 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
5598 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
5599 * Include errno.h to support default failure action.
5600
5601 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
5602 * return null for negative arguments
5603 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
5604 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
5605 (e.g. WIN32 platforms)
5606 * Cleanup header file inclusion for WIN32 platforms
5607 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
5608 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
5609 memory allocation routines
5610 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
5611 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
5612 usage of 'assert' in non-WIN32 code
5613 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
5614 avoid infinite loop
5615 * Always call 'fREe()' rather than 'free()'
5616
5617 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
5618 * Fixed ordering problem with boundary-stamping
5619
5620 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
5621 * Added pvalloc, as recommended by H.J. Liu
5622 * Added 64bit pointer support mainly from Wolfram Gloger
5623 * Added anonymously donated WIN32 sbrk emulation
5624 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
5625 * malloc_extend_top: fix mask error that caused wastage after
5626 foreign sbrks
5627 * Add linux mremap support code from HJ Liu
5628
5629 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
5630 * Integrated most documentation with the code.
5631 * Add support for mmap, with help from
5632 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5633 * Use last_remainder in more cases.
5634 * Pack bins using idea from colin@nyx10.cs.du.edu
5635       * Use ordered bins instead of best-fit threshold
5636 * Eliminate block-local decls to simplify tracing and debugging.
5637 * Support another case of realloc via move into top
5638       * Fix error occurring when initial sbrk_base not word-aligned.
5639 * Rely on page size for units instead of SBRK_UNIT to
5640 avoid surprises about sbrk alignment conventions.
5641 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
5642 (raymond@es.ele.tue.nl) for the suggestion.
5643 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
5644 * More precautions for cases where other routines call sbrk,
5645 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5646 * Added macros etc., allowing use in linux libc from
5647 H.J. Lu (hjl@gnu.ai.mit.edu)
5648 * Inverted this history list
5649
5650 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
5651 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
5652 * Removed all preallocation code since under current scheme
5653 the work required to undo bad preallocations exceeds
5654 the work saved in good cases for most test programs.
5655 * No longer use return list or unconsolidated bins since
5656 no scheme using them consistently outperforms those that don't
5657 given above changes.
5658 * Use best fit for very large chunks to prevent some worst-cases.
5659 * Added some support for debugging
5660
5661 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
5662 * Removed footers when chunks are in use. Thanks to
5663 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
5664
5665 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
5666 * Added malloc_trim, with help from Wolfram Gloger
5667 (wmglo@Dent.MED.Uni-Muenchen.DE).
5668
5669 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
5670
5671 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
5672 * realloc: try to expand in both directions
5673 * malloc: swap order of clean-bin strategy;
5674 * realloc: only conditionally expand backwards
5675 * Try not to scavenge used bins
5676 * Use bin counts as a guide to preallocation
5677 * Occasionally bin return list chunks in first scan
5678 * Add a few optimizations from colin@nyx10.cs.du.edu
5679
5680 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
5681 * faster bin computation & slightly different binning
5682 * merged all consolidations to one part of malloc proper
5683 (eliminating old malloc_find_space & malloc_clean_bin)
5684 * Scan 2 returns chunks (not just 1)
5685 * Propagate failure in realloc if malloc returns 0
5686 * Add stuff to allow compilation on non-ANSI compilers
5687 from kpv@research.att.com
5688
5689 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
5690 * removed potential for odd address access in prev_chunk
5691 * removed dependency on getpagesize.h
5692 * misc cosmetics and a bit more internal documentation
5693 * anticosmetics: mangled names in macros to evade debugger strangeness
5694 * tested on sparc, hp-700, dec-mips, rs6000
5695 with gcc & native cc (hp, dec only) allowing
5696 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
5697
5698 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
5699 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
5700 structure of old version, but most details differ.)
5701
5702 */
5703
5704