1 /*
2   This is a version (aka dlmalloc) of malloc/free/realloc written by
3   Doug Lea and released to the public domain, as explained at
4   http://creativecommons.org/licenses/publicdomain.  Send questions,
5   comments, complaints, performance data, etc to dl@cs.oswego.edu
6 
7 * Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)
8 
9    Note: There may be an updated version of this malloc obtainable at
10            ftp://gee.cs.oswego.edu/pub/misc/malloc.c
11          Check before installing!
12 
13 * Quickstart
14 
15   This library is all in one file to simplify the most common usage:
16   ftp it, compile it (-O3), and link it into another program. All of
17   the compile-time options default to reasonable values for use on
18   most platforms.  You might later want to step through various
19   compile-time and dynamic tuning options.
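
  As a concrete starting point (a sketch; the exact commands and
  flags vary by compiler and platform):

     cc -O3 -c malloc.c
     cc -O3 myprogram.c malloc.o -o myprogram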
20 
21   For convenience, an include file for code using this malloc is at:
22      ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
23   You don't really need this .h file unless you call functions not
24   defined in your system include files.  The .h file contains only the
25   excerpts from this file needed for using this malloc on ANSI C/C++
26   systems, so long as you haven't changed compile-time options about
27   naming and tuning parameters.  If you do, then you can create your
28   own malloc.h that does include all settings by cutting at the point
29   indicated below. Note that by default you may already be using a C
30   library containing a malloc that is based on some version of this
31   malloc (for example, in Linux). You might still want to use the one
32   in this file to customize settings or to avoid overheads associated
33   with library versions.
34 
35 * Vital statistics:
36 
37   Supported pointer/size_t representation:       4 or 8 bytes
38        size_t MUST be an unsigned type of the same width as
39        pointers. (If you are using an ancient system that declares
40        size_t as a signed type, or need it to be a different width
41        than pointers, you can use a previous release of this malloc
42        (e.g. 2.7.2) supporting these.)
43 
44   Alignment:                                     8 bytes (default)
45        This suffices for nearly all current machines and C compilers.
46        However, you can define MALLOC_ALIGNMENT to be wider than this
47        if necessary (up to 128 bytes), at the expense of using more space.
48 
49   Minimum overhead per allocated chunk:   4 or  8 bytes (if 4-byte sizes)
50                                           8 or 16 bytes (if 8-byte sizes)
51        Each malloced chunk has a hidden word of overhead holding size
52        and status information, and additional cross-check word
53        if FOOTERS is defined.
54 
55   Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
56                           8-byte ptrs:  32 bytes    (including overhead)
57 
58        Even a request for zero bytes (i.e., malloc(0)) returns a
59        pointer to something of the minimum allocatable size.
60        The maximum overhead wastage (i.e., the number of extra bytes
61        allocated beyond those requested in malloc) is less than or equal
62        to the minimum size, except for requests >= mmap_threshold that
63        are serviced via mmap(), where the worst case wastage is about
64        32 bytes plus the remainder from a system page (the minimal
65        mmap unit); typically 4096 or 8192 bytes.
66 
67   Security: static-safe; optionally more or less
68        The "security" of malloc refers to the ability of malicious
69        code to accentuate the effects of errors (for example, freeing
70        space that is not currently malloc'ed or overwriting past the
71        ends of chunks) in code that calls malloc.  This malloc
72        guarantees not to modify any memory locations below the base of
73        heap, i.e., static variables, even in the presence of usage
74        errors.  The routines additionally detect most improper frees
75        and reallocs.  All this holds as long as the static bookkeeping
76        for malloc itself is not corrupted by some other means.  This
77        is only one aspect of security -- these checks do not, and
78        cannot, detect all possible programming errors.
79 
80        If FOOTERS is defined nonzero, then each allocated chunk
81        carries an additional check word to verify that it was malloced
82        from its space.  These check words are the same within each
83        execution of a program using malloc, but differ across
84        executions, so externally crafted fake chunks cannot be
85        freed. This improves security by rejecting frees/reallocs that
86        could corrupt heap memory, in addition to the always-on checks
87        that prevent writes to statics.  This may further improve
88        security at the expense of time and space overhead.  (Note that
89        FOOTERS may also be worth using with MSPACES.)
90 
91        By default detected errors cause the program to abort (calling
92        "abort()"). You can override this to instead proceed past
93        errors by defining PROCEED_ON_ERROR.  In this case, a bad free
94        has no effect, and a malloc that encounters a bad address
95        caused by user overwrites will ignore the bad address by
96        dropping pointers and indices to all known memory. This may
97        be appropriate for programs that should continue if at all
98        possible in the face of programming errors, although they may
99        run out of memory because dropped memory is never reclaimed.
100 
101        If you don't like either of these options, you can define
102        CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
103        else. And if you are sure that your program using malloc has
104        no errors or vulnerabilities, you can define INSECURE to 1,
105        which might (or might not) provide a small performance improvement.
106 
107   Thread-safety: NOT thread-safe unless USE_LOCKS defined
108        When USE_LOCKS is defined, each public call to malloc, free,
109        etc is surrounded with either a pthread mutex or a win32
110        spinlock (depending on WIN32). This is not especially fast, and
111        can be a major bottleneck.  It is designed only to provide
112        minimal protection in concurrent environments, and to provide a
113        basis for extensions.  If you are using malloc in a concurrent
114        program, consider instead using ptmalloc, which is derived from
115        a version of this malloc. (See http://www.malloc.de).
116 
117   System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
118        This malloc can use unix sbrk or any emulation (invoked using
119        the CALL_MORECORE macro) and/or mmap/munmap or any emulation
120        (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
121        memory.  On most unix systems, it tends to work best if both
122        MORECORE and MMAP are enabled.  On Win32, it uses emulations
123        based on VirtualAlloc. It also uses common C library functions
124        like memset.
125 
126   Compliance: I believe it is compliant with the Single Unix Specification
127        (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
128        others as well.
129 
130 * Overview of algorithms
131 
132   This is not the fastest, most space-conserving, most portable, or
133   most tunable malloc ever written. However it is among the fastest
134   while also being among the most space-conserving, portable and
135   tunable.  Consistent balance across these factors results in a good
136   general-purpose allocator for malloc-intensive programs.
137 
138   In most ways, this malloc is a best-fit allocator. Generally, it
139   chooses the best-fitting existing chunk for a request, with ties
140   broken in approximately least-recently-used order. (This strategy
141   normally maintains low fragmentation.) However, for requests less
142   than 256 bytes, it deviates from best-fit when there is not an
143   exactly fitting available chunk by preferring to use space adjacent
144   to that used for the previous small request, as well as by breaking
145   ties in approximately most-recently-used order. (These enhance
146   locality of series of small allocations.)  And for very large requests
147   (>= 256Kb by default), it relies on system memory mapping
148   facilities, if supported.  (This helps avoid carrying around and
149   possibly fragmenting memory used only for large chunks.)
150 
151   All operations (except malloc_stats and mallinfo) have execution
152   times that are bounded by a constant factor of the number of bits in
153   a size_t, not counting any clearing in calloc or copying in realloc,
154   or actions surrounding MORECORE and MMAP that have times
155   proportional to the number of non-contiguous regions returned by
156   system allocation routines, which is often just 1.
157 
158   The implementation is not very modular and seriously overuses
159   macros. Perhaps someday all C compilers will do as good a job
160   inlining modular code as can now be done by brute-force expansion,
161   but now, enough of them seem not to.
162 
163   Some compilers issue a lot of warnings about code that is
164   dead/unreachable only on some platforms, and also about intentional
165   uses of negation on unsigned types. All known cases of each can be
166   ignored.
167 
168   For a longer but out of date high-level description, see
169      http://gee.cs.oswego.edu/dl/html/malloc.html
170 
171 * MSPACES
172   If MSPACES is defined, then in addition to malloc, free, etc.,
173   this file also defines mspace_malloc, mspace_free, etc. These
174   are versions of malloc routines that take an "mspace" argument
175   obtained using create_mspace, to control all internal bookkeeping.
176   If ONLY_MSPACES is defined, only these versions are compiled.
177   So if you would like to use this allocator for only some allocations,
178   and your system malloc for others, you can compile with
179   ONLY_MSPACES and then do something like...
180     static mspace mymspace = create_mspace(0,0); // for example
181     #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
182 
183   (Note: If you only need one instance of an mspace, you can instead
184   use "USE_DL_PREFIX" to relabel the global malloc.)
185 
186   You can similarly create thread-local allocators by storing
187   mspaces as thread-locals. For example:
188     static __thread mspace tlms = 0;
189     void*  tlmalloc(size_t bytes) {
190       if (tlms == 0) tlms = create_mspace(0, 0);
191       return mspace_malloc(tlms, bytes);
192     }
193     void  tlfree(void* mem) { mspace_free(tlms, mem); }
194 
195   Unless FOOTERS is defined, each mspace is completely independent.
196   You cannot allocate from one and free to another (although
197   conformance is only weakly checked, so usage errors are not always
198   caught). If FOOTERS is defined, then each chunk carries around a tag
199   indicating its originating mspace, and frees are directed to their
200   originating spaces.
201 
202  -------------------------  Compile-time options ---------------------------
203 
204 Be careful in setting #define values for numerical constants of type
205 size_t. On some systems, literal values are not automatically extended
206 to size_t precision unless they are explicitly cast.
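
For example (MY_LIMIT is a hypothetical option name), write

  #define MY_LIMIT ((size_t)16U * (size_t)1024U * (size_t)1024U)  // 16MB

rather than a plain (16 * 1024 * 1024), which on most systems is an int.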
207 
208 WIN32                    default: defined if _WIN32 defined
209   Defining WIN32 sets up defaults for MS environment and compilers.
210   Otherwise defaults are for unix.
211 
212 MALLOC_ALIGNMENT         default: (size_t)8
213   Controls the minimum alignment for malloc'ed chunks.  It must be a
214   power of two and at least 8, even on machines for which smaller
215   alignments would suffice. It may be defined as larger than this
216   though. Note however that code and data structures are optimized for
217   the case of 8-byte alignment.
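
  For instance, to build with a 16-byte minimum alignment (a sketch;
  any power of two from 8 up to 128 is accepted):
    #define MALLOC_ALIGNMENT ((size_t)16U)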
218 
219 MSPACES                  default: 0 (false)
220   If true, compile in support for independent allocation spaces.
221   This is only supported if HAVE_MMAP is true.
222 
223 ONLY_MSPACES             default: 0 (false)
224   If true, only compile in mspace versions, not regular versions.
225 
226 USE_LOCKS                default: 0 (false)
227   Causes each call to each public routine to be surrounded with
228   pthread or WIN32 mutex lock/unlock. (If set true, this can be
229   overridden on a per-mspace basis for mspace versions.)
230 
231 FOOTERS                  default: 0
232   If true, provide extra checking and dispatching by placing
233   information in the footers of allocated chunks. This adds
234   space and time overhead.
235 
236 INSECURE                 default: 0
237   If true, omit checks for usage errors and heap space overwrites.
238 
239 USE_DL_PREFIX            default: NOT defined
240   Causes compiler to prefix all public routines with the string 'dl'.
241   This can be useful when you only want to use this malloc in one part
242   of a program, using your regular system malloc elsewhere.
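
  For example (a sketch; the my_* wrapper names are hypothetical):
    // compiled with -DUSE_DL_PREFIX
    void* my_alloc(size_t n) { return dlmalloc(n); }
    void  my_free(void* p)   { dlfree(p); }
  Plain malloc/free elsewhere still resolve to the system versions.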
243 
244 ABORT                    default: defined as abort()
245   Defines how to abort on failed checks.  On most systems, a failed
246   check cannot die with an "assert" or even print an informative
247   message, because the underlying print routines in turn call malloc,
248   which will fail again.  Generally, the best policy is to simply call
249   abort(). It's not very useful to do more than this because many
250   errors due to overwriting will show up as address faults (null, odd
251   addresses etc) rather than malloc-triggered checks, so will also
252   abort.  Also, most compilers know that abort() does not return, so
253   can better optimize code conditionally calling it.
254 
255 PROCEED_ON_ERROR           default: defined as 0 (false)
256   Controls whether detected bad addresses are bypassed rather than
257   causing an abort. If set, detected bad arguments to free and
258   realloc are ignored. And all bookkeeping information is zeroed out
259   upon a detected overwrite of freed heap space, thus losing the
260   ability to ever return it from malloc again, but enabling the
261   application to proceed. If PROCEED_ON_ERROR is defined, the
262   static variable malloc_corruption_error_count is compiled in
263   and can be examined to see if errors have occurred. This option
264   generates slower code than the default abort policy.
265 
266 DEBUG                    default: NOT defined
267   The DEBUG setting is mainly intended for people trying to modify
268   this code or diagnose problems when porting to new platforms.
269   However, it may also be able to better isolate user errors than just
270   using runtime checks.  The assertions in the check routines spell
271   out in more detail the assumptions and invariants underlying the
272   algorithms.  The checking is fairly extensive, and will slow down
273   execution noticeably. Calling malloc_stats or mallinfo with DEBUG
274   set will attempt to check every non-mmapped allocated and free chunk
275   in the course of computing the summaries.
276 
277 ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
278   Debugging assertion failures can be nearly impossible if your
279   version of the assert macro causes malloc to be called, which will
280   lead to a cascade of further failures, blowing the runtime stack.
281   ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
282   which will usually make debugging easier.
283 
284 MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
285   The action to take before "return 0" when malloc fails because no
286   memory is available.
287 
288 HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
289   True if this system supports sbrk or an emulation of it.
290 
291 MORECORE                  default: sbrk
292   The name of the sbrk-style system routine to call to obtain more
293   memory.  See below for guidance on writing custom MORECORE
294   functions. The type of the argument to sbrk/MORECORE varies across
295   systems.  It cannot be size_t, because it supports negative
296   arguments, so it is normally the signed type of the same width as
297   size_t (sometimes declared as "intptr_t").  It doesn't much matter
298   though. Internally, we only call it with arguments less than half
299   the max value of a size_t, which should work across all reasonable
300   possibilities, although sometimes generating compiler warnings.  See
301   near the end of this file for guidelines for creating a custom
302   version of MORECORE.
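
  As a minimal sketch of the idea (MY_HEAP_SIZE and my_heap are
  hypothetical; see the end-of-file guidelines for the full
  requirements), a grow-only MORECORE over a static arena might be:

    static char my_heap[MY_HEAP_SIZE];
    static char* my_brk = my_heap;
    void* my_morecore(intptr_t increment) {
      if (increment >= 0 &&
          (size_t)(my_brk - my_heap) + increment <= MY_HEAP_SIZE) {
        char* old = my_brk;   // return the previous break, like sbrk
        my_brk += increment;
        return old;
      }
      return MFAIL;           // cannot trim or extend further
    }

  Such a version should be used with MORECORE_CANNOT_TRIM defined,
  since it rejects negative arguments.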
303 
304 MORECORE_CONTIGUOUS       default: 1 (true)
305   If true, take advantage of fact that consecutive calls to MORECORE
306   with positive arguments always return contiguous increasing
307   addresses.  This is true of unix sbrk. It does not hurt too much to
308   set it true anyway, since malloc copes with non-contiguities.
309   Setting it false when MORECORE is definitely non-contiguous, though,
310   saves the time and possibly wasted space needed to discover this.
311 
312 MORECORE_CANNOT_TRIM      default: NOT defined
313   True if MORECORE cannot release space back to the system when given
314   negative arguments. This is generally necessary only if you are
315   using a hand-crafted MORECORE function that cannot handle negative
316   arguments.
317 
318 HAVE_MMAP                 default: 1 (true)
319   True if this system supports mmap or an emulation of it.  If so, and
320   HAVE_MORECORE is not true, MMAP is used for all system
321   allocation. If set and HAVE_MORECORE is true as well, MMAP is
322   primarily used to directly allocate very large blocks. It is also
323   used as a backup strategy in cases where MORECORE fails to provide
324   space from system. Note: A single call to MUNMAP is assumed to be
325   able to unmap memory that may have been allocated using multiple calls
326   to MMAP, so long as they are adjacent.
327 
328 HAVE_MREMAP               default: 1 on linux, else 0
329   If true realloc() uses mremap() to re-allocate large blocks and
330   extend or shrink allocation spaces.
331 
332 MMAP_CLEARS               default: 1 on unix
333   True if mmap clears memory so calloc doesn't need to. This is true
334   for standard unix mmap using /dev/zero.
335 
336 USE_BUILTIN_FFS            default: 0 (i.e., not used)
337   Causes malloc to use the builtin ffs() function to compute indices.
338   Some compilers may recognize and intrinsify ffs to be faster than the
339   supplied C version. Also, the case of x86 using gcc is special-cased
340   to an asm instruction, so is already as fast as it can be, and so
341   this setting has no effect. (On most x86s, the asm version is only
342   slightly faster than the C version.)
343 
344 malloc_getpagesize         default: derive from system includes, or 4096.
345   The system page size. To the extent possible, this malloc manages
346   memory from the system in page-size units.  This may be (and
347   usually is) a function rather than a constant. This is ignored
348   if WIN32, where page size is determined using GetSystemInfo during
349   initialization.
350 
351 USE_DEV_RANDOM             default: 0 (i.e., not used)
352   Causes malloc to use /dev/random to initialize secure magic seed for
353   stamping footers. Otherwise, the current time is used.
354 
355 NO_MALLINFO                default: 0
356   If defined, don't compile "mallinfo". This can be a simple way
357   of dealing with mismatches between system declarations and
358   those in this file.
359 
360 MALLINFO_FIELD_TYPE        default: size_t
361   The type of the fields in the mallinfo struct. This was originally
362   defined as "int" in SVID etc, but is more usefully defined as
363   size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
364 
365 REALLOC_ZERO_BYTES_FREES    default: not defined
366   This should be set if a call to realloc with zero bytes should
367   be the same as a call to free. Some people think it should. Otherwise,
368   since this malloc returns a unique pointer for malloc(0),
369   realloc(p, 0) does too.
370 
371 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
372 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
373 LACKS_STDLIB_H                default: NOT defined unless on WIN32
374   Define these if your system does not have these header files.
375   You might need to manually insert some of the declarations they provide.
376 
377 DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
378                                 system_info.dwAllocationGranularity in WIN32,
379                                 otherwise 64K.
380       Also settable using mallopt(M_GRANULARITY, x)
381   The unit for allocating and deallocating memory from the system.  On
382   most systems with contiguous MORECORE, there is no reason to
383   make this more than a page. However, systems with MMAP tend to
384   either require or encourage larger granularities.  You can increase
385   this value to keep system allocation functions from being called so
386   often, especially if they are slow.  The value must be at least one
387   page and must be a power of two.  Setting to 0 causes initialization
388   to either page size or win32 region size.  (Note: In previous
389   versions of malloc, the equivalent of this option was called
390   "TOP_PAD")
391 
392 DEFAULT_TRIM_THRESHOLD    default: 2MB
393       Also settable using mallopt(M_TRIM_THRESHOLD, x)
394   The maximum amount of unused top-most memory to keep before
395   releasing via malloc_trim in free().  Automatic trimming is mainly
396   useful in long-lived programs using contiguous MORECORE.  Because
397   trimming via sbrk can be slow on some systems, and can sometimes be
398   wasteful (in cases where programs immediately afterward allocate
399   more large chunks) the value should be high enough so that your
400   overall system performance would improve by releasing this much
401   memory.  As a rough guide, you might set to a value close to the
402   average size of a process (program) running on your system.
403   Releasing this much memory would allow such a process to run in
404   memory.  Generally, it is worth tuning trim thresholds when a
405   program undergoes phases where several large chunks are allocated
406   and released in ways that can reuse each other's storage, perhaps
407   mixed with phases where there are no such chunks at all. The trim
408   value must be greater than page size to have any useful effect.  To
409   disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
410   some people use of mallocing a huge space and then freeing it at
411   program startup, in an attempt to reserve system memory, doesn't
412   have the intended effect under automatic trimming, since that memory
413   will immediately be returned to the system.
414 
415 DEFAULT_MMAP_THRESHOLD       default: 256K
416       Also settable using mallopt(M_MMAP_THRESHOLD, x)
417   The request size threshold for using MMAP to directly service a
418   request. Requests of at least this size that cannot be allocated
419   using already-existing space will be serviced via mmap.  (If enough
420   normal freed space already exists it is used instead.)  Using mmap
421   segregates relatively large chunks of memory so that they can be
422   individually obtained and released from the host system. A request
423   serviced through mmap is never reused by any other request (at least
424   not directly; the system may just so happen to remap successive
425   requests to the same locations).  Segregating space in this way has
426   the benefits that: Mmapped space can always be individually released
427   back to the system, which helps keep the system level memory demands
428   of a long-lived program low.  Also, mapped memory doesn't become
429   `locked' between other chunks, as can happen with normally allocated
430   chunks, which means that even trimming via malloc_trim would not
431   release them.  However, it has the disadvantage that the space
432   cannot be reclaimed, consolidated, and then used to service later
433   requests, as happens with normal chunks.  The advantages of mmap
434   nearly always outweigh disadvantages for "large" chunks, but the
435   value of "large" may vary across systems.  The default is an
436   empirically derived value that works well in most systems. You can
437   disable mmap by setting to MAX_SIZE_T.
438 
439 */
440 
441 #ifndef WIN32
442 #ifdef _WIN32
443 #define WIN32 1
444 #endif  /* _WIN32 */
445 #endif  /* WIN32 */
446 #ifdef WIN32
447 #define WIN32_LEAN_AND_MEAN
448 #include <windows.h>
449 #define HAVE_MMAP 1
450 #define HAVE_MORECORE 0
451 #define LACKS_UNISTD_H
452 #define LACKS_SYS_PARAM_H
453 #define LACKS_SYS_MMAN_H
454 #define LACKS_STRING_H
455 #define LACKS_STRINGS_H
456 #define LACKS_SYS_TYPES_H
457 #define LACKS_ERRNO_H
458 #define MALLOC_FAILURE_ACTION
459 #define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
460 #endif  /* WIN32 */
461 
462 #if defined(DARWIN) || defined(_DARWIN)
463 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
464 #ifndef HAVE_MORECORE
465 #define HAVE_MORECORE 0
466 #define HAVE_MMAP 1
467 #endif  /* HAVE_MORECORE */
468 #endif  /* DARWIN */
469 
470 #ifndef LACKS_SYS_TYPES_H
471 #include <sys/types.h>  /* For size_t */
472 #endif  /* LACKS_SYS_TYPES_H */
473 
474 /* The maximum possible size_t value has all bits set */
475 #define MAX_SIZE_T           (~(size_t)0)
476 
477 #ifndef ONLY_MSPACES
478 #define ONLY_MSPACES 0
479 #endif  /* ONLY_MSPACES */
480 #ifndef MSPACES
481 #if ONLY_MSPACES
482 #define MSPACES 1
483 #else   /* ONLY_MSPACES */
484 #define MSPACES 0
485 #endif  /* ONLY_MSPACES */
486 #endif  /* MSPACES */
487 #ifndef MALLOC_ALIGNMENT
488 #define MALLOC_ALIGNMENT ((size_t)8U)
489 #endif  /* MALLOC_ALIGNMENT */
490 #ifndef FOOTERS
491 #define FOOTERS 0
492 #endif  /* FOOTERS */
493 #ifndef ABORT
494 #define ABORT  abort()
495 #endif  /* ABORT */
496 #ifndef ABORT_ON_ASSERT_FAILURE
497 #define ABORT_ON_ASSERT_FAILURE 1
498 #endif  /* ABORT_ON_ASSERT_FAILURE */
499 #ifndef PROCEED_ON_ERROR
500 #define PROCEED_ON_ERROR 0
501 #endif  /* PROCEED_ON_ERROR */
502 #ifndef USE_LOCKS
503 #define USE_LOCKS 0
504 #endif  /* USE_LOCKS */
505 #ifndef INSECURE
506 #define INSECURE 0
507 #endif  /* INSECURE */
508 #ifndef HAVE_MMAP
509 #define HAVE_MMAP 1
510 #endif  /* HAVE_MMAP */
511 #ifndef MMAP_CLEARS
512 #define MMAP_CLEARS 1
513 #endif  /* MMAP_CLEARS */
514 #ifndef HAVE_MREMAP
515 #ifdef linux
516 #define HAVE_MREMAP 1
517 #else   /* linux */
518 #define HAVE_MREMAP 0
519 #endif  /* linux */
520 #endif  /* HAVE_MREMAP */
521 #ifndef MALLOC_FAILURE_ACTION
522 #define MALLOC_FAILURE_ACTION  errno = ENOMEM;
523 #endif  /* MALLOC_FAILURE_ACTION */
524 #ifndef HAVE_MORECORE
525 #if ONLY_MSPACES
526 #define HAVE_MORECORE 0
527 #else   /* ONLY_MSPACES */
528 #define HAVE_MORECORE 1
529 #endif  /* ONLY_MSPACES */
530 #endif  /* HAVE_MORECORE */
531 #if !HAVE_MORECORE
532 #define MORECORE_CONTIGUOUS 0
533 #else   /* !HAVE_MORECORE */
534 #ifndef MORECORE
535 #define MORECORE sbrk
536 #endif  /* MORECORE */
537 #ifndef MORECORE_CONTIGUOUS
538 #define MORECORE_CONTIGUOUS 1
539 #endif  /* MORECORE_CONTIGUOUS */
540 #endif  /* HAVE_MORECORE */
541 #ifndef DEFAULT_GRANULARITY
542 #if MORECORE_CONTIGUOUS
543 #define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
544 #else   /* MORECORE_CONTIGUOUS */
545 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
546 #endif  /* MORECORE_CONTIGUOUS */
547 #endif  /* DEFAULT_GRANULARITY */
548 #ifndef DEFAULT_TRIM_THRESHOLD
549 #ifndef MORECORE_CANNOT_TRIM
550 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
551 #else   /* MORECORE_CANNOT_TRIM */
552 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
553 #endif  /* MORECORE_CANNOT_TRIM */
554 #endif  /* DEFAULT_TRIM_THRESHOLD */
555 #ifndef DEFAULT_MMAP_THRESHOLD
556 #if HAVE_MMAP
557 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
558 #else   /* HAVE_MMAP */
559 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
560 #endif  /* HAVE_MMAP */
561 #endif  /* DEFAULT_MMAP_THRESHOLD */
562 #ifndef USE_BUILTIN_FFS
563 #define USE_BUILTIN_FFS 0
564 #endif  /* USE_BUILTIN_FFS */
565 #ifndef USE_DEV_RANDOM
566 #define USE_DEV_RANDOM 0
567 #endif  /* USE_DEV_RANDOM */
568 #ifndef NO_MALLINFO
569 #define NO_MALLINFO 0
570 #endif  /* NO_MALLINFO */
571 #ifndef MALLINFO_FIELD_TYPE
572 #define MALLINFO_FIELD_TYPE size_t
573 #endif  /* MALLINFO_FIELD_TYPE */
574 
575 /*
576   mallopt tuning options.  SVID/XPG defines four standard parameter
577   numbers for mallopt, normally defined in malloc.h.  None of these
578   are used in this malloc, so setting them has no effect. But this
579   malloc does support the following options.
580 */
581 
582 #define M_TRIM_THRESHOLD     (-1)
583 #define M_GRANULARITY        (-2)
584 #define M_MMAP_THRESHOLD     (-3)
585 
586 /* ------------------------ Mallinfo declarations ------------------------ */
587 
588 #if !NO_MALLINFO
589 /*
590   This version of malloc supports the standard SVID/XPG mallinfo
591   routine that returns a struct containing usage properties and
592   statistics. It should work on any system that has a
593   /usr/include/malloc.h defining struct mallinfo.  The main
594   declaration needed is the mallinfo struct that is returned (by-copy)
595   by mallinfo().  The mallinfo struct contains a bunch of fields that
596   are not even meaningful in this version of malloc.  These fields
597   are instead filled by mallinfo() with other numbers that might be of
598   interest.
599 
600   HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
601   /usr/include/malloc.h file that includes a declaration of struct
602   mallinfo.  If so, it is included; else a compliant version is
603   declared below.  These must be precisely the same for mallinfo() to
604   work.  The original SVID version of this struct, defined on most
605   systems with mallinfo, declares all fields as ints. But some others
606   define them as unsigned long. If your system defines the fields using a
607   type of different width than listed here, you MUST #include your
608   system version and #define HAVE_USR_INCLUDE_MALLOC_H.
609 */
610 
611 /* #define HAVE_USR_INCLUDE_MALLOC_H */
612 
613 #ifdef HAVE_USR_INCLUDE_MALLOC_H
614 #include "/usr/include/malloc.h"
615 #else /* HAVE_USR_INCLUDE_MALLOC_H */
616 
617 struct mallinfo {
618   MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
619   MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
620   MALLINFO_FIELD_TYPE smblks;   /* always 0 */
621   MALLINFO_FIELD_TYPE hblks;    /* always 0 */
622   MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
623   MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
624   MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
625   MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
626   MALLINFO_FIELD_TYPE fordblks; /* total free space */
627   MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
628 };
629 
630 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
631 #endif /* NO_MALLINFO */
632 
633 #ifdef __cplusplus
634 extern "C" {
635 #endif /* __cplusplus */
636 
637 #if !ONLY_MSPACES
638 
639 /* ------------------- Declarations of public routines ------------------- */
640 
641 #ifndef USE_DL_PREFIX
642 #define dlcalloc               calloc
643 #define dlfree                 free
644 #define dlmalloc               malloc
645 #define dlmemalign             memalign
646 #define dlrealloc              realloc
647 #define dlvalloc               valloc
648 #define dlpvalloc              pvalloc
649 #define dlmallinfo             mallinfo
650 #define dlmallopt              mallopt
651 #define dlmalloc_trim          malloc_trim
652 #define dlmalloc_stats         malloc_stats
653 #define dlmalloc_usable_size   malloc_usable_size
654 #define dlmalloc_footprint     malloc_footprint
655 #define dlmalloc_max_footprint malloc_max_footprint
656 #define dlindependent_calloc   independent_calloc
657 #define dlindependent_comalloc independent_comalloc
658 #endif /* USE_DL_PREFIX */
659 
660 
661 /*
662   malloc(size_t n)
663   Returns a pointer to a newly allocated chunk of at least n bytes, or
664   null if no space is available, in which case errno is set to ENOMEM
665   on ANSI C systems.
666 
667   If n is zero, malloc returns a minimum-sized chunk. (The minimum
668   size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
669   systems.)  Note that size_t is an unsigned type, so calls with
670   arguments that would be negative if signed are interpreted as
671   requests for huge amounts of space, which will often fail. The
672   maximum supported value of n differs across systems, but is in all
673   cases less than the maximum representable value of a size_t.
674 */
675 void* dlmalloc(size_t);
676 
677 /*
678   free(void* p)
679   Releases the chunk of memory pointed to by p, that had been previously
680   allocated using malloc or a related routine such as realloc.
681   It has no effect if p is null. If p was not malloced or already
682   freed, free(p) will by default cause the current program to abort.
683 */
684 void  dlfree(void*);
685 
686 /*
687   calloc(size_t n_elements, size_t element_size);
688   Returns a pointer to n_elements * element_size bytes, with all locations
689   set to zero.
690 */
691 void* dlcalloc(size_t, size_t);
692 
693 /*
694   realloc(void* p, size_t n)
695   Returns a pointer to a chunk of size n that contains the same data
696   as does chunk p up to the minimum of (n, p's size) bytes, or null
697   if no space is available.
698 
699   The returned pointer may or may not be the same as p. The algorithm
700   prefers extending p in most cases when possible, otherwise it
701   employs the equivalent of a malloc-copy-free sequence.
702 
703   If p is null, realloc is equivalent to malloc.
704 
705   If space is not available, realloc returns null, errno is set (if on
706   ANSI) and p is NOT freed.
707 
708   if n is for fewer bytes than already held by p, the newly unused
709   space is lopped off and freed if possible.  realloc with a size
710   argument of zero (re)allocates a minimum-sized chunk.
711 
712   The old unix realloc convention of allowing the last-free'd chunk
713   to be used as an argument to realloc is not supported.
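
  Because of the null-on-failure behavior, the usual safe-growth
  idiom (generic C, not specific to this malloc) keeps the old
  pointer until success is known:

    void* bigger = realloc(p, newsize);
    if (bigger != 0)
      p = bigger;
    else {
      // p is still valid here and must eventually be freed
    }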
714 */
715 
716 void* dlrealloc(void*, size_t);
717 
718 /*
719   memalign(size_t alignment, size_t n);
720   Returns a pointer to a newly allocated chunk of n bytes, aligned
721   in accord with the alignment argument.
722 
723   The alignment argument should be a power of two. If the argument is
724   not a power of two, the nearest greater power is used.
725   8-byte alignment is guaranteed by normal malloc calls, so don't
726   bother calling memalign with an argument of 8 or less.
727 
728   Overreliance on memalign is a sure way to fragment space.
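
  A usage sketch (64 is an arbitrary example alignment):
    void* p = memalign(64, n);   // p is 64-byte aligned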
729 */
730 void* dlmemalign(size_t, size_t);
731 
732 /*
733   valloc(size_t n);
734   Equivalent to memalign(pagesize, n), where pagesize is the page
735   size of the system. If the pagesize is unknown, 4096 is used.
736 */
737 void* dlvalloc(size_t);
738 
739 /*
740   mallopt(int parameter_number, int parameter_value)
741   Sets tunable parameters. The format is to provide a
742   (parameter-number, parameter-value) pair.  mallopt then sets the
743   corresponding parameter to the argument value if it can (i.e., so
744   long as the value is meaningful), and returns 1 if successful else
745   0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
746   normally defined in malloc.h.  None of these are used in this malloc,
747   so setting them has no effect. But this malloc also supports other
748   options in mallopt. See below for details.  Briefly, supported
749   parameters are as follows (listed defaults are for "typical"
750   configurations).
751 
752   Symbol            param #  default    allowed param values
753   M_TRIM_THRESHOLD     -1   2*1024*1024   any   (MAX_SIZE_T disables)
754   M_GRANULARITY        -2     page size   any power of 2 >= page size
755   M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
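
  For example (the values are illustrative):
    mallopt(M_TRIM_THRESHOLD, 1024*1024);  // trim when 1MB is unused on top
    mallopt(M_MMAP_THRESHOLD, 512*1024);   // mmap requests of 512KB or more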
756 */
757 int dlmallopt(int, int);
758 
759 /*
760   malloc_footprint();
761   Returns the number of bytes obtained from the system.  The total
762   number of bytes allocated by malloc, realloc etc., is less than this
763   value. Unlike mallinfo, this function returns only a precomputed
764   result, so can be called frequently to monitor memory consumption.
765   Even if locks are otherwise defined, this function does not use them,
766   so results might not be up to date.
767 */
768 size_t dlmalloc_footprint(void);
769 
770 /*
771   malloc_max_footprint();
772   Returns the maximum number of bytes obtained from the system. This
773   value will be greater than current footprint if deallocated space
774   has been reclaimed by the system. The peak number of bytes allocated
775   by malloc, realloc etc., is less than this value. Unlike mallinfo,
776   this function returns only a precomputed result, so can be called
777   frequently to monitor memory consumption.  Even if locks are
778   otherwise defined, this function does not use them, so results might
779   not be up to date.
780 */
781 size_t dlmalloc_max_footprint(void);
782 
783 #if !NO_MALLINFO
784 /*
785   mallinfo()
786   Returns (by copy) a struct containing various summary statistics:
787 
788   arena:     current total non-mmapped bytes allocated from system
789   ordblks:   the number of free chunks
790   smblks:    always zero.
791   hblks:     current number of mmapped regions
792   hblkhd:    total bytes held in mmapped regions
793   usmblks:   the maximum total allocated space. This will be greater
794                 than current total if trimming has occurred.
795   fsmblks:   always zero
796   uordblks:  current total allocated space (normal or mmapped)
797   fordblks:  total free space
798   keepcost:  the maximum number of bytes that could ideally be released
799                back to system via malloc_trim. ("ideally" means that
800                it ignores page restrictions etc.)
801 
802   Because these fields are ints, but internal bookkeeping may
803   be kept as longs, the reported values may wrap around zero and
804   thus be inaccurate.
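
  A usage sketch (field meanings as listed above):
    struct mallinfo mi = mallinfo();
    fprintf(stderr, "allocated: %lu free: %lu\n",
            (unsigned long)mi.uordblks, (unsigned long)mi.fordblks);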
805 */
806 struct mallinfo dlmallinfo(void);
807 #endif /* NO_MALLINFO */
808 
809 /*
810   independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
811 
812   independent_calloc is similar to calloc, but instead of returning a
813   single cleared space, it returns an array of pointers to n_elements
814   independent elements that can hold contents of size elem_size, each
815   of which starts out cleared, and can be independently freed,
816   realloc'ed etc. The elements are guaranteed to be adjacently
817   allocated (this is not guaranteed to occur with multiple callocs or
818   mallocs), which may also improve cache locality in some
819   applications.
820 
821   The "chunks" argument is optional (i.e., may be null, which is
822   probably the most typical usage). If it is null, the returned array
823   is itself dynamically allocated and should also be freed when it is
824   no longer needed. Otherwise, the chunks array must be of at least
825   n_elements in length. It is filled in with the pointers to the
826   chunks.
827 
828   In either case, independent_calloc returns this pointer array, or
829   null if the allocation failed.  If n_elements is zero and "chunks"
830   is null, it returns a chunk representing an array with zero elements
831   (which should be freed if not wanted).
832 
833   Each element must be individually freed when it is no longer
834   needed. If you'd like to instead be able to free all at once, you
835   should instead use regular calloc and assign pointers into this
836   space to represent elements.  (In this case though, you cannot
837   independently free elements.)
838 
839   independent_calloc simplifies and speeds up implementations of many
840   kinds of pools.  It may also be useful when constructing large data
841   structures that initially have a fixed number of fixed-sized nodes,
842   but the number is not known at compile time, and some of the nodes
843   may later need to be freed. For example:
844 
845   struct Node { int item; struct Node* next; };
846 
847   struct Node* build_list() {
848     struct Node** pool;
849     int n = read_number_of_nodes_needed();
850     if (n <= 0) return 0;
851     pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
852     if (pool == 0) die();
853     // organize into a linked list...
854     struct Node* first = pool[0];
855     for (int i = 0; i < n-1; ++i)
856       pool[i]->next = pool[i+1];
857     free(pool);     // Can now free the array (or not, if it is needed later)
858     return first;
859   }
860 */
861 void** dlindependent_calloc(size_t, size_t, void**);
862 
863 /*
864   independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
865 
866   independent_comalloc allocates, all at once, a set of n_elements
867   chunks with sizes indicated in the "sizes" array.    It returns
868   an array of pointers to these elements, each of which can be
869   independently freed, realloc'ed etc. The elements are guaranteed to
870   be adjacently allocated (this is not guaranteed to occur with
871   multiple callocs or mallocs), which may also improve cache locality
872   in some applications.
873 
874   The "chunks" argument is optional (i.e., may be null). If it is null
875   the returned array is itself dynamically allocated and should also
876   be freed when it is no longer needed. Otherwise, the chunks array
877   must be of at least n_elements in length. It is filled in with the
878   pointers to the chunks.
879 
880   In either case, independent_comalloc returns this pointer array, or
881   null if the allocation failed.  If n_elements is zero and chunks is
882   null, it returns a chunk representing an array with zero elements
883   (which should be freed if not wanted).
884 
885   Each element must be individually freed when it is no longer
886   needed. If you'd like to instead be able to free all at once, you
887   should instead use a single regular malloc, and assign pointers at
888   particular offsets in the aggregate space. (In this case though, you
889   cannot independently free elements.)
890 
891   independent_comalloc differs from independent_calloc in that each
892   element may have a different size, and also that it does not
893   automatically clear elements.
894 
895   independent_comalloc can be used to speed up allocation in cases
896   where several structs or objects must always be allocated at the
897   same time.  For example:
898 
899   struct Head { ... };
900   struct Foot { ... };
901 
902   void send_message(char* msg) {
903     int msglen = strlen(msg);
904     size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
905     void* chunks[3];
906     if (independent_comalloc(3, sizes, chunks) == 0)
907       die();
908     struct Head* head = (struct Head*)(chunks[0]);
909     char*        body = (char*)(chunks[1]);
910     struct Foot* foot = (struct Foot*)(chunks[2]);
911     // ...
912   }
913 
914   In general though, independent_comalloc is worth using only for
915   larger values of n_elements. For small values, you probably won't
916   detect enough difference from series of malloc calls to bother.
917 
918   Overuse of independent_comalloc can increase overall memory usage,
919   since it cannot reuse existing noncontiguous small chunks that
920   might be available for some of the elements.
921 */
922 void** dlindependent_comalloc(size_t, size_t*, void**);
923 
924 
925 /*
926   pvalloc(size_t n);
927   Equivalent to valloc(minimum-page-that-holds(n)), that is,
928   round up n to nearest pagesize.
929  */
930 void*  dlpvalloc(size_t);
931 
932 /*
933   malloc_trim(size_t pad);
934 
935   If possible, gives memory back to the system (via negative arguments
936   to sbrk) if there is unused memory at the `high' end of the malloc
937   pool or in unused MMAP segments. You can call this after freeing
938   large blocks of memory to potentially reduce the system-level memory
939   requirements of a program. However, it cannot guarantee to reduce
940   memory. Under some allocation patterns, some large free blocks of
941   memory will be locked between two used chunks, so they cannot be
942   given back to the system.
943 
944   The `pad' argument to malloc_trim represents the amount of free
945   trailing space to leave untrimmed. If this argument is zero, only
946   the minimum amount of memory to maintain internal data structures
947   will be left. Non-zero arguments can be supplied to maintain enough
948   trailing space to service future expected allocations without having
949   to re-obtain memory from the system.
950 
951   Malloc_trim returns 1 if it actually released any memory, else 0.
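
  For example, after freeing a large working set:
    // ... free many large blocks, then:
    malloc_trim(0);   // keep no extra trailing space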
952 */
953 int  dlmalloc_trim(size_t);
954 
955 /*
956   malloc_usable_size(void* p);
957 
958   Returns the number of bytes you can actually use in
959   an allocated chunk, which may be more than you requested (although
960   often not) due to alignment and minimum size constraints.
961   You can use this many bytes without worrying about
962   overwriting other allocated objects. This is not a particularly great
963   programming practice. malloc_usable_size can be more useful in
964   debugging and assertions, for example:
965 
966   p = malloc(n);
967   assert(malloc_usable_size(p) >= 256);
968 */
969 size_t dlmalloc_usable_size(void*);
970 
971 /*
972   malloc_stats();
973   Prints on stderr the amount of space obtained from the system (both
974   via sbrk and mmap), the maximum amount (which may be more than
975   current if malloc_trim and/or munmap got called), and the current
976   number of bytes allocated via malloc (or realloc, etc) but not yet
977   freed. Note that this is the number of bytes allocated, not the
978   number requested. It will be larger than the number requested
979   because of alignment and bookkeeping overhead. Because it includes
980   alignment wastage as being in use, this figure may be greater than
981   zero even when no user-level chunks are allocated.
982 
983   The reported current and maximum system memory can be inaccurate if
984   a program makes other calls to system memory allocation functions
985   (normally sbrk) outside of malloc.
986 
987   malloc_stats prints only the most commonly interesting statistics.
988   More information can be obtained by calling mallinfo.
989 */
990 void  dlmalloc_stats(void);
991 
992 #endif /* ONLY_MSPACES */
993 
994 #if MSPACES
995 
996 /*
997   mspace is an opaque type representing an independent
998   region of space that supports mspace_malloc, etc.
999 */
1000 typedef void* mspace;
1001 
1002 /*
1003   create_mspace creates and returns a new independent space with the
1004   given initial capacity, or, if 0, the default granularity size.  It
1005   returns null if there is no system memory available to create the
1006   space.  If argument locked is non-zero, the space uses a separate
1007   lock to control access. The capacity of the space will grow
1008   dynamically as needed to service mspace_malloc requests.  You can
1009   control the sizes of incremental increases of this space by
1010   compiling with a different DEFAULT_GRANULARITY or dynamically
1011   setting with mallopt(M_GRANULARITY, value).
1012 */
1013 mspace create_mspace(size_t capacity, int locked);
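
/*
  A lifecycle sketch (capacity 0 selects the default granularity;
  destroy_mspace is declared just below):

    mspace ms = create_mspace(0, 0);   // unlocked space
    void* p = mspace_malloc(ms, 128);
    mspace_free(ms, p);
    destroy_mspace(ms);                // releases all of ms at once
*/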
1014 
1015 /*
1016   destroy_mspace destroys the given space, and attempts to return all
1017   of its memory back to the system, returning the total number of
1018   bytes freed. After destruction, the results of access to all memory
1019   used by the space become undefined.
1020 */
1021 size_t destroy_mspace(mspace msp);
1022 
1023 /*
1024   create_mspace_with_base uses the memory supplied as the initial base
1025   of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1026   space is used for bookkeeping, so the capacity must be at least this
1027   large. (Otherwise 0 is returned.) When this initial space is
1028   exhausted, additional memory will be obtained from the system.
1029   Destroying this space will deallocate all additionally allocated
1030   space (if possible) but not the initial base.
1031 */
1032 mspace create_mspace_with_base(void* base, size_t capacity, int locked);
1033 
1034 /*
1035   mspace_malloc behaves as malloc, but operates within
1036   the given space.
1037 */
1038 void* mspace_malloc(mspace msp, size_t bytes);
1039 
1040 /*
1041   mspace_free behaves as free, but operates within
1042   the given space.
1043 
1044   If compiled with FOOTERS==1, mspace_free is not actually needed.
1045   free may be called instead of mspace_free because freed chunks from
1046   any space are handled by their originating spaces.
1047 */
1048 void mspace_free(mspace msp, void* mem);
1049 
1050 /*
1051   mspace_realloc behaves as realloc, but operates within
1052   the given space.
1053 
1054   If compiled with FOOTERS==1, mspace_realloc is not actually
1055   needed.  realloc may be called instead of mspace_realloc because
1056   realloced chunks from any space are handled by their originating
1057   spaces.
1058 */
1059 void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1060 
1061 /*
1062   mspace_calloc behaves as calloc, but operates within
1063   the given space.
1064 */
1065 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1066 
1067 /*
1068   mspace_memalign behaves as memalign, but operates within
1069   the given space.
1070 */
1071 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1072 
1073 /*
1074   mspace_independent_calloc behaves as independent_calloc, but
1075   operates within the given space.
1076 */
1077 void** mspace_independent_calloc(mspace msp, size_t n_elements,
1078                                  size_t elem_size, void* chunks[]);
1079 
1080 /*
1081   mspace_independent_comalloc behaves as independent_comalloc, but
1082   operates within the given space.
1083 */
1084 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1085                                    size_t sizes[], void* chunks[]);
1086 
1087 /*
1088   mspace_footprint() returns the number of bytes obtained from the
1089   system for this space.
1090 */
1091 size_t mspace_footprint(mspace msp);
1092 
1093 /*
1094   mspace_max_footprint() returns the peak number of bytes obtained from the
1095   system for this space.
1096 */
1097 size_t mspace_max_footprint(mspace msp);
1098 
1099 
1100 #if !NO_MALLINFO
1101 /*
1102   mspace_mallinfo behaves as mallinfo, but reports properties of
1103   the given space.
1104 */
1105 struct mallinfo mspace_mallinfo(mspace msp);
1106 #endif /* NO_MALLINFO */
1107 
1108 /*
1109   mspace_malloc_stats behaves as malloc_stats, but reports
1110   properties of the given space.
1111 */
1112 void mspace_malloc_stats(mspace msp);
1113 
1114 /*
1115   mspace_trim behaves as malloc_trim, but
1116   operates within the given space.
1117 */
1118 int mspace_trim(mspace msp, size_t pad);
1119 
1120 /*
1121   An alias for mallopt.
1122 */
1123 int mspace_mallopt(int, int);
1124 
1125 #endif /* MSPACES */
1126 
1127 #ifdef __cplusplus
1128 };  /* end of extern "C" */
1129 #endif /* __cplusplus */
1130 
1131 /*
1132   ========================================================================
1133   To make a fully customizable malloc.h header file, cut everything
1134   above this line, put into file malloc.h, edit to suit, and #include it
1135   on the next line, as well as in programs that use this malloc.
1136   ========================================================================
1137 */
1138 
1139 /* #include "malloc.h" */
1140 
1141 /*------------------------------ internal #includes ---------------------- */
1142 
1143 #ifdef WIN32
1144 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1145 #endif /* WIN32 */
1146 
1147 #include <stdio.h>       /* for printing in malloc_stats */
1148 
1149 #ifndef LACKS_ERRNO_H
1150 #include <errno.h>       /* for MALLOC_FAILURE_ACTION */
1151 #endif /* LACKS_ERRNO_H */
1152 #if FOOTERS
1153 #include <time.h>        /* for magic initialization */
1154 #endif /* FOOTERS */
1155 #ifndef LACKS_STDLIB_H
1156 #include <stdlib.h>      /* for abort() */
1157 #endif /* LACKS_STDLIB_H */
1158 #ifdef DEBUG
1159 #if ABORT_ON_ASSERT_FAILURE
1160 #define assert(x) if(!(x)) ABORT
1161 #else /* ABORT_ON_ASSERT_FAILURE */
1162 #include <assert.h>
1163 #endif /* ABORT_ON_ASSERT_FAILURE */
1164 #else  /* DEBUG */
1165 #define assert(x)
1166 #endif /* DEBUG */
1167 #ifndef LACKS_STRING_H
1168 #include <string.h>      /* for memset etc */
1169 #endif  /* LACKS_STRING_H */
1170 #if USE_BUILTIN_FFS
1171 #ifndef LACKS_STRINGS_H
1172 #include <strings.h>     /* for ffs */
1173 #endif /* LACKS_STRINGS_H */
1174 #endif /* USE_BUILTIN_FFS */
1175 #if HAVE_MMAP
1176 #ifndef LACKS_SYS_MMAN_H
1177 #include <sys/mman.h>    /* for mmap */
1178 #endif /* LACKS_SYS_MMAN_H */
1179 #ifndef LACKS_FCNTL_H
1180 #include <fcntl.h>
1181 #endif /* LACKS_FCNTL_H */
1182 #endif /* HAVE_MMAP */
1183 #if HAVE_MORECORE
1184 #ifndef LACKS_UNISTD_H
1185 #include <unistd.h>     /* for sbrk */
1186 #else /* LACKS_UNISTD_H */
1187 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1188 extern void*     sbrk(ptrdiff_t);
1189 #endif /* FreeBSD etc */
1190 #endif /* LACKS_UNISTD_H */
1191 #endif /* HAVE_MORECORE */
1192 
1193 #ifndef WIN32
1194 #ifndef malloc_getpagesize
1195 #  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
1196 #    ifndef _SC_PAGE_SIZE
1197 #      define _SC_PAGE_SIZE _SC_PAGESIZE
1198 #    endif
1199 #  endif
1200 #  ifdef _SC_PAGE_SIZE
1201 #    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1202 #  else
1203 #    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1204        extern size_t getpagesize();
1205 #      define malloc_getpagesize getpagesize()
1206 #    else
1207 #      ifdef WIN32 /* use supplied emulation of getpagesize */
1208 #        define malloc_getpagesize getpagesize()
1209 #      else
1210 #        ifndef LACKS_SYS_PARAM_H
1211 #          include <sys/param.h>
1212 #        endif
1213 #        ifdef EXEC_PAGESIZE
1214 #          define malloc_getpagesize EXEC_PAGESIZE
1215 #        else
1216 #          ifdef NBPG
1217 #            ifndef CLSIZE
1218 #              define malloc_getpagesize NBPG
1219 #            else
1220 #              define malloc_getpagesize (NBPG * CLSIZE)
1221 #            endif
1222 #          else
1223 #            ifdef NBPC
1224 #              define malloc_getpagesize NBPC
1225 #            else
1226 #              ifdef PAGESIZE
1227 #                define malloc_getpagesize PAGESIZE
1228 #              else /* just guess */
1229 #                define malloc_getpagesize ((size_t)4096U)
1230 #              endif
1231 #            endif
1232 #          endif
1233 #        endif
1234 #      endif
1235 #    endif
1236 #  endif
1237 #endif
1238 #endif
1239 
1240 /* ------------------- size_t and alignment properties -------------------- */
1241 
1242 /* The byte and bit size of a size_t */
1243 #define SIZE_T_SIZE         (sizeof(size_t))
1244 #define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
1245 
1246 /* Some constants coerced to size_t */
1247 /* Annoying but necessary to avoid errors on some platforms */
1248 #define SIZE_T_ZERO         ((size_t)0)
1249 #define SIZE_T_ONE          ((size_t)1)
1250 #define SIZE_T_TWO          ((size_t)2)
1251 #define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
1252 #define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
1253 #define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1254 #define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
1255 
1256 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1257 #define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
1258 
1259 /* True if address a has acceptable alignment */
1260 #define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1261 
1262 /* the number of bytes to offset an address to align it */
1263 #define align_offset(A)\
1264  ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1265   ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
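
/*
  Illustrative sketch (not used by the allocator itself): with the
  default MALLOC_ALIGNMENT of 8, CHUNK_ALIGN_MASK is 7, and the two
  macros above round addresses up to the next 8-byte boundary.
*/
#if 0
static void example_align(void) {
  assert(align_offset((void*)0x1000) == 0);  /* already aligned */
  assert(align_offset((void*)0x1003) == 5);  /* 0x1003 + 5 == 0x1008 */
  assert(is_aligned((char*)0x1003 + align_offset((void*)0x1003)));
}
#endif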
1266 
1267 /* -------------------------- MMAP preliminaries ------------------------- */
1268 
1269 /*
1270    If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1271    checks to fail so compiler optimizer can delete code rather than
1272    using so many "#if"s.
1273 */
1274 
1275 
1276 /* MORECORE and MMAP must return MFAIL on failure */
1277 #define MFAIL                ((void*)(MAX_SIZE_T))
1278 #define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
1279 
1280 #if !HAVE_MMAP
1281 #define IS_MMAPPED_BIT       (SIZE_T_ZERO)
1282 #define USE_MMAP_BIT         (SIZE_T_ZERO)
1283 #define CALL_MMAP(s)         MFAIL
1284 #define CALL_MUNMAP(a, s)    (-1)
1285 #define DIRECT_MMAP(s)       MFAIL
1286 
1287 #else /* HAVE_MMAP */
1288 #define IS_MMAPPED_BIT       (SIZE_T_ONE)
1289 #define USE_MMAP_BIT         (SIZE_T_ONE)
1290 
1291 #ifndef WIN32
1292 #define CALL_MUNMAP(a, s)    munmap((a), (s))
1293 #define MMAP_PROT            (PROT_READ|PROT_WRITE)
1294 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1295 #define MAP_ANONYMOUS        MAP_ANON
1296 #endif /* MAP_ANON */
1297 #ifdef MAP_ANONYMOUS
1298 #define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
1299 #define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1300 #else /* MAP_ANONYMOUS */
1301 /*
1302    Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1303    is unlikely to be needed, but is supplied just in case.
1304 */
1305 #define MMAP_FLAGS           (MAP_PRIVATE)
1306 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1307 #define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
1308            (dev_zero_fd = open("/dev/zero", O_RDWR), \
1309             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1310             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1311 #endif /* MAP_ANONYMOUS */
1312 
1313 #define DIRECT_MMAP(s)       CALL_MMAP(s)
1314 #else /* WIN32 */
1315 
1316 /* Win32 MMAP via VirtualAlloc */
1317 static void* win32mmap(size_t size) {
1318   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1319   return (ptr != 0)? ptr: MFAIL;
1320 }
1321 
1322 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1323 static void* win32direct_mmap(size_t size) {
1324   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1325                            PAGE_READWRITE);
1326   return (ptr != 0)? ptr: MFAIL;
1327 }
1328 
1329 /* This function supports releasing coalesced segments */
1330 static int win32munmap(void* ptr, size_t size) {
1331   MEMORY_BASIC_INFORMATION minfo;
1332   char* cptr = ptr;
1333   while (size) {
1334     if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1335       return -1;
1336     if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1337         minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1338       return -1;
1339     if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1340       return -1;
1341     cptr += minfo.RegionSize;
1342     size -= minfo.RegionSize;
1343   }
1344   return 0;
1345 }
1346 
1347 #define CALL_MMAP(s)         win32mmap(s)
1348 #define CALL_MUNMAP(a, s)    win32munmap((a), (s))
1349 #define DIRECT_MMAP(s)       win32direct_mmap(s)
1350 #endif /* WIN32 */
1351 #endif /* HAVE_MMAP */
1352 
1353 #if HAVE_MMAP && HAVE_MREMAP
1354 #define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1355 #else  /* HAVE_MMAP && HAVE_MREMAP */
1356 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1357 #endif /* HAVE_MMAP && HAVE_MREMAP */
1358 
1359 #if HAVE_MORECORE
1360 #define CALL_MORECORE(S)     MORECORE(S)
1361 #else  /* HAVE_MORECORE */
1362 #define CALL_MORECORE(S)     MFAIL
1363 #endif /* HAVE_MORECORE */
1364 
1365 /* mstate bit set if contiguous morecore disabled or failed */
1366 #define USE_NONCONTIGUOUS_BIT (4U)
1367 
1368 /* segment bit set in create_mspace_with_base */
1369 #define EXTERN_BIT            (8U)
1370 
1371 
1372 /* --------------------------- Lock preliminaries ------------------------ */
1373 
1374 #if USE_LOCKS
1375 
1376 /*
1377   When locks are defined, there are up to two global locks:
1378 
1379   * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
1380     MORECORE.  In many cases sys_alloc requires two calls that should
1381     not be interleaved with calls by other threads.  This does not
1382     protect against direct calls to MORECORE by other threads not
1383     using this lock, so there is still code to cope as best we can
1384     with any interference.
1385 
1386   * magic_init_mutex ensures that mparams.magic and other
1387     unique mparams values are initialized only once.
1388 */
1389 
1390 #ifndef WIN32
1391 /* By default use posix locks */
1392 #include <pthread.h>
1393 #define MLOCK_T pthread_mutex_t
1394 #define INITIAL_LOCK(l)      pthread_mutex_init(l, NULL)
1395 #define ACQUIRE_LOCK(l)      pthread_mutex_lock(l)
1396 #define RELEASE_LOCK(l)      pthread_mutex_unlock(l)
1397 
1398 #if HAVE_MORECORE
1399 static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
1400 #endif /* HAVE_MORECORE */
1401 
1402 static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
1403 
1404 #else /* WIN32 */
1405 /*
1406    Because lock-protected regions have bounded times, and there
1407    are no recursive lock calls, we can use simple spinlocks.
1408 */
1409 
1410 #define MLOCK_T long
1411 static int win32_acquire_lock (MLOCK_T *sl) {
1412   for (;;) {
1413 #ifdef InterlockedCompareExchangePointer
1414     if (!InterlockedCompareExchange(sl, 1, 0))
1415       return 0;
1416 #else  /* Use older void* version */
1417     if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
1418       return 0;
1419 #endif /* InterlockedCompareExchangePointer */
1420     Sleep (0);
1421   }
1422 }
1423 
1424 static void win32_release_lock (MLOCK_T *sl) {
1425   InterlockedExchange (sl, 0);
1426 }
1427 
1428 #define INITIAL_LOCK(l)      *(l)=0
1429 #define ACQUIRE_LOCK(l)      win32_acquire_lock(l)
1430 #define RELEASE_LOCK(l)      win32_release_lock(l)
1431 #if HAVE_MORECORE
1432 static MLOCK_T morecore_mutex;
1433 #endif /* HAVE_MORECORE */
1434 static MLOCK_T magic_init_mutex;
1435 #endif /* WIN32 */
1436 
1437 #define USE_LOCK_BIT               (2U)
1438 #else  /* USE_LOCKS */
1439 #define USE_LOCK_BIT               (0U)
1440 #define INITIAL_LOCK(l)
1441 #endif /* USE_LOCKS */
1442 
1443 #if USE_LOCKS && HAVE_MORECORE
1444 #define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
1445 #define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
1446 #else /* USE_LOCKS && HAVE_MORECORE */
1447 #define ACQUIRE_MORECORE_LOCK()
1448 #define RELEASE_MORECORE_LOCK()
1449 #endif /* USE_LOCKS && HAVE_MORECORE */
1450 
1451 #if USE_LOCKS
1452 #define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
1453 #define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
1454 #else  /* USE_LOCKS */
1455 #define ACQUIRE_MAGIC_INIT_LOCK()
1456 #define RELEASE_MAGIC_INIT_LOCK()
1457 #endif /* USE_LOCKS */
1458 
1459 
1460 /* -----------------------  Chunk representations ------------------------ */
1461 
1462 /*
1463   (The following includes lightly edited explanations by Colin Plumb.)
1464 
1465   The malloc_chunk declaration below is misleading (but accurate and
1466   necessary).  It declares a "view" into memory allowing access to
1467   necessary fields at known offsets from a given base.
1468 
1469   Chunks of memory are maintained using a `boundary tag' method as
1470   originally described by Knuth.  (See the paper by Paul Wilson
1471   ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1472   techniques.)  Sizes of free chunks are stored both in the front of
1473   each chunk and at the end.  This makes consolidating fragmented
1474   chunks into bigger chunks fast.  The head fields also hold bits
1475   representing whether chunks are free or in use.
1476 
1477   Here are some pictures to make it clearer.  They are "exploded" to
1478   show that the state of a chunk can be thought of as extending from
1479   the high 31 bits of the head field of its header through the
1480   prev_foot and PINUSE_BIT bit of the following chunk header.
1481 
1482   A chunk that's in use looks like:
1483 
1484    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1485            | Size of previous chunk (if P = 0)                             |
1486            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1487          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1488          | Size of this chunk                                         1| +-+
1489    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1490          |                                                               |
1491          +-                                                             -+
1492          |                                                               |
1493          +-                                                             -+
1494          |                                                               :
1495          +-      size - sizeof(size_t) available payload bytes          -+
1496          :                                                               |
1497  chunk-> +-                                                             -+
1498          |                                                               |
1499          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1500        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
1501        | Size of next chunk (may or may not be in use)               | +-+
1502  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1503 
1504     And if it's free, it looks like this:
1505 
1506    chunk-> +-                                                             -+
1507            | User payload (must be in use, or we would have merged!)       |
1508            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1509          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1510          | Size of this chunk                                         0| +-+
1511    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1512          | Next pointer                                                  |
1513          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1514          | Prev pointer                                                  |
1515          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1516          |                                                               :
1517          +-      size - sizeof(struct chunk) unused bytes               -+
1518          :                                                               |
1519  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1520          | Size of this chunk                                            |
1521          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1522        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
1523        | Size of next chunk (must be in use, or we would have merged)| +-+
1524  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1525        |                                                               :
1526        +- User payload                                                -+
1527        :                                                               |
1528        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1529                                                                      |0|
1530                                                                      +-+
1531   Note that since we always merge adjacent free chunks, the chunks
1532   adjacent to a free chunk must be in use.
1533 
1534   Given a pointer to a chunk (which can be derived trivially from the
1535   payload pointer) we can, in O(1) time, find out whether the adjacent
1536   chunks are free, and if so, unlink them from the lists that they
1537   are on and merge them with the current chunk.
1538 
1539   Chunks always begin on even word boundaries, so the mem portion
1540   (which is returned to the user) is also on an even word boundary, and
1541   thus at least double-word aligned.
1542 
1543   The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
1544   chunk size (which is always a multiple of two words), is an in-use
1545   bit for the *previous* chunk.  If that bit is *clear*, then the
1546   word before the current chunk size contains the previous chunk
1547   size, and can be used to find the front of the previous chunk.
1548   The very first chunk allocated always has this bit set, preventing
1549   access to non-existent (or non-owned) memory. If pinuse is set for
1550   any given chunk, then you CANNOT determine the size of the
1551   previous chunk, and might even get a memory addressing fault when
1552   trying to do so.
1553 
1554   The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
1555   the chunk size redundantly records whether the current chunk is
1556   inuse. This redundancy enables usage checks within free and realloc,
1557   and reduces indirection when freeing and consolidating chunks.
1558 
1559   Each freshly allocated chunk must have both cinuse and pinuse set.
1560   That is, each allocated chunk borders either a previously allocated
1561   and still in-use chunk, or the base of its memory arena. This is
1562   ensured by making all allocations from the `lowest' part of any
1563   found chunk.  Further, no free chunk physically borders another one,
1564   so each free chunk is known to be preceded and followed by either
1565   inuse chunks or the ends of memory.
1566 
1567   Note that the `foot' of the current chunk is actually represented
1568   as the prev_foot of the NEXT chunk. This makes it easier to
1569   deal with alignments etc but can be very confusing when trying
1570   to extend or adapt this code.
1571 
1572   The exceptions to all this are
1573 
1574      1. The special chunk `top' is the top-most available chunk (i.e.,
1575         the one bordering the end of available memory). It is treated
1576         specially.  Top is never included in any bin, is used only if
1577         no other chunk is available, and is released back to the
1578         system if it is very large (see M_TRIM_THRESHOLD).  In effect,
1579         the top chunk is treated as larger (and thus less well
1580         fitting) than any other available chunk.  The top chunk
1581         doesn't update its trailing size field since there is no next
1582         contiguous chunk that would have to index off it. However,
1583         space is still allocated for it (TOP_FOOT_SIZE) to enable
1584         separation or merging when space is extended.
1585 
1586      2. Chunks allocated via mmap, which have the lowest-order bit
1587         (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
1588         PINUSE_BIT in their head fields.  Because they are allocated
1589         one-by-one, each must carry its own prev_foot field, which is
1590         also used to hold the offset this chunk has within its mmapped
1591         region, which is needed to preserve alignment. Each mmapped
1592         chunk is trailed by the first two fields of a fake next-chunk
1593         for sake of usage checks.
1594 
1595 */
1596 
1597 struct malloc_chunk {
1598   size_t               prev_foot;  /* Size of previous chunk (if free).  */
1599   size_t               head;       /* Size and inuse bits. */
1600   struct malloc_chunk* fd;         /* double links -- used only if free. */
1601   struct malloc_chunk* bk;
1602 };
1603 
1604 typedef struct malloc_chunk  mchunk;
1605 typedef struct malloc_chunk* mchunkptr;
1606 typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
1607 typedef unsigned int bindex_t;         /* Described below */
1608 typedef unsigned int binmap_t;         /* Described below */
1609 typedef unsigned int flag_t;           /* The type of various bit flag sets */
1610 
1611 /* ------------------- Chunks sizes and alignments ----------------------- */
1612 
1613 #define MCHUNK_SIZE         (sizeof(mchunk))
1614 
1615 #if FOOTERS
1616 #define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
1617 #else /* FOOTERS */
1618 #define CHUNK_OVERHEAD      (SIZE_T_SIZE)
1619 #endif /* FOOTERS */
1620 
1621 /* MMapped chunks need a second word of overhead ... */
1622 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1623 /* ... and additional padding for fake next-chunk at foot */
1624 #define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
1625 
1626 /* The smallest size we can malloc is an aligned minimal chunk */
1627 #define MIN_CHUNK_SIZE\
1628   ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1629 
1630 /* conversion from malloc headers to user pointers, and back */
1631 #define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
1632 #define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
1633 /* chunk associated with aligned address A */
1634 #define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
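
/*
  Sketch of the round trip (illustrative only): the pointer handed to
  the user sits two size_t words past the chunk base, so the two
  conversions above are exact inverses.
*/
#if 0
static void example_roundtrip(mchunkptr p) {
  void* mem = chunk2mem(p);       /* what malloc returns */
  assert(mem2chunk(mem) == p);    /* what free recovers */
}
#endif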
1635 
1636 /* Bounds on request (not chunk) sizes. */
1637 #define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
1638 #define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1639 
1640 /* pad request bytes into a usable size */
1641 #define pad_request(req) \
1642    (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1643 
1644 /* pad request, checking for minimum (but not maximum) */
1645 #define request2size(req) \
1646   (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
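
/*
  Worked example (assuming 4-byte size_t and no FOOTERS, so
  CHUNK_OVERHEAD is 4 and MIN_CHUNK_SIZE is 16): tiny requests are
  bumped up to the minimum chunk, others are padded to the next
  8-byte multiple.
*/
#if 0
static void example_request2size(void) {
  assert(request2size(0)  == 16);   /* below MIN_REQUEST (11) */
  assert(request2size(13) == 24);   /* (13 + 4 + 7) & ~7 */
  assert(request2size(20) == 24);   /* same padded size */
}
#endif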
1647 
1648 
1649 /* ------------------ Operations on head and foot fields ----------------- */
1650 
1651 /*
1652   The head field of a chunk is or'ed with PINUSE_BIT when previous
1653   adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
1654   use. If the chunk was obtained with mmap, the prev_foot field has
1655   IS_MMAPPED_BIT set, otherwise holding the offset of the base of the
1656   mmapped region to the base of the chunk.
1657 */
1658 
1659 #define PINUSE_BIT          (SIZE_T_ONE)
1660 #define CINUSE_BIT          (SIZE_T_TWO)
1661 #define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
1662 
1663 /* Head value for fenceposts */
1664 #define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
1665 
1666 /* extraction of fields from head words */
1667 #define cinuse(p)           ((p)->head & CINUSE_BIT)
1668 #define pinuse(p)           ((p)->head & PINUSE_BIT)
1669 #define chunksize(p)        ((p)->head & ~(INUSE_BITS))
1670 
1671 #define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
1672 #define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
1673 
1674 /* Treat space at ptr +/- offset as a chunk */
1675 #define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
1676 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1677 
1678 /* Ptr to next or previous physical malloc_chunk. */
1679 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1680 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1681 
1682 /* extract next chunk's pinuse bit */
1683 #define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
1684 
1685 /* Get/set size at footer */
1686 #define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1687 #define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1688 
1689 /* Set size, pinuse bit, and foot */
1690 #define set_size_and_pinuse_of_free_chunk(p, s)\
1691   ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1692 
1693 /* Set size, pinuse bit, foot, and clear next pinuse */
1694 #define set_free_with_pinuse(p, s, n)\
1695   (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1696 
1697 #define is_mmapped(p)\
1698   (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1699 
1700 /* Get the internal overhead associated with chunk p */
1701 #define overhead_for(p)\
1702  (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
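
/*
  Sketch in the spirit of malloc_usable_size (assumes mem was returned
  by this malloc and is nonzero): the payload actually available to
  the user is the chunk size minus the overhead computed above.
*/
#if 0
static size_t example_usable_size(void* mem) {
  mchunkptr p = mem2chunk(mem);
  return cinuse(p) ? chunksize(p) - overhead_for(p) : 0;
}
#endif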
1703 
1704 /* Return true if malloced space is not necessarily cleared */
1705 #if MMAP_CLEARS
1706 #define calloc_must_clear(p) (!is_mmapped(p))
1707 #else /* MMAP_CLEARS */
1708 #define calloc_must_clear(p) (1)
1709 #endif /* MMAP_CLEARS */
1710 
1711 /* ---------------------- Overlaid data structures ----------------------- */
1712 
1713 /*
1714   When chunks are not in use, they are treated as nodes of either
1715   lists or trees.
1716 
1717   "Small"  chunks are stored in circular doubly-linked lists, and look
1718   like this:
1719 
1720     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1721             |             Size of previous chunk                            |
1722             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1723     `head:' |             Size of chunk, in bytes                         |P|
1724       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1725             |             Forward pointer to next chunk in list             |
1726             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1727             |             Back pointer to previous chunk in list            |
1728             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1729             |             Unused space (may be 0 bytes long)                .
1730             .                                                               .
1731             .                                                               |
1732 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1733     `foot:' |             Size of chunk, in bytes                           |
1734             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1735 
1736   Larger chunks are kept in a form of bitwise digital trees (aka
1737   tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
1738   free chunks greater than 256 bytes, their size doesn't impose any
1739   constraints on user chunk sizes.  Each node looks like:
1740 
1741     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1742             |             Size of previous chunk                            |
1743             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1744     `head:' |             Size of chunk, in bytes                         |P|
1745       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1746             |             Forward pointer to next chunk of same size        |
1747             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1748             |             Back pointer to previous chunk of same size       |
1749             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1750             |             Pointer to left child (child[0])                  |
1751             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1752             |             Pointer to right child (child[1])                 |
1753             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1754             |             Pointer to parent                                 |
1755             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1756             |             bin index of this chunk                           |
1757             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1758             |             Unused space                                      .
1759             .                                                               |
1760 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1761     `foot:' |             Size of chunk, in bytes                           |
1762             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1763 
1764   Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
1765   of the same size are arranged in a circularly-linked list, with only
1766   the oldest chunk (the next to be used, in our FIFO ordering)
1767   actually in the tree.  (Tree members are distinguished by a non-null
1768   parent pointer.)  If a chunk with the same size as an existing node
1769   is inserted, it is linked off the existing node using pointers that
1770   work in the same way as fd/bk pointers of small chunks.
1771 
1772   Each tree contains a power of 2 sized range of chunk sizes (the
1773   smallest is 0x100 <= x < 0x180), which is divided in half at each
1774   tree level, with the chunks in the smaller half of the range (0x100
1775   <= x < 0x140 for the top node) in the left subtree and the larger
1776   half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
1777   done by inspecting individual bits.
1778 
1779   Using these rules, each node's left subtree contains all smaller
1780   sizes than its right subtree.  However, the node at the root of each
1781   subtree has no particular ordering relationship to either.  (The
1782   dividing line between the subtree sizes is based on trie relation.)
1783   If we remove the last chunk of a given size from the interior of the
1784   tree, we need to replace it with a leaf node.  The tree ordering
1785   rules permit a node to be replaced by any leaf below it.
1786 
1787   The smallest chunk in a tree (a common operation in a best-fit
1788   allocator) can be found by walking a path to the leftmost leaf in
1789   the tree.  Unlike a usual binary tree, where we follow left child
1790   pointers until we reach a null, here we follow the right child
1791   pointer any time the left one is null, until we reach a leaf with
1792   both child pointers null. The smallest chunk in the tree will be
1793   somewhere along that path.
1794 
1795   The worst case number of steps to add, find, or remove a node is
1796   bounded by the number of bits differentiating chunks within
1797   bins. Under current bin calculations, this ranges from 6 up to 21
1798   (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1799   is of course much better.
1800 */
1801 
1802 struct malloc_tree_chunk {
1803   /* The first four fields must be compatible with malloc_chunk */
1804   size_t                    prev_foot;
1805   size_t                    head;
1806   struct malloc_tree_chunk* fd;
1807   struct malloc_tree_chunk* bk;
1808 
1809   struct malloc_tree_chunk* child[2];
1810   struct malloc_tree_chunk* parent;
1811   bindex_t                  index;
1812 };
1813 
1814 typedef struct malloc_tree_chunk  tchunk;
1815 typedef struct malloc_tree_chunk* tchunkptr;
1816 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1817 
1818 /* A little helper macro for trees */
1819 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
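
/*
  Sketch of the leftmost-path walk described above (illustrative
  only): follow left children, falling back to the right child when
  the left is null, tracking the smallest chunk seen along the way.
*/
#if 0
static tchunkptr example_smallest_in_tree(tchunkptr t) {
  tchunkptr least = t;
  while ((t = leftmost_child(t)) != 0) {
    if (chunksize(t) < chunksize(least))
      least = t;
  }
  return least;
}
#endif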
1820 
1821 /* ----------------------------- Segments -------------------------------- */
1822 
1823 /*
1824   Each malloc space may include non-contiguous segments, held in a
1825   list headed by an embedded malloc_segment record representing the
1826   top-most space. Segments also include flags holding properties of
1827   the space. Large chunks that are directly allocated by mmap are not
1828   included in this list. They are instead independently created and
1829   destroyed without otherwise keeping track of them.
1830 
1831   Segment management mainly comes into play for spaces allocated by
1832   MMAP.  Any call to MMAP might or might not return memory that is
1833   adjacent to an existing segment.  MORECORE normally contiguously
1834   extends the current space, so this space is almost always adjacent,
1835   which is simpler and faster to deal with. (This is why MORECORE is
1836   used preferentially to MMAP when both are available -- see
1837   sys_alloc.)  When allocating using MMAP, we don't use any of the
1838   hinting mechanisms (inconsistently) supported in various
1839   implementations of unix mmap, or distinguish reserving from
1840   committing memory. Instead, we just ask for space, and exploit
1841   contiguity when we get it.  It is probably possible to do
1842   better than this on some systems, but no general scheme seems
1843   to be significantly better.
1844 
1845   Management entails a simpler variant of the consolidation scheme
1846   used for chunks to reduce fragmentation -- new adjacent memory is
1847   normally prepended or appended to an existing segment. However,
1848   there are limitations compared to chunk consolidation that mostly
1849   reflect the fact that segment processing is relatively infrequent
1850   (occurring only when getting memory from system) and that we
1851   don't expect to have huge numbers of segments:
1852 
1853   * Segments are not indexed, so traversal requires linear scans.  (It
1854     would be possible to index these, but is not worth the extra
1855     overhead and complexity for most programs on most platforms.)
1856   * New segments are only appended to old ones when holding top-most
1857     memory; if they cannot be prepended to others, they are held in
1858     different segments.
1859 
1860   Except for the top-most segment of an mstate, each segment record
1861   is kept at the tail of its segment. Segments are added by pushing
1862   segment records onto the list headed by &mstate.seg for the
1863   containing mstate.
1864 
1865   Segment flags control allocation/merge/deallocation policies:
1866   * If EXTERN_BIT set, then we did not allocate this segment,
1867     and so should not try to deallocate or merge with others.
1868     (This currently holds only for the initial segment passed
1869     into create_mspace_with_base.)
1870   * If IS_MMAPPED_BIT set, the segment may be merged with
1871     other surrounding mmapped segments and trimmed/de-allocated
1872     using munmap.
1873   * If neither bit is set, then the segment was obtained using
1874     MORECORE so can be merged with surrounding MORECORE'd segments
1875     and deallocated/trimmed using MORECORE with negative arguments.
1876 */
1877 
1878 struct malloc_segment {
1879   char*        base;             /* base address */
1880   size_t       size;             /* allocated size */
1881   struct malloc_segment* next;   /* ptr to next segment */
1882 #if FFI_MMAP_EXEC_WRIT
1883   /* The mmap magic is supposed to store the address of the executable
1884      segment at the very end of the requested block.  */
1885 
1886 # define mmap_exec_offset(b,s) (*(ptrdiff_t*)((b)+(s)-sizeof(ptrdiff_t)))
1887 
1888   /* We can only merge segments if their corresponding executable
1889      segments are at identical offsets.  */
1890 # define check_segment_merge(S,b,s) \
1891   (mmap_exec_offset((b),(s)) == (S)->exec_offset)
1892 
1893 # define add_segment_exec_offset(p,S) ((char*)(p) + (S)->exec_offset)
1894 # define sub_segment_exec_offset(p,S) ((char*)(p) - (S)->exec_offset)
1895 
1896   /* The removal of sflags only works with HAVE_MORECORE == 0.  */
1897 
1898 # define get_segment_flags(S)   (IS_MMAPPED_BIT)
1899 # define set_segment_flags(S,v) \
1900   (((v) != IS_MMAPPED_BIT) ? (ABORT, (v)) :				\
1901    (((S)->exec_offset =							\
1902      mmap_exec_offset((S)->base, (S)->size)),				\
1903     (mmap_exec_offset((S)->base + (S)->exec_offset, (S)->size) !=	\
1904      (S)->exec_offset) ? (ABORT, (v)) :					\
1905    (mmap_exec_offset((S)->base, (S)->size) = 0), (v)))
1906 
1907   /* We use an offset here, instead of a pointer, because then, when
1908      base changes, we don't have to modify this.  On architectures
1909      with segmented addresses, this might not work.  */
1910   ptrdiff_t    exec_offset;
1911 #else
1912 
1913 # define get_segment_flags(S)   ((S)->sflags)
1914 # define set_segment_flags(S,v) ((S)->sflags = (v))
1915 # define check_segment_merge(S,b,s) (1)
1916 
1917   flag_t       sflags;           /* mmap and extern flag */
1918 #endif
1919 };
1920 
1921 #define is_mmapped_segment(S)  (get_segment_flags(S) & IS_MMAPPED_BIT)
1922 #define is_extern_segment(S)   (get_segment_flags(S) & EXTERN_BIT)
1923 
1924 typedef struct malloc_segment  msegment;
1925 typedef struct malloc_segment* msegmentptr;
1926 
1927 /* ---------------------------- malloc_state ----------------------------- */
1928 
1929 /*
1930    A malloc_state holds all of the bookkeeping for a space.
1931    The main fields are:
1932 
1933   Top
1934     The topmost chunk of the currently active segment. Its size is
1935     cached in topsize.  The actual size of topmost space is
1936     topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1937     fenceposts and segment records if necessary when getting more
1938     space from the system.  The size at which to autotrim top is
1939     cached from mparams in trim_check, except that it is disabled if
1940     an autotrim fails.
1941 
1942   Designated victim (dv)
1943     This is the preferred chunk for servicing small requests that
1944     don't have exact fits.  It is normally the chunk split off most
1945     recently to service another small request.  Its size is cached in
1946     dvsize. The link fields of this chunk are not maintained since it
1947     is not kept in a bin.
1948 
1949   SmallBins
1950     An array of bin headers for free chunks.  These bins hold chunks
1951     with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
1952     chunks of all the same size, spaced 8 bytes apart.  To simplify
1953     use in double-linked lists, each bin header acts as a malloc_chunk
1954     pointing to the real first node, if it exists (else pointing to
1955     itself).  This avoids special-casing for headers.  But to avoid
1956     waste, we allocate only the fd/bk pointers of bins, and then use
1957     repositioning tricks to treat these as the fields of a chunk.
1958 
1959   TreeBins
1960     Treebins are pointers to the roots of trees holding a range of
1961     sizes. There are 2 equally spaced treebins for each power of two
1962     from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
1963     larger.
1964 
1965   Bin maps
1966     There is one bit map for small bins ("smallmap") and one for
1967     treebins ("treemap").  Each bin sets its bit when non-empty, and
1968     clears the bit when empty.  Bit operations are then used to avoid
1969     bin-by-bin searching -- nearly all "search" is done without ever
1970     looking at bins that won't be selected.  The bit maps
1971     conservatively use 32 bits per map word, even on 64-bit systems.
1972     For a good description of some of the bit-based techniques used
1973     here, see Henry S. Warren Jr's book "Hacker's Delight" (and
1974     supplement at http://hackersdelight.org/). Many of these are
1975     intended to reduce the branchiness of paths through malloc etc, as
1976     well as to reduce the number of memory locations read or written.
1977 
1978   Segments
1979     A list of segments headed by an embedded malloc_segment record
1980     representing the initial space.
1981 
1982   Address check support
1983     The least_addr field is the least address ever obtained from
1984     MORECORE or MMAP. Attempted frees and reallocs of any address less
1985     than this are trapped (unless INSECURE is defined).
1986 
1987   Magic tag
1988     A cross-check field that should always hold same value as mparams.magic.
1989 
1990   Flags
1991     Bits recording whether to use MMAP, locks, or contiguous MORECORE
1992 
1993   Statistics
1994     Each space keeps track of current and maximum system memory
1995     obtained via MORECORE or MMAP.
1996 
1997   Locking
1998     If USE_LOCKS is defined, the "mutex" lock is acquired and released
1999     around every public call using this mspace.
2000 */
2001 
2002 /* Bin types, widths and sizes */
2003 #define NSMALLBINS        (32U)
2004 #define NTREEBINS         (32U)
2005 #define SMALLBIN_SHIFT    (3U)
2006 #define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
2007 #define TREEBIN_SHIFT     (8U)
2008 #define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
2009 #define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
2010 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2011 
2012 struct malloc_state {
2013   binmap_t   smallmap;
2014   binmap_t   treemap;
2015   size_t     dvsize;
2016   size_t     topsize;
2017   char*      least_addr;
2018   mchunkptr  dv;
2019   mchunkptr  top;
2020   size_t     trim_check;
2021   size_t     magic;
2022   mchunkptr  smallbins[(NSMALLBINS+1)*2];
2023   tbinptr    treebins[NTREEBINS];
2024   size_t     footprint;
2025   size_t     max_footprint;
2026   flag_t     mflags;
2027 #if USE_LOCKS
2028   MLOCK_T    mutex;     /* locate lock among fields that rarely change */
2029 #endif /* USE_LOCKS */
2030   msegment   seg;
2031 };
2032 
2033 typedef struct malloc_state*    mstate;
2034 
2035 /* ------------- Global malloc_state and malloc_params ------------------- */
2036 
2037 /*
2038   malloc_params holds global properties, including those that can be
2039   dynamically set using mallopt. There is a single instance, mparams,
2040   initialized in init_mparams.
2041 */
2042 
2043 struct malloc_params {
2044   size_t magic;
2045   size_t page_size;
2046   size_t granularity;
2047   size_t mmap_threshold;
2048   size_t trim_threshold;
2049   flag_t default_mflags;
2050 };
2051 
2052 static struct malloc_params mparams;
2053 
2054 /* The global malloc_state used for all non-"mspace" calls */
2055 static struct malloc_state _gm_;
2056 #define gm                 (&_gm_)
2057 #define is_global(M)       ((M) == &_gm_)
2058 #define is_initialized(M)  ((M)->top != 0)
2059 
2060 /* -------------------------- system alloc setup ------------------------- */
2061 
2062 /* Operations on mflags */
2063 
2064 #define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
2065 #define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
2066 #define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
2067 
2068 #define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
2069 #define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
2070 #define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
2071 
2072 #define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
2073 #define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
2074 
2075 #define set_lock(M,L)\
2076  ((M)->mflags = (L)?\
2077   ((M)->mflags | USE_LOCK_BIT) :\
2078   ((M)->mflags & ~USE_LOCK_BIT))
2079 
2080 /* page-align a size */
2081 #define page_align(S)\
2082  (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
2083 
2084 /* granularity-align a size */
2085 #define granularity_align(S)\
2086   (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
2087 
2088 #define is_page_aligned(S)\
2089    (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2090 #define is_granularity_aligned(S)\
2091    (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2092 
2093 /*  True if segment S holds address A */
2094 #define segment_holds(S, A)\
2095   ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2096 
2097 /* Return segment holding given address */
2098 static msegmentptr segment_holding(mstate m, char* addr) {
2099   msegmentptr sp = &m->seg;
2100   for (;;) {
2101     if (addr >= sp->base && addr < sp->base + sp->size)
2102       return sp;
2103     if ((sp = sp->next) == 0)
2104       return 0;
2105   }
2106 }
2107 
2108 /* Return true if segment contains a segment link */
2109 static int has_segment_link(mstate m, msegmentptr ss) {
2110   msegmentptr sp = &m->seg;
2111   for (;;) {
2112     if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2113       return 1;
2114     if ((sp = sp->next) == 0)
2115       return 0;
2116   }
2117 }
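
/*
  Usage sketch (hypothetical helper): an address belongs to this
  mstate exactly when some segment in its list holds it.
*/
#if 0
static int example_owns_address(mstate m, void* mem) {
  return segment_holding(m, (char*)mem) != 0;
}
#endif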
2118 
2119 #ifndef MORECORE_CANNOT_TRIM
2120 #define should_trim(M,s)  ((s) > (M)->trim_check)
2121 #else  /* MORECORE_CANNOT_TRIM */
2122 #define should_trim(M,s)  (0)
2123 #endif /* MORECORE_CANNOT_TRIM */
2124 
2125 /*
2126   TOP_FOOT_SIZE is padding at the end of a segment, including space
2127   that may be needed to place segment records and fenceposts when new
2128   noncontiguous segments are added.
2129 */
2130 #define TOP_FOOT_SIZE\
2131   (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2132 
2133 
2134 /* -------------------------------  Hooks -------------------------------- */
2135 
2136 /*
2137   PREACTION should be defined to return 0 on success, and nonzero on
2138   failure. If you are not using locking, you can redefine these to do
2139   anything you like.
2140 */
2141 
2142 #if USE_LOCKS
2143 
2144 /* Ensure locks are initialized */
2145 #define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
2146 
2147 #define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2148 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2149 #else /* USE_LOCKS */
2150 
2151 #ifndef PREACTION
2152 #define PREACTION(M) (0)
2153 #endif  /* PREACTION */
2154 
2155 #ifndef POSTACTION
2156 #define POSTACTION(M)
2157 #endif  /* POSTACTION */
2158 
2159 #endif /* USE_LOCKS */
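
/*
  Sketch of the pattern the public routines in this file follow:
  PREACTION yields 0 once any required lock is held (initializing
  mparams lazily), and POSTACTION releases it.
*/
#if 0
static void example_locked_operation(mstate m) {
  if (!PREACTION(m)) {
    /* ... operate on m with exclusive access ... */
    POSTACTION(m);
  }
}
#endif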
2160 
2161 /*
2162   CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2163   USAGE_ERROR_ACTION is triggered on detected bad frees and
2164   reallocs. The argument p is an address that might have triggered the
2165   fault. It is ignored by the two predefined actions, but might be
2166   useful in custom actions that try to help diagnose errors.
2167 */
2168 
2169 #if PROCEED_ON_ERROR
2170 
2171 /* A count of the number of corruption errors causing resets */
2172 int malloc_corruption_error_count;
2173 
2174 /* default corruption action */
2175 static void reset_on_error(mstate m);
2176 
2177 #define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
2178 #define USAGE_ERROR_ACTION(m, p)
2179 
2180 #else /* PROCEED_ON_ERROR */
2181 
2182 #ifndef CORRUPTION_ERROR_ACTION
2183 #define CORRUPTION_ERROR_ACTION(m) ABORT
2184 #endif /* CORRUPTION_ERROR_ACTION */
2185 
2186 #ifndef USAGE_ERROR_ACTION
2187 #define USAGE_ERROR_ACTION(m,p) ABORT
2188 #endif /* USAGE_ERROR_ACTION */
2189 
2190 #endif /* PROCEED_ON_ERROR */
2191 
2192 /* -------------------------- Debugging setup ---------------------------- */
2193 
2194 #if ! DEBUG
2195 
2196 #define check_free_chunk(M,P)
2197 #define check_inuse_chunk(M,P)
2198 #define check_malloced_chunk(M,P,N)
2199 #define check_mmapped_chunk(M,P)
2200 #define check_malloc_state(M)
2201 #define check_top_chunk(M,P)
2202 
2203 #else /* DEBUG */
2204 #define check_free_chunk(M,P)       do_check_free_chunk(M,P)
2205 #define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
2206 #define check_top_chunk(M,P)        do_check_top_chunk(M,P)
2207 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2208 #define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
2209 #define check_malloc_state(M)       do_check_malloc_state(M)
2210 
2211 static void   do_check_any_chunk(mstate m, mchunkptr p);
2212 static void   do_check_top_chunk(mstate m, mchunkptr p);
2213 static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
2214 static void   do_check_inuse_chunk(mstate m, mchunkptr p);
2215 static void   do_check_free_chunk(mstate m, mchunkptr p);
2216 static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
2217 static void   do_check_tree(mstate m, tchunkptr t);
2218 static void   do_check_treebin(mstate m, bindex_t i);
2219 static void   do_check_smallbin(mstate m, bindex_t i);
2220 static void   do_check_malloc_state(mstate m);
2221 static int    bin_find(mstate m, mchunkptr x);
2222 static size_t traverse_and_check(mstate m);
2223 #endif /* DEBUG */
2224 
2225 /* ---------------------------- Indexing Bins ---------------------------- */
2226 
2227 #define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2228 #define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
2229 #define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
2230 #define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
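
/*
  Worked example: with SMALLBIN_SHIFT of 3, a 40-byte chunk maps to
  smallbin index 5, and the mapping inverts exactly.
*/
#if 0
static void example_small_index(void) {
  assert(is_small(40));
  assert(small_index(40) == 5);       /* 40 >> 3 */
  assert(small_index2size(5) == 40);  /* 5 << 3 */
}
#endif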
2231 
2232 /* addressing by index. See above about smallbin repositioning */
2233 #define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2234 #define treebin_at(M,i)     (&((M)->treebins[i]))
2235 
2236 /* assign tree index for size S to variable I */
2237 #if defined(__GNUC__) && defined(i386)
2238 #define compute_tree_index(S, I)\
2239 {\
2240   size_t X = S >> TREEBIN_SHIFT;\
2241   if (X == 0)\
2242     I = 0;\
2243   else if (X > 0xFFFF)\
2244     I = NTREEBINS-1;\
2245   else {\
2246     unsigned int K;\
2247     __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
2248     I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2249   }\
2250 }
2251 #else /* GNUC */
2252 #define compute_tree_index(S, I)\
2253 {\
2254   size_t X = S >> TREEBIN_SHIFT;\
2255   if (X == 0)\
2256     I = 0;\
2257   else if (X > 0xFFFF)\
2258     I = NTREEBINS-1;\
2259   else {\
2260     unsigned int Y = (unsigned int)X;\
2261     unsigned int N = ((Y - 0x100) >> 16) & 8;\
2262     unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2263     N += K;\
2264     N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2265     K = 14 - N + ((Y <<= K) >> 15);\
2266     I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2267   }\
2268 }
2269 #endif /* GNUC */
2270 
2271 /* Bit representing maximum resolved size in a treebin at i */
2272 #define bit_for_tree_index(i) \
2273    (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2274 
2275 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2276 #define leftshift_for_tree_index(i) \
2277    ((i == NTREEBINS-1)? 0 : \
2278     ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2279 
2280 /* The size of the smallest chunk held in bin with index i */
2281 #define minsize_for_tree_index(i) \
2282    ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
2283    (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
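
/*
  Worked example (illustrative only): sizes 0x180 and 0x200 land in
  treebins 1 and 2, agreeing with the per-bin minimum sizes computed
  above and with the ranges described in the tree comments earlier.
*/
#if 0
static void example_treebin_index(void) {
  bindex_t i;
  compute_tree_index(0x180, i);   /* upper half of [0x100, 0x200) */
  assert(i == 1 && minsize_for_tree_index(1) == 0x180);
  compute_tree_index(0x200, i);   /* start of the next power of two */
  assert(i == 2 && minsize_for_tree_index(2) == 0x200);
}
#endif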
2284 
2285 
2286 /* ------------------------ Operations on bin maps ----------------------- */
2287 
2288 /* bit corresponding to given index */
2289 #define idx2bit(i)              ((binmap_t)(1) << (i))
2290 
2291 /* Mark/Clear bits with given index */
2292 #define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
2293 #define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
2294 #define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
2295 
2296 #define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
2297 #define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
2298 #define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
2299 
2300 /* index corresponding to given bit */
2301 
2302 #if defined(__GNUC__) && defined(i386)
2303 #define compute_bit2idx(X, I)\
2304 {\
2305   unsigned int J;\
2306   __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
2307   I = (bindex_t)J;\
2308 }
2309 
2310 #else /* GNUC */
2311 #if  USE_BUILTIN_FFS
2312 #define compute_bit2idx(X, I) I = ffs(X)-1
2313 
2314 #else /* USE_BUILTIN_FFS */
2315 #define compute_bit2idx(X, I)\
2316 {\
2317   unsigned int Y = X - 1;\
2318   unsigned int K = Y >> (16-4) & 16;\
2319   unsigned int N = K;        Y >>= K;\
2320   N += K = Y >> (8-3) &  8;  Y >>= K;\
2321   N += K = Y >> (4-2) &  4;  Y >>= K;\
2322   N += K = Y >> (2-1) &  2;  Y >>= K;\
2323   N += K = Y >> (1-0) &  1;  Y >>= K;\
2324   I = (bindex_t)(N + Y);\
2325 }
2326 #endif /* USE_BUILTIN_FFS */
2327 #endif /* GNUC */
2328 
2329 /* isolate the least set bit of a bitmap */
2330 #define least_bit(x)         ((x) & -(x))
2331 
2332 /* mask with all bits to left of least bit of x on */
2333 #define left_bits(x)         ((x<<1) | -(x<<1))
2334 
2335 /* mask with all bits to left of or equal to least bit of x on */
2336 #define same_or_left_bits(x) ((x) | -(x))
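
/*
  Worked example: with bins 3 and 5 non-empty, "first non-empty bin at
  or above index 4" reduces to three branch-free bit operations.
*/
#if 0
static void example_binmap_search(void) {
  binmap_t map = idx2bit(3) | idx2bit(5);               /* 0b101000 */
  binmap_t ge  = map & same_or_left_bits(idx2bit(4));   /* bins >= 4 */
  bindex_t i;
  compute_bit2idx(least_bit(ge), i);
  assert(i == 5);
}
#endif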
2337 
2338 
2339 /* ----------------------- Runtime Check Support ------------------------- */
2340 
2341 /*
2342   For security, the main invariant is that malloc/free/etc never
2343   writes to a static address other than malloc_state, unless static
2344   malloc_state itself has been corrupted, which cannot occur via
2345   malloc (because of these checks). In essence this means that we
2346   believe all pointers, sizes, maps etc held in malloc_state, but
2347   check all of those linked or offset from other embedded data
2348   structures.  These checks are interspersed with main code in a way
2349   that tends to minimize their run-time cost.
2350 
2351   When FOOTERS is defined, in addition to range checking, we also
2352   verify footer fields of inuse chunks, which can be used to guarantee
2353   that the mstate controlling malloc/free is intact.  This is a
2354   streamlined version of the approach described by William Robertson
2355   et al in "Run-time Detection of Heap-based Overflows" LISA'03
2356   http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2357   of an inuse chunk holds the xor of its mstate and a random seed,
2358   that is checked upon calls to free() and realloc().  This is
2359   (probabilistically) unguessable from outside the program, but can be
2360   computed by any code successfully malloc'ing any chunk, so does not
2361   itself provide protection against code that has already broken
2362   security through some other means.  Unlike Robertson et al, we
2363   always dynamically check addresses of all offset chunks (previous,
2364   next, etc). This turns out to be cheaper than relying on hashes.
2365 */
2366 
2367 #if !INSECURE
2368 /* Check if address a is at least as high as any from MORECORE or MMAP */
2369 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2370 /* Check if address of next chunk n is higher than base chunk p */
2371 #define ok_next(p, n)    ((char*)(p) < (char*)(n))
2372 /* Check if p has its cinuse bit on */
2373 #define ok_cinuse(p)     cinuse(p)
2374 /* Check if p has its pinuse bit on */
2375 #define ok_pinuse(p)     pinuse(p)
2376 
2377 #else /* !INSECURE */
2378 #define ok_address(M, a) (1)
2379 #define ok_next(b, n)    (1)
2380 #define ok_cinuse(p)     (1)
2381 #define ok_pinuse(p)     (1)
2382 #endif /* !INSECURE */
2383 
2384 #if (FOOTERS && !INSECURE)
2385 /* Check if (alleged) mstate m has expected magic field */
2386 #define ok_magic(M)      ((M)->magic == mparams.magic)
2387 #else  /* (FOOTERS && !INSECURE) */
2388 #define ok_magic(M)      (1)
2389 #endif /* (FOOTERS && !INSECURE) */
2390 
2391 
2392 /* In gcc, use __builtin_expect to minimize impact of checks */
2393 #if !INSECURE
2394 #if defined(__GNUC__) && __GNUC__ >= 3
2395 #define RTCHECK(e)  __builtin_expect(e, 1)
2396 #else /* GNUC */
2397 #define RTCHECK(e)  (e)
2398 #endif /* GNUC */
2399 #else /* !INSECURE */
2400 #define RTCHECK(e)  (1)
2401 #endif /* !INSECURE */
2402 
2403 /* macros to set up inuse chunks with or without footers */
2404 
2405 #if !FOOTERS
2406 
2407 #define mark_inuse_foot(M,p,s)
2408 
2409 /* Set cinuse bit and pinuse bit of next chunk */
2410 #define set_inuse(M,p,s)\
2411   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2412   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2413 
2414 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2415 #define set_inuse_and_pinuse(M,p,s)\
2416   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2417   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2418 
2419 /* Set size, cinuse and pinuse bit of this chunk */
2420 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2421   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2422 
2423 #else /* FOOTERS */
2424 
2425 /* Set foot of inuse chunk to be xor of mstate and seed */
2426 #define mark_inuse_foot(M,p,s)\
2427   (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2428 
2429 #define get_mstate_for(p)\
2430   ((mstate)(((mchunkptr)((char*)(p) +\
2431     (chunksize(p))))->prev_foot ^ mparams.magic))
2432 
2433 #define set_inuse(M,p,s)\
2434   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2435   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2436   mark_inuse_foot(M,p,s))
2437 
2438 #define set_inuse_and_pinuse(M,p,s)\
2439   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2440   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
2441  mark_inuse_foot(M,p,s))
2442 
2443 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2444   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2445   mark_inuse_foot(M, p, s))
2446 
2447 #endif /* !FOOTERS */
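
/*
  Illustrative round trip (assumes FOOTERS and an inuse, non-mmapped
  chunk p owned by mstate m): the footer stores m xor'ed with the
  random magic, so free/realloc can recover and vet the owning mstate.
*/
#if 0
static void example_footer_check(mstate m, mchunkptr p) {
  mark_inuse_foot(m, p, chunksize(p));
  assert(get_mstate_for(p) == m);
  assert(ok_magic(get_mstate_for(p)));
}
#endif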
2448 
2449 /* ---------------------------- setting mparams -------------------------- */
2450 
2451 /* Initialize mparams */
2452 static int init_mparams(void) {
2453   if (mparams.page_size == 0) {
2454     size_t s;
2455 
2456     mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2457     mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
2458 #if MORECORE_CONTIGUOUS
2459     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
2460 #else  /* MORECORE_CONTIGUOUS */
2461     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
2462 #endif /* MORECORE_CONTIGUOUS */
2463 
2464 #if (FOOTERS && !INSECURE)
2465     {
2466 #if USE_DEV_RANDOM
2467       int fd;
2468       unsigned char buf[sizeof(size_t)];
2469       /* Try to use /dev/urandom, else fall back on using time */
2470       if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
2471           read(fd, buf, sizeof(buf)) == sizeof(buf)) {
2472         s = *((size_t *) buf);
2473         close(fd);
2474       }
2475       else
2476 #endif /* USE_DEV_RANDOM */
2477         s = (size_t)(time(0) ^ (size_t)0x55555555U);
2478 
2479       s |= (size_t)8U;    /* ensure nonzero */
2480       s &= ~(size_t)7U;   /* improve chances of fault for bad values */
2481 
2482     }
2483 #else /* (FOOTERS && !INSECURE) */
2484     s = (size_t)0x58585858U;
2485 #endif /* (FOOTERS && !INSECURE) */
2486     ACQUIRE_MAGIC_INIT_LOCK();
2487     if (mparams.magic == 0) {
2488       mparams.magic = s;
2489       /* Set up lock for main malloc area */
2490       INITIAL_LOCK(&gm->mutex);
2491       gm->mflags = mparams.default_mflags;
2492     }
2493     RELEASE_MAGIC_INIT_LOCK();
2494 
2495 #ifndef WIN32
2496     mparams.page_size = malloc_getpagesize;
2497     mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
2498                            DEFAULT_GRANULARITY : mparams.page_size);
2499 #else /* WIN32 */
2500     {
2501       SYSTEM_INFO system_info;
2502       GetSystemInfo(&system_info);
2503       mparams.page_size = system_info.dwPageSize;
2504       mparams.granularity = system_info.dwAllocationGranularity;
2505     }
2506 #endif /* WIN32 */
2507 
2508     /* Sanity-check configuration:
2509        size_t must be unsigned and as wide as pointer type.
2510        ints must be at least 4 bytes.
2511        alignment must be at least 8.
2512        Alignment, min chunk size, and page size must all be powers of 2.
2513     */
2514     if ((sizeof(size_t) != sizeof(char*)) ||
2515         (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
2516         (sizeof(int) < 4)  ||
2517         (MALLOC_ALIGNMENT < (size_t)8U) ||
2518         ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
2519         ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
2520         ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
2521         ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
2522       ABORT;
2523   }
2524   return 0;
2525 }
2526 
2527 /* support for mallopt */
2528 static int change_mparam(int param_number, int value) {
2529   size_t val = (size_t)value;
2530   init_mparams();
2531   switch(param_number) {
2532   case M_TRIM_THRESHOLD:
2533     mparams.trim_threshold = val;
2534     return 1;
2535   case M_GRANULARITY:
2536     if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
2537       mparams.granularity = val;
2538       return 1;
2539     }
2540     else
2541       return 0;
2542   case M_MMAP_THRESHOLD:
2543     mparams.mmap_threshold = val;
2544     return 1;
2545   default:
2546     return 0;
2547   }
2548 }
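
/*
  Usage sketch: mallopt (defined later, possibly as dlmallopt
  depending on naming options) forwards here, so raising the
  allocation granularity to 64K looks like this; the value must be a
  power of two no smaller than the page size or the call fails.
*/
#if 0
static void example_tune(void) {
  mallopt(M_GRANULARITY, 64 * 1024);
}
#endif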
2549 
2550 #if DEBUG
2551 /* ------------------------- Debugging Support --------------------------- */
2552 
2553 /* Check properties of any chunk, whether free, inuse, mmapped etc  */
2554 static void do_check_any_chunk(mstate m, mchunkptr p) {
2555   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2556   assert(ok_address(m, p));
2557 }
2558 
2559 /* Check properties of top chunk */
2560 static void do_check_top_chunk(mstate m, mchunkptr p) {
2561   msegmentptr sp = segment_holding(m, (char*)p);
2562   size_t  sz = chunksize(p);
2563   assert(sp != 0);
2564   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2565   assert(ok_address(m, p));
2566   assert(sz == m->topsize);
2567   assert(sz > 0);
2568   assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
2569   assert(pinuse(p));
2570   assert(!next_pinuse(p));
2571 }
2572 
2573 /* Check properties of (inuse) mmapped chunks */
2574 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
2575   size_t  sz = chunksize(p);
2576   size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
2577   assert(is_mmapped(p));
2578   assert(use_mmap(m));
2579   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2580   assert(ok_address(m, p));
2581   assert(!is_small(sz));
2582   assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
2583   assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
2584   assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
2585 }
2586 
2587 /* Check properties of inuse chunks */
2588 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
2589   do_check_any_chunk(m, p);
2590   assert(cinuse(p));
2591   assert(next_pinuse(p));
2592   /* If not pinuse and not mmapped, previous chunk has OK offset */
2593   assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
2594   if (is_mmapped(p))
2595     do_check_mmapped_chunk(m, p);
2596 }
2597 
2598 /* Check properties of free chunks */
2599 static void do_check_free_chunk(mstate m, mchunkptr p) {
2600   size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2601   mchunkptr next = chunk_plus_offset(p, sz);
2602   do_check_any_chunk(m, p);
2603   assert(!cinuse(p));
2604   assert(!next_pinuse(p));
2605   assert (!is_mmapped(p));
2606   if (p != m->dv && p != m->top) {
2607     if (sz >= MIN_CHUNK_SIZE) {
2608       assert((sz & CHUNK_ALIGN_MASK) == 0);
2609       assert(is_aligned(chunk2mem(p)));
2610       assert(next->prev_foot == sz);
2611       assert(pinuse(p));
2612       assert (next == m->top || cinuse(next));
2613       assert(p->fd->bk == p);
2614       assert(p->bk->fd == p);
2615     }
2616     else  /* markers are always of size SIZE_T_SIZE */
2617       assert(sz == SIZE_T_SIZE);
2618   }
2619 }
2620 
2621 /* Check properties of malloced chunks at the point they are malloced */
2622 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
2623   if (mem != 0) {
2624     mchunkptr p = mem2chunk(mem);
2625     size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2626     do_check_inuse_chunk(m, p);
2627     assert((sz & CHUNK_ALIGN_MASK) == 0);
2628     assert(sz >= MIN_CHUNK_SIZE);
2629     assert(sz >= s);
2630     /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
2631     assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
2632   }
2633 }
2634 
2635 /* Check a tree and its subtrees.  */
2636 static void do_check_tree(mstate m, tchunkptr t) {
2637   tchunkptr head = 0;
2638   tchunkptr u = t;
2639   bindex_t tindex = t->index;
2640   size_t tsize = chunksize(t);
2641   bindex_t idx;
2642   compute_tree_index(tsize, idx);
2643   assert(tindex == idx);
2644   assert(tsize >= MIN_LARGE_SIZE);
2645   assert(tsize >= minsize_for_tree_index(idx));
2646   assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
2647 
2648   do { /* traverse through chain of same-sized nodes */
2649     do_check_any_chunk(m, ((mchunkptr)u));
2650     assert(u->index == tindex);
2651     assert(chunksize(u) == tsize);
2652     assert(!cinuse(u));
2653     assert(!next_pinuse(u));
2654     assert(u->fd->bk == u);
2655     assert(u->bk->fd == u);
2656     if (u->parent == 0) {
2657       assert(u->child[0] == 0);
2658       assert(u->child[1] == 0);
2659     }
2660     else {
2661       assert(head == 0); /* only one node on chain has parent */
2662       head = u;
2663       assert(u->parent != u);
2664       assert (u->parent->child[0] == u ||
2665               u->parent->child[1] == u ||
2666               *((tbinptr*)(u->parent)) == u);
2667       if (u->child[0] != 0) {
2668         assert(u->child[0]->parent == u);
2669         assert(u->child[0] != u);
2670         do_check_tree(m, u->child[0]);
2671       }
2672       if (u->child[1] != 0) {
2673         assert(u->child[1]->parent == u);
2674         assert(u->child[1] != u);
2675         do_check_tree(m, u->child[1]);
2676       }
2677       if (u->child[0] != 0 && u->child[1] != 0) {
2678         assert(chunksize(u->child[0]) < chunksize(u->child[1]));
2679       }
2680     }
2681     u = u->fd;
2682   } while (u != t);
2683   assert(head != 0);
2684 }
2685 
2686 /*  Check all the chunks in a treebin.  */
2687 static void do_check_treebin(mstate m, bindex_t i) {
2688   tbinptr* tb = treebin_at(m, i);
2689   tchunkptr t = *tb;
2690   int empty = (m->treemap & (1U << i)) == 0;
2691   if (t == 0)
2692     assert(empty);
2693   if (!empty)
2694     do_check_tree(m, t);
2695 }
2696 
2697 /*  Check all the chunks in a smallbin.  */
2698 static void do_check_smallbin(mstate m, bindex_t i) {
2699   sbinptr b = smallbin_at(m, i);
2700   mchunkptr p = b->bk;
2701   unsigned int empty = (m->smallmap & (1U << i)) == 0;
2702   if (p == b)
2703     assert(empty);
2704   if (!empty) {
2705     for (; p != b; p = p->bk) {
2706       size_t size = chunksize(p);
2707       mchunkptr q;
2708       /* each chunk claims to be free */
2709       do_check_free_chunk(m, p);
2710       /* chunk belongs in bin */
2711       assert(small_index(size) == i);
2712       assert(p->bk == b || chunksize(p->bk) == chunksize(p));
2713       /* chunk is followed by an inuse chunk */
2714       q = next_chunk(p);
2715       if (q->head != FENCEPOST_HEAD)
2716         do_check_inuse_chunk(m, q);
2717     }
2718   }
2719 }
2720 
2721 /* Find x in a bin. Used in other check functions. */
2722 static int bin_find(mstate m, mchunkptr x) {
2723   size_t size = chunksize(x);
2724   if (is_small(size)) {
2725     bindex_t sidx = small_index(size);
2726     sbinptr b = smallbin_at(m, sidx);
2727     if (smallmap_is_marked(m, sidx)) {
2728       mchunkptr p = b;
2729       do {
2730         if (p == x)
2731           return 1;
2732       } while ((p = p->fd) != b);
2733     }
2734   }
2735   else {
2736     bindex_t tidx;
2737     compute_tree_index(size, tidx);
2738     if (treemap_is_marked(m, tidx)) {
2739       tchunkptr t = *treebin_at(m, tidx);
2740       size_t sizebits = size << leftshift_for_tree_index(tidx);
2741       while (t != 0 && chunksize(t) != size) {
2742         t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2743         sizebits <<= 1;
2744       }
2745       if (t != 0) {
2746         tchunkptr u = t;
2747         do {
2748           if (u == (tchunkptr)x)
2749             return 1;
2750         } while ((u = u->fd) != t);
2751       }
2752     }
2753   }
2754   return 0;
2755 }
2756 
2757 /* Traverse each chunk and check it; return total */
2758 static size_t traverse_and_check(mstate m) {
2759   size_t sum = 0;
2760   if (is_initialized(m)) {
2761     msegmentptr s = &m->seg;
2762     sum += m->topsize + TOP_FOOT_SIZE;
2763     while (s != 0) {
2764       mchunkptr q = align_as_chunk(s->base);
2765       mchunkptr lastq = 0;
2766       assert(pinuse(q));
2767       while (segment_holds(s, q) &&
2768              q != m->top && q->head != FENCEPOST_HEAD) {
2769         sum += chunksize(q);
2770         if (cinuse(q)) {
2771           assert(!bin_find(m, q));
2772           do_check_inuse_chunk(m, q);
2773         }
2774         else {
2775           assert(q == m->dv || bin_find(m, q));
2776           assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
2777           do_check_free_chunk(m, q);
2778         }
2779         lastq = q;
2780         q = next_chunk(q);
2781       }
2782       s = s->next;
2783     }
2784   }
2785   return sum;
2786 }
2787 
2788 /* Check all properties of malloc_state. */
2789 static void do_check_malloc_state(mstate m) {
2790   bindex_t i;
2791   size_t total;
2792   /* check bins */
2793   for (i = 0; i < NSMALLBINS; ++i)
2794     do_check_smallbin(m, i);
2795   for (i = 0; i < NTREEBINS; ++i)
2796     do_check_treebin(m, i);
2797 
2798   if (m->dvsize != 0) { /* check dv chunk */
2799     do_check_any_chunk(m, m->dv);
2800     assert(m->dvsize == chunksize(m->dv));
2801     assert(m->dvsize >= MIN_CHUNK_SIZE);
2802     assert(bin_find(m, m->dv) == 0);
2803   }
2804 
2805   if (m->top != 0) {   /* check top chunk */
2806     do_check_top_chunk(m, m->top);
2807     assert(m->topsize == chunksize(m->top));
2808     assert(m->topsize > 0);
2809     assert(bin_find(m, m->top) == 0);
2810   }
2811 
2812   total = traverse_and_check(m);
2813   assert(total <= m->footprint);
2814   assert(m->footprint <= m->max_footprint);
2815 }
2816 #endif /* DEBUG */
2817 
2818 /* ----------------------------- statistics ------------------------------ */
2819 
2820 #if !NO_MALLINFO
2821 static struct mallinfo internal_mallinfo(mstate m) {
2822   struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
2823   if (!PREACTION(m)) {
2824     check_malloc_state(m);
2825     if (is_initialized(m)) {
2826       size_t nfree = SIZE_T_ONE; /* top always free */
2827       size_t mfree = m->topsize + TOP_FOOT_SIZE;
2828       size_t sum = mfree;
2829       msegmentptr s = &m->seg;
2830       while (s != 0) {
2831         mchunkptr q = align_as_chunk(s->base);
2832         while (segment_holds(s, q) &&
2833                q != m->top && q->head != FENCEPOST_HEAD) {
2834           size_t sz = chunksize(q);
2835           sum += sz;
2836           if (!cinuse(q)) {
2837             mfree += sz;
2838             ++nfree;
2839           }
2840           q = next_chunk(q);
2841         }
2842         s = s->next;
2843       }
2844 
2845       nm.arena    = sum;
2846       nm.ordblks  = nfree;
2847       nm.hblkhd   = m->footprint - sum;
2848       nm.usmblks  = m->max_footprint;
2849       nm.uordblks = m->footprint - mfree;
2850       nm.fordblks = mfree;
2851       nm.keepcost = m->topsize;
2852     }
2853 
2854     POSTACTION(m);
2855   }
2856   return nm;
2857 }
2858 #endif /* !NO_MALLINFO */
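
/*
  Caller-side sketch (assuming the dl-prefixed public name dlmallinfo
  and !NO_MALLINFO): the fields are filled exactly as assigned above.

    struct mallinfo mi = dlmallinfo();
    // mi.arena    -- bytes held in segments (sum above)
    // mi.uordblks -- bytes in use (footprint minus free space)
    // mi.fordblks -- total free bytes, including top
    printf("in use: %lu free: %lu\n",
           (unsigned long)mi.uordblks, (unsigned long)mi.fordblks);
*/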
2859 
2860 static void internal_malloc_stats(mstate m) {
2861   if (!PREACTION(m)) {
2862     size_t maxfp = 0;
2863     size_t fp = 0;
2864     size_t used = 0;
2865     check_malloc_state(m);
2866     if (is_initialized(m)) {
2867       msegmentptr s = &m->seg;
2868       maxfp = m->max_footprint;
2869       fp = m->footprint;
2870       used = fp - (m->topsize + TOP_FOOT_SIZE);
2871 
2872       while (s != 0) {
2873         mchunkptr q = align_as_chunk(s->base);
2874         while (segment_holds(s, q) &&
2875                q != m->top && q->head != FENCEPOST_HEAD) {
2876           if (!cinuse(q))
2877             used -= chunksize(q);
2878           q = next_chunk(q);
2879         }
2880         s = s->next;
2881       }
2882     }
2883 
2884     fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
2885     fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
2886     fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
2887 
2888     POSTACTION(m);
2889   }
2890 }
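
/*
  The corresponding public entry point (dlmalloc_stats by default) takes
  no arguments and simply prints the three totals computed above to
  stderr:

    dlmalloc_stats();
    // max system bytes = ...
    // system bytes     = ...
    // in use bytes     = ...
*/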
2891 
2892 /* ----------------------- Operations on smallbins ----------------------- */
2893 
2894 /*
2895   Various forms of linking and unlinking are defined as macros.  Even
2896   the ones for trees, which are very long but have very short typical
2897   paths.  This is ugly but reduces reliance on inlining support of
2898   compilers.
2899 */
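
/*
  The smallbin links below are circular doubly-linked lists whose bin
  header acts as a sentinel. A standalone sketch of the splice done by
  insert_small_chunk (hypothetical node type, corruption checks omitted):

    struct node { struct node *fd, *bk; };
    static void push_front(struct node* b, struct node* p) {
      struct node* f = b->fd;  // old first node, or b itself if empty
      b->fd = p;  f->bk = p;   // splice p in just after the header
      p->fd = f;  p->bk = b;
    }
*/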
2900 
2901 /* Link a free chunk into a smallbin  */
2902 #define insert_small_chunk(M, P, S) {\
2903   bindex_t I  = small_index(S);\
2904   mchunkptr B = smallbin_at(M, I);\
2905   mchunkptr F = B;\
2906   assert(S >= MIN_CHUNK_SIZE);\
2907   if (!smallmap_is_marked(M, I))\
2908     mark_smallmap(M, I);\
2909   else if (RTCHECK(ok_address(M, B->fd)))\
2910     F = B->fd;\
2911   else {\
2912     CORRUPTION_ERROR_ACTION(M);\
2913   }\
2914   B->fd = P;\
2915   F->bk = P;\
2916   P->fd = F;\
2917   P->bk = B;\
2918 }
2919 
2920 /* Unlink a chunk from a smallbin  */
2921 #define unlink_small_chunk(M, P, S) {\
2922   mchunkptr F = P->fd;\
2923   mchunkptr B = P->bk;\
2924   bindex_t I = small_index(S);\
2925   assert(P != B);\
2926   assert(P != F);\
2927   assert(chunksize(P) == small_index2size(I));\
2928   if (F == B)\
2929     clear_smallmap(M, I);\
2930   else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
2931                    (B == smallbin_at(M,I) || ok_address(M, B)))) {\
2932     F->bk = B;\
2933     B->fd = F;\
2934   }\
2935   else {\
2936     CORRUPTION_ERROR_ACTION(M);\
2937   }\
2938 }
2939 
2940 /* Unlink the first chunk from a smallbin */
2941 #define unlink_first_small_chunk(M, B, P, I) {\
2942   mchunkptr F = P->fd;\
2943   assert(P != B);\
2944   assert(P != F);\
2945   assert(chunksize(P) == small_index2size(I));\
2946   if (B == F)\
2947     clear_smallmap(M, I);\
2948   else if (RTCHECK(ok_address(M, F))) {\
2949     B->fd = F;\
2950     F->bk = B;\
2951   }\
2952   else {\
2953     CORRUPTION_ERROR_ACTION(M);\
2954   }\
2955 }
2956 
2957 /* Replace dv node, binning the old one */
2958 /* Used only when dvsize known to be small */
2959 #define replace_dv(M, P, S) {\
2960   size_t DVS = M->dvsize;\
2961   if (DVS != 0) {\
2962     mchunkptr DV = M->dv;\
2963     assert(is_small(DVS));\
2964     insert_small_chunk(M, DV, DVS);\
2965   }\
2966   M->dvsize = S;\
2967   M->dv = P;\
2968 }
2969 
2970 /* ------------------------- Operations on trees ------------------------- */
2971 
2972 /* Insert chunk into tree */
2973 #define insert_large_chunk(M, X, S) {\
2974   tbinptr* H;\
2975   bindex_t I;\
2976   compute_tree_index(S, I);\
2977   H = treebin_at(M, I);\
2978   X->index = I;\
2979   X->child[0] = X->child[1] = 0;\
2980   if (!treemap_is_marked(M, I)) {\
2981     mark_treemap(M, I);\
2982     *H = X;\
2983     X->parent = (tchunkptr)H;\
2984     X->fd = X->bk = X;\
2985   }\
2986   else {\
2987     tchunkptr T = *H;\
2988     size_t K = S << leftshift_for_tree_index(I);\
2989     for (;;) {\
2990       if (chunksize(T) != S) {\
2991         tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
2992         K <<= 1;\
2993         if (*C != 0)\
2994           T = *C;\
2995         else if (RTCHECK(ok_address(M, C))) {\
2996           *C = X;\
2997           X->parent = T;\
2998           X->fd = X->bk = X;\
2999           break;\
3000         }\
3001         else {\
3002           CORRUPTION_ERROR_ACTION(M);\
3003           break;\
3004         }\
3005       }\
3006       else {\
3007         tchunkptr F = T->fd;\
3008         if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3009           T->fd = F->bk = X;\
3010           X->fd = F;\
3011           X->bk = T;\
3012           X->parent = 0;\
3013           break;\
3014         }\
3015         else {\
3016           CORRUPTION_ERROR_ACTION(M);\
3017           break;\
3018         }\
3019       }\
3020     }\
3021   }\
3022 }
3023 
3024 /*
3025   Unlink steps:
3026 
3027   1. If x is a chained node, unlink it from its same-sized fd/bk links
3028      and choose its bk node as its replacement.
3029   2. If x was the last node of its size, but not a leaf node, it must
3030      be replaced with a leaf node (not merely one with an open left or
3031      right), to make sure that lefts and rights of descendants
3032      correspond properly to bit masks.  We use the rightmost descendant
3033      of x.  We could use any other leaf, but this is easy to locate and
3034      tends to counteract removal of leftmosts elsewhere, and so keeps
3035      paths shorter than minimally guaranteed.  This doesn't loop much
3036      because on average a node in a tree is near the bottom.
3037   3. If x is the base of a chain (i.e., has parent links) relink
3038      x's parent and children to x's replacement (or null if none).
3039 */
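
/*
  Step 2 in isolation (a sketch with a hypothetical node type): the
  replacement search prefers right children, else left, until it
  reaches a leaf, mirroring the CP loop in unlink_large_chunk below.

    struct tnode { struct tnode* child[2]; };
    static struct tnode* rightmost_leaf(struct tnode* r) {
      for (;;) {
        if      (r->child[1] != 0) r = r->child[1];
        else if (r->child[0] != 0) r = r->child[0];
        else                       return r;
      }
    }
*/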
3040 
3041 #define unlink_large_chunk(M, X) {\
3042   tchunkptr XP = X->parent;\
3043   tchunkptr R;\
3044   if (X->bk != X) {\
3045     tchunkptr F = X->fd;\
3046     R = X->bk;\
3047     if (RTCHECK(ok_address(M, F))) {\
3048       F->bk = R;\
3049       R->fd = F;\
3050     }\
3051     else {\
3052       CORRUPTION_ERROR_ACTION(M);\
3053     }\
3054   }\
3055   else {\
3056     tchunkptr* RP;\
3057     if (((R = *(RP = &(X->child[1]))) != 0) ||\
3058         ((R = *(RP = &(X->child[0]))) != 0)) {\
3059       tchunkptr* CP;\
3060       while ((*(CP = &(R->child[1])) != 0) ||\
3061              (*(CP = &(R->child[0])) != 0)) {\
3062         R = *(RP = CP);\
3063       }\
3064       if (RTCHECK(ok_address(M, RP)))\
3065         *RP = 0;\
3066       else {\
3067         CORRUPTION_ERROR_ACTION(M);\
3068       }\
3069     }\
3070   }\
3071   if (XP != 0) {\
3072     tbinptr* H = treebin_at(M, X->index);\
3073     if (X == *H) {\
3074       if ((*H = R) == 0) \
3075         clear_treemap(M, X->index);\
3076     }\
3077     else if (RTCHECK(ok_address(M, XP))) {\
3078       if (XP->child[0] == X) \
3079         XP->child[0] = R;\
3080       else \
3081         XP->child[1] = R;\
3082     }\
3083     else\
3084       CORRUPTION_ERROR_ACTION(M);\
3085     if (R != 0) {\
3086       if (RTCHECK(ok_address(M, R))) {\
3087         tchunkptr C0, C1;\
3088         R->parent = XP;\
3089         if ((C0 = X->child[0]) != 0) {\
3090           if (RTCHECK(ok_address(M, C0))) {\
3091             R->child[0] = C0;\
3092             C0->parent = R;\
3093           }\
3094           else\
3095             CORRUPTION_ERROR_ACTION(M);\
3096         }\
3097         if ((C1 = X->child[1]) != 0) {\
3098           if (RTCHECK(ok_address(M, C1))) {\
3099             R->child[1] = C1;\
3100             C1->parent = R;\
3101           }\
3102           else\
3103             CORRUPTION_ERROR_ACTION(M);\
3104         }\
3105       }\
3106       else\
3107         CORRUPTION_ERROR_ACTION(M);\
3108     }\
3109   }\
3110 }
3111 
3112 /* Relays to large vs small bin operations */
3113 
3114 #define insert_chunk(M, P, S)\
3115   if (is_small(S)) insert_small_chunk(M, P, S)\
3116   else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3117 
3118 #define unlink_chunk(M, P, S)\
3119   if (is_small(S)) unlink_small_chunk(M, P, S)\
3120   else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3121 
3122 
3123 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3124 
3125 #if ONLY_MSPACES
3126 #define internal_malloc(m, b) mspace_malloc(m, b)
3127 #define internal_free(m, mem) mspace_free(m,mem);
3128 #else /* ONLY_MSPACES */
3129 #if MSPACES
3130 #define internal_malloc(m, b)\
3131    (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3132 #define internal_free(m, mem)\
3133    if (m == gm) dlfree(mem); else mspace_free(m,mem);
3134 #else /* MSPACES */
3135 #define internal_malloc(m, b) dlmalloc(b)
3136 #define internal_free(m, mem) dlfree(mem)
3137 #endif /* MSPACES */
3138 #endif /* ONLY_MSPACES */
3139 
3140 /* -----------------------  Direct-mmapping chunks ----------------------- */
3141 
3142 /*
3143   Directly mmapped chunks are set up with an offset to the start of
3144   the mmapped region stored in the prev_foot field of the chunk. This
3145   allows reconstruction of the required argument to MUNMAP when freed,
3146   and also allows adjustment of the returned chunk to meet alignment
3147   requirements (especially in memalign).  There is also enough space
3148   allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
3149   the PINUSE bit so frees can be checked.
3150 */
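
/*
  For example, the unmapping arithmetic implied above (the same form
  appears in mmap_resize and in free) recovers the original mapping
  from a chunk p; a sketch:

    size_t offset = p->prev_foot & ~IS_MMAPPED_BIT;
    char*  base   = (char*)p - offset;
    size_t len    = chunksize(p) + offset + MMAP_FOOT_PAD;
    // ... CALL_MUNMAP(base, len) releases the whole region ...
*/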
3151 
3152 /* Malloc using mmap */
3153 static void* mmap_alloc(mstate m, size_t nb) {
3154   size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3155   if (mmsize > nb) {     /* Check for wrap around 0 */
3156     char* mm = (char*)(DIRECT_MMAP(mmsize));
3157     if (mm != CMFAIL) {
3158       size_t offset = align_offset(chunk2mem(mm));
3159       size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3160       mchunkptr p = (mchunkptr)(mm + offset);
3161       p->prev_foot = offset | IS_MMAPPED_BIT;
3162       (p)->head = (psize|CINUSE_BIT);
3163       mark_inuse_foot(m, p, psize);
3164       chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3165       chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3166 
3167       if (mm < m->least_addr)
3168         m->least_addr = mm;
3169       if ((m->footprint += mmsize) > m->max_footprint)
3170         m->max_footprint = m->footprint;
3171       assert(is_aligned(chunk2mem(p)));
3172       check_mmapped_chunk(m, p);
3173       return chunk2mem(p);
3174     }
3175   }
3176   return 0;
3177 }
3178 
3179 /* Realloc using mmap */
3180 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3181   size_t oldsize = chunksize(oldp);
3182   if (is_small(nb)) /* Can't shrink mmap regions below small size */
3183     return 0;
3184   /* Keep old chunk if big enough but not too big */
3185   if (oldsize >= nb + SIZE_T_SIZE &&
3186       (oldsize - nb) <= (mparams.granularity << 1))
3187     return oldp;
3188   else {
3189     size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
3190     size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3191     size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
3192                                          CHUNK_ALIGN_MASK);
3193     char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3194                                   oldmmsize, newmmsize, 1);
3195     if (cp != CMFAIL) {
3196       mchunkptr newp = (mchunkptr)(cp + offset);
3197       size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3198       newp->head = (psize|CINUSE_BIT);
3199       mark_inuse_foot(m, newp, psize);
3200       chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3201       chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3202 
3203       if (cp < m->least_addr)
3204         m->least_addr = cp;
3205       if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3206         m->max_footprint = m->footprint;
3207       check_mmapped_chunk(m, newp);
3208       return newp;
3209     }
3210   }
3211   return 0;
3212 }
3213 
3214 /* -------------------------- mspace management -------------------------- */
3215 
3216 /* Initialize top chunk and its size */
3217 static void init_top(mstate m, mchunkptr p, size_t psize) {
3218   /* Ensure alignment */
3219   size_t offset = align_offset(chunk2mem(p));
3220   p = (mchunkptr)((char*)p + offset);
3221   psize -= offset;
3222 
3223   m->top = p;
3224   m->topsize = psize;
3225   p->head = psize | PINUSE_BIT;
3226   /* set size of fake trailing chunk holding overhead space only once */
3227   chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3228   m->trim_check = mparams.trim_threshold; /* reset on each update */
3229 }
3230 
3231 /* Initialize bins for a new mstate that is otherwise zeroed out */
3232 static void init_bins(mstate m) {
3233   /* Establish circular links for smallbins */
3234   bindex_t i;
3235   for (i = 0; i < NSMALLBINS; ++i) {
3236     sbinptr bin = smallbin_at(m,i);
3237     bin->fd = bin->bk = bin;
3238   }
3239 }
3240 
3241 #if PROCEED_ON_ERROR
3242 
3243 /* default corruption action */
3244 static void reset_on_error(mstate m) {
3245   int i;
3246   ++malloc_corruption_error_count;
3247   /* Reinitialize fields to forget about all memory */
3248   m->smallbins = m->treebins = 0;
3249   m->dvsize = m->topsize = 0;
3250   m->seg.base = 0;
3251   m->seg.size = 0;
3252   m->seg.next = 0;
3253   m->top = m->dv = 0;
3254   for (i = 0; i < NTREEBINS; ++i)
3255     *treebin_at(m, i) = 0;
3256   init_bins(m);
3257 }
3258 #endif /* PROCEED_ON_ERROR */
3259 
3260 /* Allocate a chunk from a new base, merging the remainder into the first chunk of the successor (old) base. */
3261 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3262                            size_t nb) {
3263   mchunkptr p = align_as_chunk(newbase);
3264   mchunkptr oldfirst = align_as_chunk(oldbase);
3265   size_t psize = (char*)oldfirst - (char*)p;
3266   mchunkptr q = chunk_plus_offset(p, nb);
3267   size_t qsize = psize - nb;
3268   set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3269 
3270   assert((char*)oldfirst > (char*)q);
3271   assert(pinuse(oldfirst));
3272   assert(qsize >= MIN_CHUNK_SIZE);
3273 
3274   /* consolidate remainder with first chunk of old base */
3275   if (oldfirst == m->top) {
3276     size_t tsize = m->topsize += qsize;
3277     m->top = q;
3278     q->head = tsize | PINUSE_BIT;
3279     check_top_chunk(m, q);
3280   }
3281   else if (oldfirst == m->dv) {
3282     size_t dsize = m->dvsize += qsize;
3283     m->dv = q;
3284     set_size_and_pinuse_of_free_chunk(q, dsize);
3285   }
3286   else {
3287     if (!cinuse(oldfirst)) {
3288       size_t nsize = chunksize(oldfirst);
3289       unlink_chunk(m, oldfirst, nsize);
3290       oldfirst = chunk_plus_offset(oldfirst, nsize);
3291       qsize += nsize;
3292     }
3293     set_free_with_pinuse(q, qsize, oldfirst);
3294     insert_chunk(m, q, qsize);
3295     check_free_chunk(m, q);
3296   }
3297 
3298   check_malloced_chunk(m, chunk2mem(p), nb);
3299   return chunk2mem(p);
3300 }
3301 
3302 
3303 /* Add a segment to hold a new noncontiguous region */
3304 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3305   /* Determine locations and sizes of segment, fenceposts, old top */
3306   char* old_top = (char*)m->top;
3307   msegmentptr oldsp = segment_holding(m, old_top);
3308   char* old_end = oldsp->base + oldsp->size;
3309   size_t ssize = pad_request(sizeof(struct malloc_segment));
3310   char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3311   size_t offset = align_offset(chunk2mem(rawsp));
3312   char* asp = rawsp + offset;
3313   char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3314   mchunkptr sp = (mchunkptr)csp;
3315   msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3316   mchunkptr tnext = chunk_plus_offset(sp, ssize);
3317   mchunkptr p = tnext;
3318   int nfences = 0;
3319 
3320   /* reset top to new space */
3321   init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3322 
3323   /* Set up segment record */
3324   assert(is_aligned(ss));
3325   set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3326   *ss = m->seg; /* Push current record */
3327   m->seg.base = tbase;
3328   m->seg.size = tsize;
3329   set_segment_flags(&m->seg, mmapped);
3330   m->seg.next = ss;
3331 
3332   /* Insert trailing fenceposts */
3333   for (;;) {
3334     mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3335     p->head = FENCEPOST_HEAD;
3336     ++nfences;
3337     if ((char*)(&(nextp->head)) < old_end)
3338       p = nextp;
3339     else
3340       break;
3341   }
3342   assert(nfences >= 2);
3343 
3344   /* Insert the rest of old top into a bin as an ordinary free chunk */
3345   if (csp != old_top) {
3346     mchunkptr q = (mchunkptr)old_top;
3347     size_t psize = csp - old_top;
3348     mchunkptr tn = chunk_plus_offset(q, psize);
3349     set_free_with_pinuse(q, psize, tn);
3350     insert_chunk(m, q, psize);
3351   }
3352 
3353   check_top_chunk(m, m->top);
3354 }
3355 
3356 /* -------------------------- System allocation -------------------------- */
3357 
3358 /* Get memory from system using MORECORE or MMAP */
3359 static void* sys_alloc(mstate m, size_t nb) {
3360   char* tbase = CMFAIL;
3361   size_t tsize = 0;
3362   flag_t mmap_flag = 0;
3363 
3364   init_mparams();
3365 
3366   /* Directly map large chunks */
3367   if (use_mmap(m) && nb >= mparams.mmap_threshold) {
3368     void* mem = mmap_alloc(m, nb);
3369     if (mem != 0)
3370       return mem;
3371   }
3372 
3373   /*
3374     Try getting memory in any of three ways (in most-preferred to
3375     least-preferred order):
3376     1. A call to MORECORE that can normally contiguously extend memory.
3377        (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE
3378        or main space is mmapped or a previous contiguous call failed)
3379     2. A call to MMAP new space (disabled if not HAVE_MMAP).
3380        Note that under the default settings, if MORECORE is unable to
3381        fulfill a request, and HAVE_MMAP is true, then mmap is
3382        used as a noncontiguous system allocator. This is a useful backup
3383        strategy for systems with holes in address spaces -- in this case
3384        sbrk cannot contiguously expand the heap, but mmap may be able to
3385        find space.
3386     3. A call to MORECORE that cannot usually contiguously extend memory.
3387        (disabled if not HAVE_MORECORE)
3388   */
3389 
3390   if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
3391     char* br = CMFAIL;
3392     msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
3393     size_t asize = 0;
3394     ACQUIRE_MORECORE_LOCK();
3395 
3396     if (ss == 0) {  /* First time through or recovery */
3397       char* base = (char*)CALL_MORECORE(0);
3398       if (base != CMFAIL) {
3399         asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3400         /* Adjust to end on a page boundary */
3401         if (!is_page_aligned(base))
3402           asize += (page_align((size_t)base) - (size_t)base);
3403         /* Can't call MORECORE if size is negative when treated as signed */
3404         if (asize < HALF_MAX_SIZE_T &&
3405             (br = (char*)(CALL_MORECORE(asize))) == base) {
3406           tbase = base;
3407           tsize = asize;
3408         }
3409       }
3410     }
3411     else {
3412       /* Subtract out existing available top space from MORECORE request. */
3413       asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
3414       /* Use mem here only if it did contiguously extend old space */
3415       if (asize < HALF_MAX_SIZE_T &&
3416           (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
3417         tbase = br;
3418         tsize = asize;
3419       }
3420     }
3421 
3422     if (tbase == CMFAIL) {    /* Cope with partial failure */
3423       if (br != CMFAIL) {    /* Try to use/extend the space we did get */
3424         if (asize < HALF_MAX_SIZE_T &&
3425             asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
3426           size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
3427           if (esize < HALF_MAX_SIZE_T) {
3428             char* end = (char*)CALL_MORECORE(esize);
3429             if (end != CMFAIL)
3430               asize += esize;
3431             else {            /* Can't use; try to release */
3432               (void)CALL_MORECORE(-asize);
3433               br = CMFAIL;
3434             }
3435           }
3436         }
3437       }
3438       if (br != CMFAIL) {    /* Use the space we did get */
3439         tbase = br;
3440         tsize = asize;
3441       }
3442       else
3443         disable_contiguous(m); /* Don't try contiguous path in the future */
3444     }
3445 
3446     RELEASE_MORECORE_LOCK();
3447   }
3448 
3449   if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
3450     size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
3451     size_t rsize = granularity_align(req);
3452     if (rsize > nb) { /* Fail if wraps around zero */
3453       char* mp = (char*)(CALL_MMAP(rsize));
3454       if (mp != CMFAIL) {
3455         tbase = mp;
3456         tsize = rsize;
3457         mmap_flag = IS_MMAPPED_BIT;
3458       }
3459     }
3460   }
3461 
3462   if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
3463     size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3464     if (asize < HALF_MAX_SIZE_T) {
3465       char* br = CMFAIL;
3466       char* end = CMFAIL;
3467       ACQUIRE_MORECORE_LOCK();
3468       br = (char*)(CALL_MORECORE(asize));
3469       end = (char*)(CALL_MORECORE(0));
3470       RELEASE_MORECORE_LOCK();
3471       if (br != CMFAIL && end != CMFAIL && br < end) {
3472         size_t ssize = end - br;
3473         if (ssize > nb + TOP_FOOT_SIZE) {
3474           tbase = br;
3475           tsize = ssize;
3476         }
3477       }
3478     }
3479   }
3480 
3481   if (tbase != CMFAIL) {
3482 
3483     if ((m->footprint += tsize) > m->max_footprint)
3484       m->max_footprint = m->footprint;
3485 
3486     if (!is_initialized(m)) { /* first-time initialization */
3487       m->seg.base = m->least_addr = tbase;
3488       m->seg.size = tsize;
3489       set_segment_flags(&m->seg, mmap_flag);
3490       m->magic = mparams.magic;
3491       init_bins(m);
3492       if (is_global(m))
3493         init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3494       else {
3495         /* Offset top by embedded malloc_state */
3496         mchunkptr mn = next_chunk(mem2chunk(m));
3497         init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
3498       }
3499     }
3500 
3501     else {
3502       /* Try to merge with an existing segment */
3503       msegmentptr sp = &m->seg;
3504       while (sp != 0 && tbase != sp->base + sp->size)
3505         sp = sp->next;
3506       if (sp != 0 &&
3507           !is_extern_segment(sp) &&
3508           check_segment_merge(sp, tbase, tsize) &&
3509           (get_segment_flags(sp) & IS_MMAPPED_BIT) == mmap_flag &&
3510           segment_holds(sp, m->top)) { /* append */
3511         sp->size += tsize;
3512         init_top(m, m->top, m->topsize + tsize);
3513       }
3514       else {
3515         if (tbase < m->least_addr)
3516           m->least_addr = tbase;
3517         sp = &m->seg;
3518         while (sp != 0 && sp->base != tbase + tsize)
3519           sp = sp->next;
3520         if (sp != 0 &&
3521             !is_extern_segment(sp) &&
3522             check_segment_merge(sp, tbase, tsize) &&
3523             (get_segment_flags(sp) & IS_MMAPPED_BIT) == mmap_flag) {
3524           char* oldbase = sp->base;
3525           sp->base = tbase;
3526           sp->size += tsize;
3527           return prepend_alloc(m, tbase, oldbase, nb);
3528         }
3529         else
3530           add_segment(m, tbase, tsize, mmap_flag);
3531       }
3532     }
3533 
3534     if (nb < m->topsize) { /* Allocate from new or extended top space */
3535       size_t rsize = m->topsize -= nb;
3536       mchunkptr p = m->top;
3537       mchunkptr r = m->top = chunk_plus_offset(p, nb);
3538       r->head = rsize | PINUSE_BIT;
3539       set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3540       check_top_chunk(m, m->top);
3541       check_malloced_chunk(m, chunk2mem(p), nb);
3542       return chunk2mem(p);
3543     }
3544   }
3545 
3546   MALLOC_FAILURE_ACTION;
3547   return 0;
3548 }
3549 
3550 /* -----------------------  system deallocation -------------------------- */
3551 
3552 /* Unmap and unlink any mmapped segments that don't contain used chunks */
3553 static size_t release_unused_segments(mstate m) {
3554   size_t released = 0;
3555   msegmentptr pred = &m->seg;
3556   msegmentptr sp = pred->next;
3557   while (sp != 0) {
3558     char* base = sp->base;
3559     size_t size = sp->size;
3560     msegmentptr next = sp->next;
3561     if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
3562       mchunkptr p = align_as_chunk(base);
3563       size_t psize = chunksize(p);
3564       /* Can unmap if first chunk holds entire segment and not pinned */
3565       if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
3566         tchunkptr tp = (tchunkptr)p;
3567         assert(segment_holds(sp, (char*)sp));
3568         if (p == m->dv) {
3569           m->dv = 0;
3570           m->dvsize = 0;
3571         }
3572         else {
3573           unlink_large_chunk(m, tp);
3574         }
3575         if (CALL_MUNMAP(base, size) == 0) {
3576           released += size;
3577           m->footprint -= size;
3578           /* unlink obsoleted record */
3579           sp = pred;
3580           sp->next = next;
3581         }
3582         else { /* back out if cannot unmap */
3583           insert_large_chunk(m, tp, psize);
3584         }
3585       }
3586     }
3587     pred = sp;
3588     sp = next;
3589   }
3590   return released;
3591 }
3592 
3593 static int sys_trim(mstate m, size_t pad) {
3594   size_t released = 0;
3595   if (pad < MAX_REQUEST && is_initialized(m)) {
3596     pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
3597 
3598     if (m->topsize > pad) {
3599       /* Shrink top space in granularity-size units, keeping at least one */
3600       size_t unit = mparams.granularity;
3601       size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
3602                       SIZE_T_ONE) * unit;
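      /* Example: with unit = 64K and (topsize - pad) = 200K,
         extra = (ceil(200K/64K) - 1) * 64K = 3 * 64K = 192K,
         so strictly more than pad bytes of top always remain. */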
3603       msegmentptr sp = segment_holding(m, (char*)m->top);
3604 
3605       if (!is_extern_segment(sp)) {
3606         if (is_mmapped_segment(sp)) {
3607           if (HAVE_MMAP &&
3608               sp->size >= extra &&
3609               !has_segment_link(m, sp)) { /* can't shrink if pinned */
3610             size_t newsize = sp->size - extra;
3611             /* Prefer mremap, fall back to munmap */
3612             if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
3613                 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
3614               released = extra;
3615             }
3616           }
3617         }
3618         else if (HAVE_MORECORE) {
3619           if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
3620             extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
3621           ACQUIRE_MORECORE_LOCK();
3622           {
3623             /* Make sure end of memory is where we last set it. */
3624             char* old_br = (char*)(CALL_MORECORE(0));
3625             if (old_br == sp->base + sp->size) {
3626               char* rel_br = (char*)(CALL_MORECORE(-extra));
3627               char* new_br = (char*)(CALL_MORECORE(0));
3628               if (rel_br != CMFAIL && new_br < old_br)
3629                 released = old_br - new_br;
3630             }
3631           }
3632           RELEASE_MORECORE_LOCK();
3633         }
3634       }
3635 
3636       if (released != 0) {
3637         sp->size -= released;
3638         m->footprint -= released;
3639         init_top(m, m->top, m->topsize - released);
3640         check_top_chunk(m, m->top);
3641       }
3642     }
3643 
3644     /* Unmap any unused mmapped segments */
3645     if (HAVE_MMAP)
3646       released += release_unused_segments(m);
3647 
3648     /* On failure, disable autotrim to avoid repeated failed future calls */
3649     if (released == 0)
3650       m->trim_check = MAX_SIZE_T;
3651   }
3652 
3653   return (released != 0)? 1 : 0;
3654 }
3655 
3656 /* ---------------------------- malloc support --------------------------- */
3657 
3658 /* allocate a large request from the best fitting chunk in a treebin */
3659 static void* tmalloc_large(mstate m, size_t nb) {
3660   tchunkptr v = 0;
3661   size_t rsize = -nb; /* Unsigned negation */
3662   tchunkptr t;
3663   bindex_t idx;
3664   compute_tree_index(nb, idx);
3665 
3666   if ((t = *treebin_at(m, idx)) != 0) {
3667     /* Traverse tree for this bin looking for node with size == nb */
3668     size_t sizebits = nb << leftshift_for_tree_index(idx);
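    /* Each descent consumes one bit of nb below the bin's prefix: the
       high bit of sizebits, (sizebits >> (SIZE_T_BITSIZE-1)) & 1,
       picks child[0] or child[1], then sizebits shifts left; e.g.
       with 32-bit size_t, sizebits = 0x80000000 selects child[1]. */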
3669     tchunkptr rst = 0;  /* The deepest untaken right subtree */
3670     for (;;) {
3671       tchunkptr rt;
3672       size_t trem = chunksize(t) - nb;
3673       if (trem < rsize) {
3674         v = t;
3675         if ((rsize = trem) == 0)
3676           break;
3677       }
3678       rt = t->child[1];
3679       t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3680       if (rt != 0 && rt != t)
3681         rst = rt;
3682       if (t == 0) {
3683         t = rst; /* set t to least subtree holding sizes > nb */
3684         break;
3685       }
3686       sizebits <<= 1;
3687     }
3688   }
3689 
3690   if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
3691     binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3692     if (leftbits != 0) {
3693       bindex_t i;
3694       binmap_t leastbit = least_bit(leftbits);
3695       compute_bit2idx(leastbit, i);
3696       t = *treebin_at(m, i);
3697     }
3698   }
3699 
3700   while (t != 0) { /* find smallest of tree or subtree */
3701     size_t trem = chunksize(t) - nb;
3702     if (trem < rsize) {
3703       rsize = trem;
3704       v = t;
3705     }
3706     t = leftmost_child(t);
3707   }
3708 
3709   /*  If dv is a better fit, return 0 so malloc will use it */
3710   if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3711     if (RTCHECK(ok_address(m, v))) { /* split */
3712       mchunkptr r = chunk_plus_offset(v, nb);
3713       assert(chunksize(v) == rsize + nb);
3714       if (RTCHECK(ok_next(v, r))) {
3715         unlink_large_chunk(m, v);
3716         if (rsize < MIN_CHUNK_SIZE)
3717           set_inuse_and_pinuse(m, v, (rsize + nb));
3718         else {
3719           set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3720           set_size_and_pinuse_of_free_chunk(r, rsize);
3721           insert_chunk(m, r, rsize);
3722         }
3723         return chunk2mem(v);
3724       }
3725     }
3726     CORRUPTION_ERROR_ACTION(m);
3727   }
3728   return 0;
3729 }
3730 
3731 /* allocate a small request from the best fitting chunk in a treebin */
3732 static void* tmalloc_small(mstate m, size_t nb) {
3733   tchunkptr t, v;
3734   size_t rsize;
3735   bindex_t i;
3736   binmap_t leastbit = least_bit(m->treemap);
3737   compute_bit2idx(leastbit, i);
3738 
3739   v = t = *treebin_at(m, i);
3740   rsize = chunksize(t) - nb;
3741 
3742   while ((t = leftmost_child(t)) != 0) {
3743     size_t trem = chunksize(t) - nb;
3744     if (trem < rsize) {
3745       rsize = trem;
3746       v = t;
3747     }
3748   }
3749 
3750   if (RTCHECK(ok_address(m, v))) {
3751     mchunkptr r = chunk_plus_offset(v, nb);
3752     assert(chunksize(v) == rsize + nb);
3753     if (RTCHECK(ok_next(v, r))) {
3754       unlink_large_chunk(m, v);
3755       if (rsize < MIN_CHUNK_SIZE)
3756         set_inuse_and_pinuse(m, v, (rsize + nb));
3757       else {
3758         set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3759         set_size_and_pinuse_of_free_chunk(r, rsize);
3760         replace_dv(m, r, rsize);
3761       }
3762       return chunk2mem(v);
3763     }
3764   }
3765 
3766   CORRUPTION_ERROR_ACTION(m);
3767   return 0;
3768 }
3769 
3770 /* --------------------------- realloc support --------------------------- */
3771 
3772 static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3773   if (bytes >= MAX_REQUEST) {
3774     MALLOC_FAILURE_ACTION;
3775     return 0;
3776   }
3777   if (!PREACTION(m)) {
3778     mchunkptr oldp = mem2chunk(oldmem);
3779     size_t oldsize = chunksize(oldp);
3780     mchunkptr next = chunk_plus_offset(oldp, oldsize);
3781     mchunkptr newp = 0;
3782     void* extra = 0;
3783 
3784     /* Try to either shrink or extend into top. Else malloc-copy-free */
3785 
3786     if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3787                 ok_next(oldp, next) && ok_pinuse(next))) {
3788       size_t nb = request2size(bytes);
3789       if (is_mmapped(oldp))
3790         newp = mmap_resize(m, oldp, nb);
3791       else if (oldsize >= nb) { /* already big enough */
3792         size_t rsize = oldsize - nb;
3793         newp = oldp;
3794         if (rsize >= MIN_CHUNK_SIZE) {
3795           mchunkptr remainder = chunk_plus_offset(newp, nb);
3796           set_inuse(m, newp, nb);
3797           set_inuse(m, remainder, rsize);
3798           extra = chunk2mem(remainder);
3799         }
3800       }
3801       else if (next == m->top && oldsize + m->topsize > nb) {
3802         /* Expand into top */
3803         size_t newsize = oldsize + m->topsize;
3804         size_t newtopsize = newsize - nb;
3805         mchunkptr newtop = chunk_plus_offset(oldp, nb);
3806         set_inuse(m, oldp, nb);
3807         newtop->head = newtopsize |PINUSE_BIT;
3808         m->top = newtop;
3809         m->topsize = newtopsize;
3810         newp = oldp;
3811       }
3812     }
3813     else {
3814       USAGE_ERROR_ACTION(m, oldmem);
3815       POSTACTION(m);
3816       return 0;
3817     }
3818 
3819     POSTACTION(m);
3820 
3821     if (newp != 0) {
3822       if (extra != 0) {
3823         internal_free(m, extra);
3824       }
3825       check_inuse_chunk(m, newp);
3826       return chunk2mem(newp);
3827     }
3828     else {
3829       void* newmem = internal_malloc(m, bytes);
3830       if (newmem != 0) {
3831         size_t oc = oldsize - overhead_for(oldp);
3832         memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3833         internal_free(m, oldmem);
3834       }
3835       return newmem;
3836     }
3837   }
3838   return 0;
3839 }
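
/*
  Caller-side sketch (assuming the default dl-prefixed name): realloc
  first tries to resize in place, so the result may or may not equal
  the old pointer.

    char* p = dlmalloc(100);
    char* q = dlrealloc(p, 50);       // typically p itself, split in place
    char* r = dlrealloc(q, 1 << 20);  // may move; old bytes are copied
    dlfree(r);
*/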
3840 
3841 /* --------------------------- memalign support -------------------------- */
3842 
3843 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3844   if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
3845     return internal_malloc(m, bytes);
3846   if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3847     alignment = MIN_CHUNK_SIZE;
3848   if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3849     size_t a = MALLOC_ALIGNMENT << 1;
3850     while (a < alignment) a <<= 1;
3851     alignment = a;
3852   }
3853 
3854   if (bytes >= MAX_REQUEST - alignment) {
3855     if (m != 0)  { /* Test isn't needed but avoids compiler warning */
3856       MALLOC_FAILURE_ACTION;
3857     }
3858   }
3859   else {
3860     size_t nb = request2size(bytes);
3861     size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3862     char* mem = (char*)internal_malloc(m, req);
3863     if (mem != 0) {
3864       void* leader = 0;
3865       void* trailer = 0;
3866       mchunkptr p = mem2chunk(mem);
3867 
3868       if (PREACTION(m)) return 0;
3869       if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3870         /*
3871           Find an aligned spot inside chunk.  Since we need to give
3872           back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3873           the first calculation places us at a spot with less than
3874           MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3875           We've allocated enough total room so that this is always
3876           possible.
3877         */
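        /* E.g. alignment = 64, mem = 0x1009: the first aligned address
           is ((0x1009 + 63) & ~63) = 0x1040; if that leaves a leader
           smaller than MIN_CHUNK_SIZE, the next spot 0x1080 is used. */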
3878         char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3879                                                        alignment -
3880                                                        SIZE_T_ONE)) &
3881                                              -alignment));
3882         char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
3883           br : br+alignment;
3884         mchunkptr newp = (mchunkptr)pos;
3885         size_t leadsize = pos - (char*)(p);
3886         size_t newsize = chunksize(p) - leadsize;
3887 
3888         if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
3889           newp->prev_foot = p->prev_foot + leadsize;
3890           newp->head = (newsize|CINUSE_BIT);
3891         }
3892         else { /* Otherwise, give back leader, use the rest */
3893           set_inuse(m, newp, newsize);
3894           set_inuse(m, p, leadsize);
3895           leader = chunk2mem(p);
3896         }
3897         p = newp;
3898       }
3899 
3900       /* Give back spare room at the end */
3901       if (!is_mmapped(p)) {
3902         size_t size = chunksize(p);
3903         if (size > nb + MIN_CHUNK_SIZE) {
3904           size_t remainder_size = size - nb;
3905           mchunkptr remainder = chunk_plus_offset(p, nb);
3906           set_inuse(m, p, nb);
3907           set_inuse(m, remainder, remainder_size);
3908           trailer = chunk2mem(remainder);
3909         }
3910       }
3911 
3912       assert (chunksize(p) >= nb);
3913       assert((((size_t)(chunk2mem(p))) % alignment) == 0);
3914       check_inuse_chunk(m, p);
3915       POSTACTION(m);
3916       if (leader != 0) {
3917         internal_free(m, leader);
3918       }
3919       if (trailer != 0) {
3920         internal_free(m, trailer);
3921       }
3922       return chunk2mem(p);
3923     }
3924   }
3925   return 0;
3926 }
3927 
3928 /* ------------------------ comalloc/coalloc support --------------------- */
3929 
3930 static void** ialloc(mstate m,
3931                      size_t n_elements,
3932                      size_t* sizes,
3933                      int opts,
3934                      void* chunks[]) {
3935   /*
3936     This provides common support for independent_X routines, handling
3937     all of the combinations that can result.
3938 
3939     The opts arg has:
3940     bit 0 set if all elements are same size (using sizes[0])
3941     bit 1 set if elements should be zeroed
3942   */
3943 
3944   size_t    element_size;   /* chunksize of each element, if all same */
3945   size_t    contents_size;  /* total size of elements */
3946   size_t    array_size;     /* request size of pointer array */
3947   void*     mem;            /* malloced aggregate space */
3948   mchunkptr p;              /* corresponding chunk */
3949   size_t    remainder_size; /* remaining bytes while splitting */
3950   void**    marray;         /* either "chunks" or malloced ptr array */
3951   mchunkptr array_chunk;    /* chunk for malloced ptr array */
3952   flag_t    was_enabled;    /* to disable mmap */
3953   size_t    size;
3954   size_t    i;
3955 
3956   /* compute array length, if needed */
3957   if (chunks != 0) {
3958     if (n_elements == 0)
3959       return chunks; /* nothing to do */
3960     marray = chunks;
3961     array_size = 0;
3962   }
3963   else {
3964     /* if empty req, must still return chunk representing empty array */
3965     if (n_elements == 0)
3966       return (void**)internal_malloc(m, 0);
3967     marray = 0;
3968     array_size = request2size(n_elements * (sizeof(void*)));
3969   }
3970 
3971   /* compute total element size */
3972   if (opts & 0x1) { /* all-same-size */
3973     element_size = request2size(*sizes);
3974     contents_size = n_elements * element_size;
3975   }
3976   else { /* add up all the sizes */
3977     element_size = 0;
3978     contents_size = 0;
3979     for (i = 0; i != n_elements; ++i)
3980       contents_size += request2size(sizes[i]);
3981   }
3982 
3983   size = contents_size + array_size;
3984 
3985   /*
3986      Allocate the aggregate chunk.  First disable direct-mmapping so
3987      malloc won't use it, since we would not be able to later
3988      free/realloc space internal to a segregated mmap region.
3989   */
3990   was_enabled = use_mmap(m);
3991   disable_mmap(m);
3992   mem = internal_malloc(m, size - CHUNK_OVERHEAD);
3993   if (was_enabled)
3994     enable_mmap(m);
3995   if (mem == 0)
3996     return 0;
3997 
3998   if (PREACTION(m)) return 0;
3999   p = mem2chunk(mem);
4000   remainder_size = chunksize(p);
4001 
4002   assert(!is_mmapped(p));
4003 
4004   if (opts & 0x2) {       /* optionally clear the elements */
4005     memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
4006   }
4007 
4008   /* If not provided, allocate the pointer array as final part of chunk */
4009   if (marray == 0) {
4010     size_t  array_chunk_size;
4011     array_chunk = chunk_plus_offset(p, contents_size);
4012     array_chunk_size = remainder_size - contents_size;
4013     marray = (void**) (chunk2mem(array_chunk));
4014     set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
4015     remainder_size = contents_size;
4016   }
4017 
4018   /* split out elements */
4019   for (i = 0; ; ++i) {
4020     marray[i] = chunk2mem(p);
4021     if (i != n_elements-1) {
4022       if (element_size != 0)
4023         size = element_size;
4024       else
4025         size = request2size(sizes[i]);
4026       remainder_size -= size;
4027       set_size_and_pinuse_of_inuse_chunk(m, p, size);
4028       p = chunk_plus_offset(p, size);
4029     }
4030     else { /* the final element absorbs any overallocation slop */
4031       set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
4032       break;
4033     }
4034   }
4035 
4036 #if DEBUG
4037   if (marray != chunks) {
4038     /* final element must have exactly exhausted chunk */
4039     if (element_size != 0) {
4040       assert(remainder_size == element_size);
4041     }
4042     else {
4043       assert(remainder_size == request2size(sizes[i]));
4044     }
4045     check_inuse_chunk(m, mem2chunk(marray));
4046   }
4047   for (i = 0; i != n_elements; ++i)
4048     check_inuse_chunk(m, mem2chunk(marray[i]));
4049 
4050 #endif /* DEBUG */
4051 
4052   POSTACTION(m);
4053   return marray;
4054 }
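
/*
  The public wrappers built on ialloc are independent_calloc (same size
  for every element; opts == 3) and independent_comalloc (per-element
  sizes; opts == 0). A usage sketch with the dl-prefixed names:

    size_t sizes[3] = { 16, 100, 8 };
    void*  pool[3];
    if (dlindependent_comalloc(3, sizes, pool) != 0) {
      // each pool[i] holds at least sizes[i] bytes and may later be
      // passed individually to dlfree
    }
*/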
4055 
4056 
4057 /* -------------------------- public routines ---------------------------- */
4058 
4059 #if !ONLY_MSPACES
4060 
4061 void* dlmalloc(size_t bytes) {
4062   /*
4063      Basic algorithm:
4064      If a small request (< 256 bytes minus per-chunk overhead):
4065        1. If one exists, use a remainderless chunk in associated smallbin.
4066           (Remainderless means that there are too few excess bytes to
4067           represent as a chunk.)
4068        2. If it is big enough, use the dv chunk, which is normally the
4069           chunk adjacent to the one used for the most recent small request.
4070        3. If one exists, split the smallest available chunk in a bin,
4071           saving remainder in dv.
4072        4. If it is big enough, use the top chunk.
4073        5. If available, get memory from system and use it
4074      Otherwise, for a large request:
4075        1. Find the smallest available binned chunk that fits, and use it
4076           if it is better fitting than dv chunk, splitting if necessary.
4077        2. If better fitting than any binned chunk, use the dv chunk.
4078        3. If it is big enough, use the top chunk.
4079        4. If request size >= mmap threshold, try to directly mmap this chunk.
4080        5. If available, get memory from system and use it
4081 
4082      The ugly goto's here ensure that postaction occurs along all paths.
4083   */
4084 
4085   if (!PREACTION(gm)) {
4086     void* mem;
4087     size_t nb;
4088     if (bytes <= MAX_SMALL_REQUEST) {
4089       bindex_t idx;
4090       binmap_t smallbits;
4091       nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4092       idx = small_index(nb);
4093       smallbits = gm->smallmap >> idx;
4094 
4095       if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4096         mchunkptr b, p;
4097         idx += ~smallbits & 1;       /* Uses next bin if idx empty */
4098         b = smallbin_at(gm, idx);
4099         p = b->fd;
4100         assert(chunksize(p) == small_index2size(idx));
4101         unlink_first_small_chunk(gm, b, p, idx);
4102         set_inuse_and_pinuse(gm, p, small_index2size(idx));
4103         mem = chunk2mem(p);
4104         check_malloced_chunk(gm, mem, nb);
4105         goto postaction;
4106       }
4107 
4108       else if (nb > gm->dvsize) {
4109         if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4110           mchunkptr b, p, r;
4111           size_t rsize;
4112           bindex_t i;
4113           binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4114           binmap_t leastbit = least_bit(leftbits);
4115           compute_bit2idx(leastbit, i);
4116           b = smallbin_at(gm, i);
4117           p = b->fd;
4118           assert(chunksize(p) == small_index2size(i));
4119           unlink_first_small_chunk(gm, b, p, i);
4120           rsize = small_index2size(i) - nb;
4121           /* Fit here cannot be remainderless if 4byte sizes */
4122           if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4123             set_inuse_and_pinuse(gm, p, small_index2size(i));
4124           else {
4125             set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4126             r = chunk_plus_offset(p, nb);
4127             set_size_and_pinuse_of_free_chunk(r, rsize);
4128             replace_dv(gm, r, rsize);
4129           }
4130           mem = chunk2mem(p);
4131           check_malloced_chunk(gm, mem, nb);
4132           goto postaction;
4133         }
4134 
4135         else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4136           check_malloced_chunk(gm, mem, nb);
4137           goto postaction;
4138         }
4139       }
4140     }
4141     else if (bytes >= MAX_REQUEST)
4142       nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4143     else {
4144       nb = pad_request(bytes);
4145       if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4146         check_malloced_chunk(gm, mem, nb);
4147         goto postaction;
4148       }
4149     }
4150 
4151     if (nb <= gm->dvsize) {
4152       size_t rsize = gm->dvsize - nb;
4153       mchunkptr p = gm->dv;
4154       if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4155         mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4156         gm->dvsize = rsize;
4157         set_size_and_pinuse_of_free_chunk(r, rsize);
4158         set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4159       }
4160       else { /* exhaust dv */
4161         size_t dvs = gm->dvsize;
4162         gm->dvsize = 0;
4163         gm->dv = 0;
4164         set_inuse_and_pinuse(gm, p, dvs);
4165       }
4166       mem = chunk2mem(p);
4167       check_malloced_chunk(gm, mem, nb);
4168       goto postaction;
4169     }
4170 
4171     else if (nb < gm->topsize) { /* Split top */
4172       size_t rsize = gm->topsize -= nb;
4173       mchunkptr p = gm->top;
4174       mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4175       r->head = rsize | PINUSE_BIT;
4176       set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4177       mem = chunk2mem(p);
4178       check_top_chunk(gm, gm->top);
4179       check_malloced_chunk(gm, mem, nb);
4180       goto postaction;
4181     }
4182 
4183     mem = sys_alloc(gm, nb);
4184 
4185   postaction:
4186     POSTACTION(gm);
4187     return mem;
4188   }
4189 
4190   return 0;
4191 }
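
/*
  A minimal usage sketch (not part of the compiled code; the function
  name "demo" is illustrative only).  The dl-prefixed entry points
  behave like their ANSI counterparts, so a caller exercising the
  smallbin path and the mmap path above might look like:

    #include <stdio.h>

    int demo(void) {
      void* small = dlmalloc(100);          // served from smallbin/dv/top
      void* big   = dlmalloc(512 * 1024);   // likely >= mmap threshold
      if (small == 0 || big == 0)
        return 1;
      printf("usable: %zu\n", dlmalloc_usable_size(small));
      dlfree(big);
      dlfree(small);
      return 0;
    }
*/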

void dlfree(void* mem) {
  /*
     Consolidate freed chunks with preceding or succeeding bordering
     free chunks, if they exist, and then place in a bin.  Intermixed
     with special cases for top, dv, mmapped chunks, and usage errors.
  */

  if (mem != 0) {
    mchunkptr p  = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
#else /* FOOTERS */
#define fm gm
#endif /* FOOTERS */
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
            prevsize &= ~IS_MMAPPED_BIT;
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) {  /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);
          insert_chunk(fm, p, psize);
          check_free_chunk(fm, p);
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
#if !FOOTERS
#undef fm
#endif /* FOOTERS */
}

void* dlcalloc(size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = dlmalloc(req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}
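
/*
  Note on the overflow guard in dlcalloc: the cheap mask test skips
  the division entirely when both operands fit in 16 bits, since the
  product then cannot overflow a 32-bit-or-wider size_t.  A worked
  example on a hypothetical 32-bit size_t:

    n_elements = 0x10000, elem_size = 0x10000
    req = n_elements * elem_size = 0          (wrapped around)
    req / n_elements = 0 != elem_size         -> req = MAX_SIZE_T

  so the oversized request is forced to fail inside dlmalloc instead
  of silently returning a too-small block.
*/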

void* dlrealloc(void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return dlmalloc(bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    dlfree(oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if ! FOOTERS
    mstate m = gm;
#else /* FOOTERS */
    mstate m = get_mstate_for(mem2chunk(oldmem));
    if (!ok_magic(m)) {
      USAGE_ERROR_ACTION(m, oldmem);
      return 0;
    }
#endif /* FOOTERS */
    return internal_realloc(m, oldmem, bytes);
  }
}

void* dlmemalign(size_t alignment, size_t bytes) {
  return internal_memalign(gm, alignment, bytes);
}

void** dlindependent_calloc(size_t n_elements, size_t elem_size,
                                 void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  return ialloc(gm, n_elements, &sz, 3, chunks);
}

void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
                                   void* chunks[]) {
  return ialloc(gm, n_elements, sizes, 0, chunks);
}
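
/*
  A usage sketch for the independent_* entry points (struct names are
  illustrative only).  dlindependent_comalloc carves the requested
  elements out of one contiguous block when possible; because a
  caller-supplied chunks[] array is used here, the array itself is
  not allocated, and each element may later be freed independently:

    struct header { size_t len; };
    struct body   { char data[64]; };

    void demo(void) {
      size_t sizes[2] = { sizeof(struct header), sizeof(struct body) };
      void*  parts[2];
      if (dlindependent_comalloc(2, sizes, parts) != 0) {
        struct header* h = (struct header*)parts[0];
        h->len = sizeof(struct body);
        dlfree(parts[1]);
        dlfree(parts[0]);
      }
    }
*/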

void* dlvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, bytes);
}

void* dlpvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
}
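
/*
  The rounding expression in dlpvalloc is the standard power-of-two
  align-up.  With pagesz == 4096, for example:

    bytes = 5000:  (5000 + 4095) & ~4095 == 8192   (two pages)
    bytes = 4096:  (4096 + 4095) & ~4095 == 4096   (exact fit)

  so dlmemalign always receives a whole number of pages.
*/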

int dlmalloc_trim(size_t pad) {
  int result = 0;
  if (!PREACTION(gm)) {
    result = sys_trim(gm, pad);
    POSTACTION(gm);
  }
  return result;
}

size_t dlmalloc_footprint(void) {
  return gm->footprint;
}

size_t dlmalloc_max_footprint(void) {
  return gm->max_footprint;
}

#if !NO_MALLINFO
struct mallinfo dlmallinfo(void) {
  return internal_mallinfo(gm);
}
#endif /* NO_MALLINFO */

void dlmalloc_stats() {
  internal_malloc_stats(gm);
}

size_t dlmalloc_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (cinuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

int dlmallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}
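
/*
  A hedged usage sketch for dlmallopt: the M_* names are the mallopt
  parameters defined earlier in this file, and the values shown are
  arbitrary illustrations, not recommendations.

    dlmallopt(M_TRIM_THRESHOLD, 256 * 1024);  // trim top when it exceeds 256K
    dlmallopt(M_MMAP_THRESHOLD, 512 * 1024);  // mmap requests of 512K or more
    dlmallopt(M_GRANULARITY,     64 * 1024);  // system allocation unit
*/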

#endif /* !ONLY_MSPACES */

/* ----------------------------- user mspaces ---------------------------- */

#if MSPACES

static mstate init_user_mstate(char* tbase, size_t tsize) {
  size_t msize = pad_request(sizeof(struct malloc_state));
  mchunkptr mn;
  mchunkptr msp = align_as_chunk(tbase);
  mstate m = (mstate)(chunk2mem(msp));
  memset(m, 0, msize);
  INITIAL_LOCK(&m->mutex);
  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
  m->seg.base = m->least_addr = tbase;
  m->seg.size = m->footprint = m->max_footprint = tsize;
  m->magic = mparams.magic;
  m->mflags = mparams.default_mflags;
  disable_contiguous(m);
  init_bins(m);
  mn = next_chunk(mem2chunk(m));
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
  check_top_chunk(m, m->top);
  return m;
}

mspace create_mspace(size_t capacity, int locked) {
  mstate m = 0;
  size_t msize = pad_request(sizeof(struct malloc_state));
  init_mparams(); /* Ensure pagesize etc initialized */

  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    size_t rs = ((capacity == 0)? mparams.granularity :
                 (capacity + TOP_FOOT_SIZE + msize));
    size_t tsize = granularity_align(rs);
    char* tbase = (char*)(CALL_MMAP(tsize));
    if (tbase != CMFAIL) {
      m = init_user_mstate(tbase, tsize);
      set_segment_flags(&m->seg, IS_MMAPPED_BIT);
      set_lock(m, locked);
    }
  }
  return (mspace)m;
}

mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
  mstate m = 0;
  size_t msize = pad_request(sizeof(struct malloc_state));
  init_mparams(); /* Ensure pagesize etc initialized */

  if (capacity > msize + TOP_FOOT_SIZE &&
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    m = init_user_mstate((char*)base, capacity);
    set_segment_flags(&m->seg, EXTERN_BIT);
    set_lock(m, locked);
  }
  return (mspace)m;
}

size_t destroy_mspace(mspace msp) {
  size_t freed = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    msegmentptr sp = &ms->seg;
    while (sp != 0) {
      char* base = sp->base;
      size_t size = sp->size;
      flag_t flag = get_segment_flags(sp);
      sp = sp->next;
      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
          CALL_MUNMAP(base, size) == 0)
        freed += size;
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return freed;
}
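
/*
  A minimal mspace usage sketch (illustrative only; assumes this file
  was compiled with MSPACES enabled).  Each mspace is an independent
  heap, so chunks must be freed through the space they came from:

    void demo(void) {
      mspace ms = create_mspace(0, 0);   // default capacity, no locking
      if (ms != 0) {
        void* p = mspace_malloc(ms, 128);
        if (p != 0)
          mspace_free(ms, p);
        destroy_mspace(ms);              // unmaps every mmapped segment
      }
    }

  create_mspace_with_base works the same way, but manages a region the
  caller supplies (e.g. a static buffer), which destroy_mspace will not
  release.
*/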

/*
  mspace versions of routines are near-clones of the global
  versions. This is not so nice but better than the alternatives.
*/


void* mspace_malloc(mspace msp, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (!PREACTION(ms)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = ms->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(ms, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(ms, b, p, idx);
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }

      else if (nb > ms->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(ms, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(ms, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(ms, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(ms, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }

        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }
    }

    if (nb <= ms->dvsize) {
      size_t rsize = ms->dvsize - nb;
      mchunkptr p = ms->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
        ms->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = ms->dvsize;
        ms->dvsize = 0;
        ms->dv = 0;
        set_inuse_and_pinuse(ms, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    else if (nb < ms->topsize) { /* Split top */
      size_t rsize = ms->topsize -= nb;
      mchunkptr p = ms->top;
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(ms, ms->top);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(ms, nb);

  postaction:
    POSTACTION(ms);
    return mem;
  }

  return 0;
}

void mspace_free(mspace msp, void* mem) {
  if (mem != 0) {
    mchunkptr p  = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
#else /* FOOTERS */
    mstate fm = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
            prevsize &= ~IS_MMAPPED_BIT;
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) {  /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);
          insert_chunk(fm, p, psize);
          check_free_chunk(fm, p);
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
}

void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = internal_malloc(ms, req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return mspace_malloc(msp, bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    mspace_free(msp, oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if FOOTERS
    mchunkptr p  = mem2chunk(oldmem);
    mstate ms = get_mstate_for(p);
#else /* FOOTERS */
    mstate ms = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(ms)) {
      USAGE_ERROR_ACTION(ms,ms);
      return 0;
    }
    return internal_realloc(ms, oldmem, bytes);
  }
}

void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return internal_memalign(ms, alignment, bytes);
}

void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, &sz, 3, chunks);
}

void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, sizes, 0, chunks);
}

int mspace_trim(mspace msp, size_t pad) {
  int result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      result = sys_trim(ms, pad);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

void mspace_malloc_stats(mspace msp) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    internal_malloc_stats(ms);
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
}

size_t mspace_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}


size_t mspace_max_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->max_footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}


#if !NO_MALLINFO
struct mallinfo mspace_mallinfo(mspace msp) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return internal_mallinfo(ms);
}
#endif /* NO_MALLINFO */

int mspace_mallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* MSPACES */

/* -------------------- Alternative MORECORE functions ------------------- */

/*
  Guidelines for creating a custom version of MORECORE:

  * For best performance, MORECORE should allocate in multiples of pagesize.
  * MORECORE may allocate more memory than requested. (Or even less,
      but this will usually result in a malloc failure.)
  * MORECORE must not allocate memory when given argument zero, but
      instead return one past the end address of memory from previous
      nonzero call.
  * For best performance, consecutive calls to MORECORE with positive
      arguments should return increasing addresses, indicating that
      space has been contiguously extended.
  * Even though consecutive calls to MORECORE need not return contiguous
      addresses, it must be OK for malloc'ed chunks to span multiple
      regions in those cases where they do happen to be contiguous.
  * MORECORE need not handle negative arguments -- it may instead
      just return MFAIL when given negative arguments.
      Negative arguments are always multiples of pagesize. MORECORE
      must not misinterpret negative args as large positive unsigned
      args. You can suppress all such calls from even occurring by defining
      MORECORE_CANNOT_TRIM.

  As an example alternative MORECORE, here is a custom allocator
  kindly contributed for pre-OSX macOS.  It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in,
  present and won't get swapped out).  You can use it by uncommenting
  this section, adding some #includes, and setting up the appropriate
  defines above:

      #define MORECORE osMoreCore

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MFAIL;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MFAIL;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/
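
/*
  A second, more portable sketch of the same MORECORE contract,
  assuming a fixed static arena as is common on bare-metal targets
  (the arena size and function name are illustrative only).  Note how
  a zero argument returns one past the end of previously returned
  memory, and how exhaustion returns MFAIL rather than wrapping:

  #define ARENA_SIZE (1024 * 1024)
  static char arena[ARENA_SIZE];
  static size_t arena_used = 0;

  void* arenaMoreCore(int size)
  {
    void* ptr;
    if (size > 0)
    {
      if ((size_t)size > ARENA_SIZE - arena_used)
        return (void *) MFAIL;      // exhausted; sys_alloc falls back or fails
      ptr = arena + arena_used;
      arena_used += (size_t)size;
      return ptr;
    }
    else if (size < 0)
    {
      return (void *) MFAIL;        // no shrink support; see MORECORE_CANNOT_TRIM
    }
    else
    {
      return arena + arena_used;    // one past end of previous nonzero call
    }
  }

      #define MORECORE arenaMoreCore
      #define MORECORE_CANNOT_TRIM
*/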


/* -----------------------------------------------------------------------
History:
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
      * Add max_footprint functions
      * Ensure all appropriate literals are size_t
      * Fix conditional compilation problem for some #define settings
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.

    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Support mspaces
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann,  Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to  more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
          (e.g. WIN32 platforms)
         * Cleanup header file inclusion for WIN32 platforms
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
           memory allocation routines
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
           usage of 'assert' in non-WIN32 code
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
           avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from  colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
         (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
          from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
          with gcc & native cc (hp, dec only) allowing
          Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
         structure of old version,  but most details differ.)

*/