This is a version (aka dlmalloc) of malloc/free/realloc written by
Doug Lea and released to the public domain.  Use, modify, and
redistribute this code without permission or acknowledgement in any
way you wish.  Send questions, comments, complaints, performance
data, etc to dl@cs.oswego.edu
* VERSION 2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
Note: There may be an updated version of this malloc obtainable at
      ftp://gee.cs.oswego.edu/pub/misc/malloc.c
      Check before installing!
This library is all in one file to simplify the most common usage:
ftp it, compile it (-O), and link it into another program.  All
of the compile-time options default to reasonable values for use on
most unix platforms.  Compile -DWIN32 for reasonable defaults on windows.
You might later want to step through various compile-time and dynamic
tuning options.
For convenience, an include file for code using this malloc is at:
   ftp://gee.cs.oswego.edu/pub/misc/malloc-2.7.1.h
You don't really need this .h file unless you call functions not
defined in your system include files.  The .h file contains only the
excerpts from this file needed for using this malloc on ANSI C/C++
systems, so long as you haven't changed compile-time options about
naming and tuning parameters.  If you do, then you can create your
own malloc.h that does include all settings by cutting at the point
indicated below.
* Why use this malloc?
This is not the fastest, most space-conserving, most portable, or
most tunable malloc ever written.  However it is among the fastest
while also being among the most space-conserving, portable and tunable.
Consistent balance across these factors results in a good general-purpose
allocator for malloc-intensive programs.
The main properties of the algorithms are:
* For large (>= 512 bytes) requests, it is a pure best-fit allocator,
  with ties normally decided via FIFO (i.e. least recently used).
* For small (<= 64 bytes by default) requests, it is a caching
  allocator, that maintains pools of quickly recycled chunks.
* In between, and for combinations of large and small requests, it does
  the best it can trying to meet both goals at once.
* For very large requests (>= 128KB by default), it relies on system
  memory mapping facilities, if supported.
For a longer but slightly out of date high-level description, see
   http://gee.cs.oswego.edu/dl/html/malloc.html
You may already by default be using a C library containing a malloc
that is based on some version of this malloc (for example in
linux).  You might still want to use the one in this file in order to
customize settings or to avoid overheads associated with library
versions.
* Contents, described in more detail in "description of public routines" below.
Standard (ANSI/SVID/...) functions:
   malloc(size_t n);
   calloc(size_t n_elements, size_t element_size);
   free(Void_t* p);
   realloc(Void_t* p, size_t n);
   memalign(size_t alignment, size_t n);
   valloc(size_t n);
   mallinfo()
   mallopt(int parameter_number, int parameter_value)

Additional functions:
   independent_calloc(size_t n_elements, size_t size, Void_t* chunks[]);
   independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);
   pvalloc(size_t n);
   cfree(Void_t* p);
   malloc_trim(size_t pad);
   malloc_usable_size(Void_t* p);
   malloc_stats();
Supported pointer representation: 4 or 8 bytes
Supported size_t representation: 4 or 8 bytes
Note that size_t is allowed to be 4 bytes even if pointers are 8.
You can adjust this by defining INTERNAL_SIZE_T.
Alignment: 2 * sizeof(size_t) (default)
(i.e., 8-byte alignment with 4-byte size_t).  This suffices for
nearly all current machines and C compilers.  However, you can
define MALLOC_ALIGNMENT to be wider than this if necessary.
Minimum overhead per allocated chunk: 4 or 8 bytes
Each malloced chunk has a hidden word of overhead holding size
and status information.
Minimum allocated size: 4-byte ptrs: 16 bytes    (including 4 overhead)
                        8-byte ptrs: 24/32 bytes (including 4/8 overhead)
When a chunk is freed, 12 (for 4-byte ptrs) or 20 (for 8-byte
ptrs but 4-byte size) or 24 (for 8/8) additional bytes are
needed; 4 (8) for a trailing size field and 8 (16) bytes for
free list pointers.  Thus, the minimum allocatable size is
16/24/32 bytes.

Even a request for zero bytes (i.e., malloc(0)) returns a
pointer to something of the minimum allocatable size.
The maximum overhead wastage (i.e., number of extra bytes
allocated than were requested in malloc) is less than or equal
to the minimum size, except for requests >= mmap_threshold that
are serviced via mmap(), where the worst case wastage is 2 *
sizeof(size_t) bytes plus the remainder from a system page (the
minimal mmap unit); typically 4096 or 8192 bytes.
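As a worked illustration (a sketch assuming a 4096-byte page and
4-byte size_t; the numbers are not from the original text):

  size_t request  = 100000;
  size_t overhead = 2 * sizeof(size_t);               /* header bytes      */
  size_t page     = 4096;                             /* minimal mmap unit */
  size_t total    = ((request + overhead + page - 1) / page) * page;
  /* total == 102400, so the wastage is 102400 - 100000 = 2400 bytes:
     the 8 header bytes plus the 2392-byte remainder of the last page. */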
Maximum allocated size: 4-byte size_t: 2^32 minus about two pages
                        8-byte size_t: 2^64 minus about two pages
It is assumed that (possibly signed) size_t values suffice to
represent chunk sizes.  `Possibly signed' is due to the fact
that `size_t' may be defined on a system as either a signed or
an unsigned type.  The ISO C standard says that it must be
unsigned, but a few systems are known not to adhere to this.
Additionally, even when size_t is unsigned, sbrk (which is by
default used to obtain memory from system) accepts signed
arguments, and may not be able to handle size_t-wide arguments
with negative sign bit.  Generally, values that would
appear as negative after accounting for overhead and alignment
are supported only via mmap(), which does not have this
limitation.
Requests for sizes outside the allowed range will perform an optional
failure action and then return null.  (Requests may also
fail because a system is out of memory.)
Thread-safety: NOT thread-safe unless USE_MALLOC_LOCK defined

When USE_MALLOC_LOCK is defined, wrappers are created to
surround every public call with either a pthread mutex or
a win32 spinlock (depending on WIN32).  This is not
especially fast, and can be a major bottleneck.
It is designed only to provide minimal protection
in concurrent environments, and to provide a basis for
extensions.  If you are using malloc in a concurrent program,
you would be far better off obtaining ptmalloc, which is
derived from a version of this malloc, and is well-tuned for
concurrent programs.  (See http://www.malloc.de)  Note that
even when USE_MALLOC_LOCK is defined, you can guarantee
full thread-safety only if no threads acquire memory through
direct calls to MORECORE or other system-level allocators.
Compliance: I believe it is compliant with the 1997 Single Unix Specification
(See http://www.opennc.org).  Also SVID/XPG, ANSI C, and probably
others as well.
* Synopsis of compile-time options:
People have reported using previous versions of this malloc on all
versions of Unix, sometimes by tweaking some of the defines
below.  It has been tested most extensively on Solaris and
Linux.  It is also reported to work on WIN32 platforms.
People also report using it in stand-alone embedded systems.

The implementation is in straight, hand-tuned ANSI C.  It is not
at all modular.  (Sorry!)  It uses a lot of macros.  To be at all
usable, this code should be compiled using an optimizing compiler
(for example gcc -O3) that can simplify expressions and control
paths.  (FAQ: some macros import variables as arguments rather than
declare locals because people reported that some debuggers
otherwise get confused.)
Compilation Environment options:

    __STD_C                    derived from C compiler defines
    USE_MEMCPY                 1 if HAVE_MEMCPY is defined
    HAVE_MMAP                  defined as 1
    HAVE_MREMAP                0 unless linux defined
    malloc_getpagesize         derived from system #includes, or 4096 if not
    HAVE_USR_INCLUDE_MALLOC_H  NOT defined
    LACKS_UNISTD_H             NOT defined unless WIN32
    LACKS_SYS_PARAM_H          NOT defined unless WIN32
    LACKS_SYS_MMAN_H           NOT defined unless WIN32
    LACKS_FCNTL_H              NOT defined
Changing default word sizes:

    INTERNAL_SIZE_T            size_t
    MALLOC_ALIGNMENT           2 * sizeof(INTERNAL_SIZE_T)
    PTR_UINT                   unsigned long
    CHUNK_SIZE_T               unsigned long
Configuration and functionality options:

    USE_DL_PREFIX              NOT defined
    USE_PUBLIC_MALLOC_WRAPPERS NOT defined
    USE_MALLOC_LOCK            NOT defined
    REALLOC_ZERO_BYTES_FREES   NOT defined
    MALLOC_FAILURE_ACTION      errno = ENOMEM, if __STD_C defined, else no-op
    FIRST_SORTED_BIN_SIZE      512
Options for customizing MORECORE:

    MORECORE_CONTIGUOUS        1
    MORECORE_CANNOT_TRIM       NOT defined
    MMAP_AS_MORECORE_SIZE      (1024 * 1024)
Tuning options that are also dynamically changeable via mallopt:

    DEFAULT_TRIM_THRESHOLD     256 * 1024
    DEFAULT_MMAP_THRESHOLD     256 * 1024
    DEFAULT_MMAP_MAX           65536

There are several other #defined constants and macros that you
probably don't want to touch unless you are extending or adapting malloc.
WIN32 sets up defaults for MS environment and compilers.
Otherwise defaults are for unix.

#define WIN32_LEAN_AND_MEAN
/* Win32 doesn't supply or need the following headers */
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H
/* Use the supplied emulation of sbrk */
#define MORECORE sbrk
#define MORECORE_CONTIGUOUS 1
#define MORECORE_FAILURE    ((void*)(-1))
/* Use the supplied emulation of mmap and munmap */
#define MUNMAP_FAILURE  (-1)
#define MMAP_CLEARS 1

/* These values don't really matter in windows mmap emulation */
#define MAP_PRIVATE 1
#define MAP_ANONYMOUS 2
/* Emulation functions defined at the end of this file */

/* If USE_MALLOC_LOCK, use supplied critical-section-based lock functions */
#ifdef USE_MALLOC_LOCK
static int slwait(int *sl);
static int slrelease(int *sl);
#endif

static long getpagesize(void);
static long getregionsize(void);
static void *sbrk(long size);
static void *mmap(void *ptr, long size, long prot, long type, long handle, long arg);
static long munmap(void *ptr, long size);

static void vminfo(unsigned long *free, unsigned long *reserved, unsigned long *committed);
static int cpuinfo(int whole, unsigned long *kernel, unsigned long *user);
__STD_C should be nonzero if using ANSI-standard C compiler, a C++
compiler, or a C compiler sufficiently close to ANSI to get away
with it.

#if defined(__STDC__) || defined(__cplusplus)
Void_t* is the pointer type that malloc should say it returns.

#ifndef Void_t
#if (__STD_C || defined(WIN32))
#define Void_t void
#else
#define Void_t char
#endif
#endif
#if __STD_C
#include <stddef.h>   /* for size_t */
#else
#include <sys/types.h>
#endif
/* define LACKS_UNISTD_H if your system does not have a <unistd.h>. */

/* #define LACKS_UNISTD_H */

#ifndef LACKS_UNISTD_H
#include <unistd.h>
#endif

/* define LACKS_SYS_PARAM_H if your system does not have a <sys/param.h>. */

/* #define LACKS_SYS_PARAM_H */
#include <stdio.h>    /* needed for malloc_stats */
#include <errno.h>    /* needed for optional MALLOC_FAILURE_ACTION */
Because freed chunks may be overwritten with bookkeeping fields, this
malloc will often die when freed memory is overwritten by user
programs.  This can be very effective (albeit in an annoying way)
in helping track down dangling pointers.

If you compile with -DDEBUG, a number of assertion checks are
enabled that will catch more memory errors.  You probably won't be
able to make much sense of the actual assertion errors, but they
should help you locate incorrectly overwritten memory.  The
checking is fairly extensive, and will slow down execution
noticeably.  Calling malloc_stats or mallinfo with DEBUG set will
attempt to check every non-mmapped allocated and free chunk in the
course of computing the summaries.  (By nature, mmapped regions
cannot be checked very much automatically.)

Setting DEBUG may also be helpful if you are trying to modify
this code.  The assertions in the check routines spell out in more
detail the assumptions and invariants underlying the algorithms.

Setting DEBUG does NOT provide an automated mechanism for checking
that all accesses to malloced memory stay within their
bounds.  However, there are several add-ons and adaptations of this
or other mallocs available that do this.
#define assert(x) ((void)0)
The unsigned integer type used for comparing any two chunk sizes.
This should be at least as wide as size_t, but should not be signed.

#define CHUNK_SIZE_T unsigned long
The unsigned integer type used to hold addresses when they are
manipulated as integers.  Except that it is not defined on all
systems, intptr_t would suffice.

#define PTR_UINT unsigned long
INTERNAL_SIZE_T is the word-size used for internal bookkeeping
of chunk sizes.

The default version is the same as size_t.

While not strictly necessary, it is best to define this as an
unsigned type, even if size_t is a signed type.  This may avoid some
artificial size limitations on some systems.
On a 64-bit machine, you may be able to reduce malloc overhead by
defining INTERNAL_SIZE_T to be a 32 bit `unsigned int' at the
expense of not being able to handle more than 2^32 of malloced
space.  If this limitation is acceptable, you are encouraged to set
this unless you are on a platform requiring 16-byte alignments.  In
this case the alignment requirements turn out to negate any
potential advantages of decreasing size_t word size.
Implementors: Beware of the possible combinations of:
  - INTERNAL_SIZE_T might be signed or unsigned, might be 32 or 64 bits,
    and might be the same width as int or as long
  - size_t might have different width and signedness than INTERNAL_SIZE_T
  - int and long might be 32 or 64 bits, and might be the same width
To deal with this, most comparisons and difference computations
among INTERNAL_SIZE_Ts should cast them to CHUNK_SIZE_T, being
aware of the fact that casting an unsigned int to a wider long does
not sign-extend.  (This also makes checking for negative numbers
awkward.)  Some of these casts result in harmless compiler warnings
on some systems.
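A minimal illustration of the non-sign-extension point above (hypothetical
values on an LP64 system; not from the original text):

  unsigned int u = (unsigned int)(-16);  /* 0xFFFFFFF0                     */
  long         l = (long)u;              /* zero-extends: l == 4294967280, */
                                         /* not -16, so "negative" sizes   */
                                         /* must be caught by magnitude    */
                                         /* comparisons instead            */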
#ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif

/* The corresponding word size */
#define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
MALLOC_ALIGNMENT is the minimum alignment for malloc'ed chunks.
It must be a power of two at least 2 * SIZE_SZ, even on machines
for which smaller alignments would suffice.  It may be defined as
larger than this though.  Note however that code and data structures
are optimized for the case of 8-byte alignment.

#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT (2 * SIZE_SZ)
#endif
/* The corresponding bit mask value */
#define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
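For example, an alignment check can be written in terms of the mask
(an illustrative sketch in the style of this file's own helpers, not a quote):

  #define aligned_OK(m)  (((PTR_UINT)(m) & MALLOC_ALIGN_MASK) == 0)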
REALLOC_ZERO_BYTES_FREES should be set if a call to
realloc with zero bytes should be the same as a call to free.
Some people think it should.  Otherwise, since this malloc
returns a unique pointer for malloc(0), so does realloc(p, 0).

/* #define REALLOC_ZERO_BYTES_FREES */
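A short sketch of the difference (hypothetical usage):

  void* p = malloc(10);
  void* q = realloc(p, 0);
  /* Default: q is a valid minimum-sized chunk that must itself be freed.
     With REALLOC_ZERO_BYTES_FREES defined: p is freed and q is null. */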
TRIM_FASTBINS controls whether free() of a very small chunk can
immediately lead to trimming.  Setting to true (1) can reduce memory
footprint, but will almost always slow down programs that use a lot
of small chunks.

Define this only if you are willing to give up some speed to more
aggressively reduce system-level memory footprint when releasing
memory in programs that use many small chunks.  You can get
essentially the same effect by setting MXFAST to 0, but this can
lead to even greater slowdowns in programs using many small chunks.
TRIM_FASTBINS is an in-between compile-time option, that disables
only those chunks bordering topmost memory from being placed in
fastbins.

#ifndef TRIM_FASTBINS
#define TRIM_FASTBINS 0
#endif
USE_DL_PREFIX will prefix all public routines with the string 'dl'.
This is necessary when you only want to use this malloc in one part
of a program, using your regular system malloc elsewhere.

/* #define USE_DL_PREFIX */
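For example, when compiled with -DUSE_DL_PREFIX the two allocators can
coexist in one program (hypothetical usage):

  void* a = dlmalloc(100);   /* served by this malloc            */
  void* b = malloc(100);     /* served by the system malloc      */
  dlfree(a);                 /* free with the matching allocator */
  free(b);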
USE_MALLOC_LOCK causes wrapper functions to surround each
callable routine with pthread mutex lock/unlock.

USE_MALLOC_LOCK forces USE_PUBLIC_MALLOC_WRAPPERS to be defined.

/* #define USE_MALLOC_LOCK */
If USE_PUBLIC_MALLOC_WRAPPERS is defined, every public routine is
actually a wrapper function that first calls MALLOC_PREACTION, then
calls the internal routine, and follows it with
MALLOC_POSTACTION.  This is needed for locking, but you can also use
this, without USE_MALLOC_LOCK, for purposes of interception,
instrumentation, etc.  It is a sad fact that using wrappers often
noticeably degrades performance of malloc-intensive programs.

#ifdef USE_MALLOC_LOCK
#define USE_PUBLIC_MALLOC_WRAPPERS
#else
/* #define USE_PUBLIC_MALLOC_WRAPPERS */
#endif
Two-phase name translation.
All of the actual routines are given mangled names.
When wrappers are used, they become the public callable versions.
When USE_DL_PREFIX is used, the callable names are prefixed.
#ifndef USE_PUBLIC_MALLOC_WRAPPERS
#define cALLOc      public_cALLOc
#define fREe        public_fREe
#define cFREe       public_cFREe
#define mALLOc      public_mALLOc
#define mEMALIGn    public_mEMALIGn
#define rEALLOc     public_rEALLOc
#define vALLOc      public_vALLOc
#define pVALLOc     public_pVALLOc
#define mALLINFo    public_mALLINFo
#define mALLOPt     public_mALLOPt
#define mTRIm       public_mTRIm
#define mSTATs      public_mSTATs
#define mUSABLe     public_mUSABLe
#define iCALLOc     public_iCALLOc
#define iCOMALLOc   public_iCOMALLOc
#endif

#ifdef USE_DL_PREFIX
#define public_cALLOc    dlcalloc
#define public_fREe      dlfree
#define public_cFREe     dlcfree
#define public_mALLOc    dlmalloc
#define public_mEMALIGn  dlmemalign
#define public_rEALLOc   dlrealloc
#define public_vALLOc    dlvalloc
#define public_pVALLOc   dlpvalloc
#define public_mALLINFo  dlmallinfo
#define public_mALLOPt   dlmallopt
#define public_mTRIm     dlmalloc_trim
#define public_mSTATs    dlmalloc_stats
#define public_mUSABLe   dlmalloc_usable_size
#define public_iCALLOc   dlindependent_calloc
#define public_iCOMALLOc dlindependent_comalloc
#else /* USE_DL_PREFIX */
#define public_cALLOc    calloc
#define public_fREe      free
#define public_cFREe     cfree
#define public_mALLOc    malloc
#define public_mEMALIGn  memalign
#define public_rEALLOc   realloc
#define public_vALLOc    valloc
#define public_pVALLOc   pvalloc
#define public_mALLINFo  mallinfo
#define public_mALLOPt   mallopt
#define public_mTRIm     malloc_trim
#define public_mSTATs    malloc_stats
#define public_mUSABLe   malloc_usable_size
#define public_iCALLOc   independent_calloc
#define public_iCOMALLOc independent_comalloc
#endif /* USE_DL_PREFIX */
HAVE_MEMCPY should be defined if you are not otherwise using
ANSI STD C, but still have memcpy and memset in your C library
and want to use them in calloc and realloc.  Otherwise simple
macro versions are defined below.

USE_MEMCPY should be defined as 1 if you actually want to
have memset and memcpy called.  People report that the macro
versions are faster than libc versions on some systems.

Even if USE_MEMCPY is set to 1, loops to copy/clear small chunks
(of <= 36 bytes) are manually unrolled in realloc and calloc.
#if (__STD_C || defined(HAVE_MEMCPY))

#ifdef WIN32
/* On Win32 memset and memcpy are already declared in windows.h */
#else
void* memset(void*, int, size_t);
void* memcpy(void*, const void*, size_t);
#endif
#endif
MALLOC_FAILURE_ACTION is the action to take before "return 0" when
malloc fails to be able to return memory, either because memory is
exhausted or because of illegal arguments.

By default, sets errno if running on STD_C platform, else does nothing.

#ifndef MALLOC_FAILURE_ACTION
#if __STD_C
#define MALLOC_FAILURE_ACTION \
   errno = ENOMEM;
#else
#define MALLOC_FAILURE_ACTION
#endif
#endif
MORECORE-related declarations.  By default, rely on sbrk.

#ifdef LACKS_UNISTD_H
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
#if __STD_C
extern Void_t* sbrk(ptrdiff_t);
#else
extern Void_t* sbrk();
#endif
#endif
#endif
MORECORE is the name of the routine to call to obtain more memory
from the system.  See below for general guidance on writing
alternative MORECORE functions, as well as a version for WIN32 and a
sample version for pre-OSX macos.

#ifndef MORECORE
#define MORECORE sbrk
#endif
MORECORE_FAILURE is the value returned upon failure of MORECORE
as well as mmap.  Since it cannot be an otherwise valid memory address,
and must reflect values of standard sys calls, you probably ought not
try to redefine it.

#ifndef MORECORE_FAILURE
#define MORECORE_FAILURE (-1)
#endif
If MORECORE_CONTIGUOUS is true, take advantage of fact that
consecutive calls to MORECORE with positive arguments always return
contiguous increasing addresses.  This is true of unix sbrk.  Even
if not defined, when regions happen to be contiguous, malloc will
permit allocations spanning regions obtained from different
calls.  But defining this when applicable enables some stronger
consistency checks and space efficiencies.

#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif
Define MORECORE_CANNOT_TRIM if your version of MORECORE
cannot release space back to the system when given negative
arguments.  This is generally necessary only if you are using
a hand-crafted MORECORE function that cannot handle negative arguments.

/* #define MORECORE_CANNOT_TRIM */
Define HAVE_MMAP as true to optionally make malloc() use mmap() to
allocate very large blocks.  These will be returned to the
operating system immediately after a free().  Also, if mmap
is available, it is used as a backup strategy in cases where
MORECORE fails to provide space from system.

This malloc is best tuned to work with mmap for large requests.
If you do not have mmap, operations involving very large chunks (1MB
or so) may be slower than you'd like.
Standard unix mmap using /dev/zero clears memory so calloc doesn't
need to.

#ifndef MMAP_CLEARS
#define MMAP_CLEARS 1
#endif

#else /* no mmap */

#ifndef MMAP_CLEARS
#define MMAP_CLEARS 0
#endif
MMAP_AS_MORECORE_SIZE is the minimum mmap size argument to use if
sbrk fails, and mmap is used as a backup (which is done only if
HAVE_MMAP).  The value must be a multiple of page size.  This
backup strategy generally applies only when systems have "holes" in
address space, so sbrk cannot perform contiguous expansion, but
there is still space available on system.  On systems for which
this is known to be useful (i.e. most linux kernels), this occurs
only when programs allocate huge amounts of memory.  Between this,
and the fact that mmap regions tend to be limited, the size should
be large, to avoid too many mmap calls and thus avoid running out
of kernel resources.

#ifndef MMAP_AS_MORECORE_SIZE
#define MMAP_AS_MORECORE_SIZE (1024 * 1024)
#endif
Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
large blocks.  This is currently only possible on Linux with
kernel versions newer than 1.3.77.

#ifndef HAVE_MREMAP
#ifdef linux
#define HAVE_MREMAP 1
#else
#define HAVE_MREMAP 0
#endif
#endif

#endif /* HAVE_MMAP */
The system page size.  To the extent possible, this malloc manages
memory from the system in page-size units.  Note that this value is
cached during initialization into a field of malloc_state.  So even
if malloc_getpagesize is a function, it is only called once.

The following mechanics for getpagesize were adapted from bsd/gnu
getpagesize.h.  If none of the system-probes here apply, a value of
4096 is used, which should be OK: If they don't apply, then using
the actual value probably doesn't impact performance.
#ifndef malloc_getpagesize
#ifndef LACKS_UNISTD_H
#  include <unistd.h>
#endif
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32 /* use supplied emulation of getpagesize */
#        define malloc_getpagesize getpagesize()
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else /* just guess */
#                define malloc_getpagesize (4096)
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif
This version of malloc supports the standard SVID/XPG mallinfo
routine that returns a struct containing usage properties and
statistics.  It should work on any SVID/XPG compliant system that has
a /usr/include/malloc.h defining struct mallinfo.  (If you'd like to
install such a thing yourself, cut out the preliminary declarations
as described above and below and save them in a malloc.h file.  But
there's no compelling reason to bother to do this.)

The main declaration needed is the mallinfo struct that is returned
(by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
bunch of fields that are not even meaningful in this version of
malloc.  These fields are instead filled by mallinfo() with
other numbers that might be of interest.

HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
/usr/include/malloc.h file that includes a declaration of struct
mallinfo.  If so, it is included; else an SVID2/XPG2 compliant
version is declared below.  These must be precisely the same for
mallinfo() to work.  The original SVID version of this struct,
defined on most systems with mallinfo, declares all fields as
ints.  But some systems define them as unsigned long.  If your system
defines the fields using a type of different width than listed here,
you must #include your system version and #define
HAVE_USR_INCLUDE_MALLOC_H.
/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else

/* SVID2/XPG mallinfo structure */

struct mallinfo {
  int arena;    /* non-mmapped space allocated from system */
  int ordblks;  /* number of free chunks */
  int smblks;   /* number of fastbin blocks */
  int hblks;    /* number of mmapped regions */
  int hblkhd;   /* space in mmapped regions */
  int usmblks;  /* maximum total allocated space */
  int fsmblks;  /* space available in freed fastbin blocks */
  int uordblks; /* total allocated space */
  int fordblks; /* total free space */
  int keepcost; /* top-most, releasable (via malloc_trim) space */
};
SVID/XPG defines four standard parameter numbers for mallopt,
normally defined in malloc.h.  Only one of these (M_MXFAST) is used
in this malloc.  The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
so setting them has no effect.  But this malloc also supports other
options in mallopt described below.
/* ---------- description of public routines ------------ */
malloc(size_t n)
Returns a pointer to a newly allocated chunk of at least n bytes, or null
if no space is available.  Additionally, on failure, errno is
set to ENOMEM on ANSI C systems.

If n is zero, malloc returns a minimum-sized chunk.  (The minimum
size is 16 bytes on most 32bit systems, and 24 or 32 bytes on 64bit
systems.)  On most systems, size_t is an unsigned type, so calls
with negative arguments are interpreted as requests for huge amounts
of space, which will often fail.  The maximum supported value of n
differs across systems, but is in all cases less than the maximum
representable value of a size_t.
#if __STD_C
Void_t* public_mALLOc(size_t);
#else
Void_t* public_mALLOc();
#endif
free(Void_t* p)
Releases the chunk of memory pointed to by p, that had been previously
allocated using malloc or a related routine such as realloc.
It has no effect if p is null.  It can have arbitrary (i.e., bad!)
effects if p has already been freed.

Unless disabled (using mallopt), freeing very large spaces will,
when possible, automatically trigger operations that give
back unused memory to the system, thus reducing program footprint.

#if __STD_C
void public_fREe(Void_t*);
#else
void public_fREe();
#endif
calloc(size_t n_elements, size_t element_size);
Returns a pointer to n_elements * element_size bytes, with all locations
set to zero.

#if __STD_C
Void_t* public_cALLOc(size_t, size_t);
#else
Void_t* public_cALLOc();
#endif
realloc(Void_t* p, size_t n)
Returns a pointer to a chunk of size n that contains the same data
as does chunk p up to the minimum of (n, p's size) bytes, or null
if no space is available.

The returned pointer may or may not be the same as p.  The algorithm
prefers extending p when possible, otherwise it employs the
equivalent of a malloc-copy-free sequence.

If p is null, realloc is equivalent to malloc.

If space is not available, realloc returns null, errno is set (if on
ANSI) and p is NOT freed.

If n is for fewer bytes than already held by p, the newly unused
space is lopped off and freed if possible.  Unless the #define
REALLOC_ZERO_BYTES_FREES is set, realloc with a size argument of
zero (re)allocates a minimum-sized chunk.

Large chunks that were internally obtained via mmap will always
be reallocated using malloc-copy-free sequences unless
the system supports MREMAP (currently only linux).

The old unix realloc convention of allowing the last-free'd chunk
to be used as an argument to realloc is not supported.

#if __STD_C
Void_t* public_rEALLOc(Void_t*, size_t);
#else
Void_t* public_rEALLOc();
#endif
memalign(size_t alignment, size_t n);
Returns a pointer to a newly allocated chunk of n bytes, aligned
in accord with the alignment argument.

The alignment argument should be a power of two.  If the argument is
not a power of two, the nearest greater power is used.
8-byte alignment is guaranteed by normal malloc calls, so don't
bother calling memalign with an argument of 8 or less.

Overreliance on memalign is a sure way to fragment space.
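For example (hypothetical usage):

  void* p = memalign(64, 1000);   /* 1000 usable bytes, 64-byte aligned */
  assert(((size_t)p & 63) == 0);
  free(p);                        /* memaligned chunks are freed normally */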
#if __STD_C
Void_t* public_mEMALIGn(size_t, size_t);
#else
Void_t* public_mEMALIGn();
#endif
valloc(size_t n);
Equivalent to memalign(pagesize, n), where pagesize is the page
size of the system.  If the pagesize is unknown, 4096 is used.

#if __STD_C
Void_t* public_vALLOc(size_t);
#else
Void_t* public_vALLOc();
#endif
mallopt(int parameter_number, int parameter_value)
Sets tunable parameters.  The format is to provide a
(parameter-number, parameter-value) pair.  mallopt then sets the
corresponding parameter to the argument value if it can (i.e., so
long as the value is meaningful), and returns 1 if successful else
0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
normally defined in malloc.h.  Only one of these (M_MXFAST) is used
in this malloc.  The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
so setting them has no effect.  But this malloc also supports four
other options in mallopt.  See below for details.  Briefly, supported
parameters are as follows (listed defaults are for "typical"
configurations).

Symbol            param #  default    allowed param values
M_MXFAST          1        64         0-80  (0 disables fastbins)
M_TRIM_THRESHOLD  -1       256*1024   any   (-1U disables trimming)
M_TOP_PAD         -2       0          any
M_MMAP_THRESHOLD  -3       256*1024   any   (or 0 if no MMAP support)
M_MMAP_MAX        -4       65536      any   (0 disables use of mmap)
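For example, a long-lived program that wants memory returned to the
system more eagerly might lower both release thresholds, echoing the
XF86 settings discussed further below (hypothetical usage):

  mallopt(M_TRIM_THRESHOLD, 128 * 1024);  /* trim when >128K is unused */
  mallopt(M_MMAP_THRESHOLD, 192 * 1024);  /* mmap requests of >= 192K  */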
#if __STD_C
int public_mALLOPt(int, int);
#else
int public_mALLOPt();
#endif
mallinfo()
Returns (by copy) a struct containing various summary statistics:

arena:     current total non-mmapped bytes allocated from system
ordblks:   the number of free chunks
smblks:    the number of fastbin blocks (i.e., small chunks that
           have been freed but not reused or consolidated)
hblks:     current number of mmapped regions
hblkhd:    total bytes held in mmapped regions
usmblks:   the maximum total allocated space.  This will be greater
           than current total if trimming has occurred.
fsmblks:   total bytes held in fastbin blocks
uordblks:  current total allocated space (normal or mmapped)
fordblks:  total free space
keepcost:  the maximum number of bytes that could ideally be released
           back to system via malloc_trim.  ("ideally" means that
           it ignores page restrictions etc.)

Because these fields are ints, but internal bookkeeping may
be kept as longs, the reported values may wrap around zero and
thus be inaccurate.
#if __STD_C
struct mallinfo public_mALLINFo(void);
#else
struct mallinfo public_mALLINFo();
#endif
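For example (a hypothetical reporting snippet):

  struct mallinfo mi = mallinfo();
  fprintf(stderr, "in use: %d  free: %d  mmapped: %d\n",
          mi.uordblks, mi.fordblks, mi.hblkhd);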
independent_calloc(size_t n_elements, size_t element_size, Void_t* chunks[]);

independent_calloc is similar to calloc, but instead of returning a
single cleared space, it returns an array of pointers to n_elements
independent elements that can hold contents of size elem_size, each
of which starts out cleared, and can be independently freed,
realloc'ed etc.  The elements are guaranteed to be adjacently
allocated (this is not guaranteed to occur with multiple callocs or
mallocs), which may also improve cache locality in some
applications.

The "chunks" argument is optional (i.e., may be null, which is
probably the most typical usage).  If it is null, the returned array
is itself dynamically allocated and should also be freed when it is
no longer needed.  Otherwise, the chunks array must be of at least
n_elements in length.  It is filled in with the pointers to the
chunks.

In either case, independent_calloc returns this pointer array, or
null if the allocation failed.  If n_elements is zero and "chunks"
is null, it returns a chunk representing an array with zero elements
(which should be freed if not wanted).

Each element must be individually freed when it is no longer
needed.  If you'd like to instead be able to free all at once, you
should instead use regular calloc and assign pointers into this
space to represent elements.  (In this case though, you cannot
independently free elements.)
independent_calloc simplifies and speeds up implementations of many
kinds of pools.  It may also be useful when constructing large data
structures that initially have a fixed number of fixed-sized nodes,
but the number is not known at compile time, and some of the nodes
may later need to be freed.  For example:

struct Node { int item; struct Node* next; };

struct Node* build_list() {
  struct Node** pool;
  int i;
  int n = read_number_of_nodes_needed();
  if (n <= 0) return 0;
  pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
  if (pool == 0) die();
  // organize into a linked list...
  struct Node* first = pool[0];
  for (i = 0; i < n-1; ++i)
    pool[i]->next = pool[i+1];
  free(pool);  // Can now free the array (or not, if it is needed later)
  return first;
}
#if __STD_C
Void_t** public_iCALLOc(size_t, size_t, Void_t**);
#else
Void_t** public_iCALLOc();
#endif
independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);

independent_comalloc allocates, all at once, a set of n_elements
chunks with sizes indicated in the "sizes" array.  It returns
an array of pointers to these elements, each of which can be
independently freed, realloc'ed etc.  The elements are guaranteed to
be adjacently allocated (this is not guaranteed to occur with
multiple callocs or mallocs), which may also improve cache locality
in some applications.

The "chunks" argument is optional (i.e., may be null).  If it is null
the returned array is itself dynamically allocated and should also
be freed when it is no longer needed.  Otherwise, the chunks array
must be of at least n_elements in length.  It is filled in with the
pointers to the chunks.

In either case, independent_comalloc returns this pointer array, or
null if the allocation failed.  If n_elements is zero and chunks is
null, it returns a chunk representing an array with zero elements
(which should be freed if not wanted).

Each element must be individually freed when it is no longer
needed.  If you'd like to instead be able to free all at once, you
should instead use a single regular malloc, and assign pointers at
particular offsets in the aggregate space.  (In this case though, you
cannot independently free elements.)

independent_comalloc differs from independent_calloc in that each
element may have a different size, and also that it does not
automatically clear elements.

independent_comalloc can be used to speed up allocation in cases
where several structs or objects must always be allocated at the
same time.  For example:
struct Head { ... };
struct Foot { ... };

void send_message(char* msg) {
  int msglen = strlen(msg);
  size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
  void* chunks[3];
  if (independent_comalloc(3, sizes, chunks) == 0)
    die();
  struct Head* head = (struct Head*)(chunks[0]);
  char*        body = (char*)(chunks[1]);
  struct Foot* foot = (struct Foot*)(chunks[2]);
  strcpy(body, msg);
  // ...
}
In general though, independent_comalloc is worth using only for
larger values of n_elements.  For small values, you probably won't
detect enough difference from series of malloc calls to bother.

Overuse of independent_comalloc can increase overall memory usage,
since it cannot reuse existing noncontiguous small chunks that
might be available for some of the elements.

#if __STD_C
Void_t** public_iCOMALLOc(size_t, size_t*, Void_t**);
#else
Void_t** public_iCOMALLOc();
#endif
pvalloc(size_t n);
Equivalent to valloc(minimum-page-that-holds(n)), that is,
round up n to nearest pagesize.

#if __STD_C
Void_t* public_pVALLOc(size_t);
#else
Void_t* public_pVALLOc();
#endif
cfree(Void_t* p);
Equivalent to free(p).

cfree is needed/defined on some systems that pair it with calloc,
for odd historical reasons (such as: cfree is used in example
code in the first edition of K&R).

#if __STD_C
void public_cFREe(Void_t*);
#else
void public_cFREe();
#endif
malloc_trim(size_t pad);

If possible, gives memory back to the system (via negative
arguments to sbrk) if there is unused memory at the `high' end of
the malloc pool.  You can call this after freeing large blocks of
memory to potentially reduce the system-level memory requirements
of a program.  However, it cannot guarantee to reduce memory.  Under
some allocation patterns, some large free blocks of memory will be
locked between two used chunks, so they cannot be given back to
the system.

The `pad' argument to malloc_trim represents the amount of free
trailing space to leave untrimmed.  If this argument is zero,
only the minimum amount of memory to maintain internal data
structures will be left (one page or less).  Non-zero arguments
can be supplied to maintain enough trailing space to service
future expected allocations without having to re-obtain memory
from the system.

Malloc_trim returns 1 if it actually released any memory, else 0.
On systems that do not support "negative sbrks", it will always
return 0.

#if __STD_C
int public_mTRIm(size_t);
#else
int public_mTRIm();
#endif
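For example (hypothetical usage; free_all_caches is a stand-in for
application code that frees large blocks):

  free_all_caches();
  if (malloc_trim(64 * 1024))   /* keep 64K of slack for future requests */
    fprintf(stderr, "returned unused memory to the system\n");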
malloc_usable_size(Void_t* p);

Returns the number of bytes you can actually use in
an allocated chunk, which may be more than you requested (although
often not) due to alignment and minimum size constraints.
You can use this many bytes without worrying about
overwriting other allocated objects.  This is not a particularly great
programming practice.  malloc_usable_size can be more useful in
debugging and assertions, for example:

p = malloc(n);
assert(malloc_usable_size(p) >= 256);

#if __STD_C
size_t public_mUSABLe(Void_t*);
#else
size_t public_mUSABLe();
#endif
malloc_stats();
Prints on stderr the amount of space obtained from the system (both
via sbrk and mmap), the maximum amount (which may be more than
current if malloc_trim and/or munmap got called), and the current
number of bytes allocated via malloc (or realloc, etc) but not yet
freed.  Note that this is the number of bytes allocated, not the
number requested.  It will be larger than the number requested
because of alignment and bookkeeping overhead.  Because it includes
alignment wastage as being in use, this figure may be greater than
zero even when no user-level chunks are allocated.

The reported current and maximum system memory can be inaccurate if
a program makes other calls to system memory allocation functions
(normally sbrk) outside of malloc.

malloc_stats prints only the most commonly interesting statistics.
More information can be obtained by calling mallinfo.

#if __STD_C
void public_mSTATs();
#else
void public_mSTATs();
#endif
/* mallopt tuning options */
M_MXFAST is the maximum request size used for "fastbins", special bins
that hold returned chunks without consolidating their spaces.  This
enables future requests for chunks of the same size to be handled
very quickly, but can increase fragmentation, and thus increase the
overall memory footprint of a program.

This malloc manages fastbins very conservatively yet still
efficiently, so fragmentation is rarely a problem for values less
than or equal to the default.  The maximum supported value of MXFAST
is 80.  You wouldn't want it any higher than this anyway.  Fastbins
are designed especially for use with many small structs, objects or
strings -- the default handles structs/objects/arrays with sizes up
to 16 4-byte fields, or small strings representing words, tokens,
etc.  Using fastbins for larger objects normally worsens
fragmentation without improving speed.

M_MXFAST is set in REQUEST size units.  It is internally used in
chunksize units, which adds padding and alignment.  You can reduce
M_MXFAST to 0 to disable all use of fastbins.  This causes the malloc
algorithm to be a closer approximation of fifo-best-fit in all cases,
not just for larger requests, but will generally cause it to be
slower.
/* M_MXFAST is a standard SVID/XPG tuning option, usually listed in malloc.h */
#ifndef M_MXFAST
#define M_MXFAST 1
#endif

#ifndef DEFAULT_MXFAST
#define DEFAULT_MXFAST 64
#endif
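For example, to approximate pure fifo-best-fit for all request sizes
(hypothetical usage):

  mallopt(M_MXFAST, 0);   /* disable fastbins entirely */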
M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
to keep before releasing via malloc_trim in free().

Automatic trimming is mainly useful in long-lived programs.
Because trimming via sbrk can be slow on some systems, and can
sometimes be wasteful (in cases where programs immediately
afterward allocate more large chunks) the value should be high
enough so that your overall system performance would improve by
releasing this much memory.

The trim threshold and the mmap control parameters (see below)
can be traded off with one another.  Trimming and mmapping are
two different ways of releasing unused memory back to the
system.  Between these two, it is often possible to keep
system-level demands of a long-lived program down to a bare
minimum.  For example, in one test suite of sessions measuring
the XF86 X server on Linux, using a trim threshold of 128K and a
mmap threshold of 192K led to near-minimal long term resource
consumption.

If you are using this malloc in a long-lived program, it should
pay to experiment with these values.  As a rough guide, you
might set to a value close to the average size of a process
(program) running on your system.  Releasing this much memory
would allow such a process to run in memory.  Generally, it's
worth it to tune for trimming rather than memory mapping when a
program undergoes phases where several large chunks are
allocated and released in ways that can reuse each other's
storage, perhaps mixed with phases where there are no such
chunks at all.  And in well-behaved long-lived programs,
controlling release of large blocks via trimming versus mapping
is usually faster.

However, in most programs, these parameters serve mainly as
protection against the system-level effects of carrying around
massive amounts of unneeded memory.  Since frequent calls to
sbrk, mmap, and munmap otherwise degrade performance, the default
parameters are set to relatively high values that serve only as
safeguards.

The trim value must be greater than page size to have any useful
effect.  To disable trimming completely, you can set to
(unsigned long)(-1).

Trim settings interact with fastbin (MXFAST) settings: Unless
TRIM_FASTBINS is defined, automatic trimming never takes place upon
freeing a chunk with size less than or equal to MXFAST.  Trimming is
instead delayed until subsequent freeing of larger chunks.  However,
you can still force an attempted trim by calling malloc_trim.

Also, trimming is not generally possible in cases where
the main arena is obtained via mmap.

Note that the trick some people use of mallocing a huge space and
then freeing it at program startup, in an attempt to reserve system
memory, doesn't have the intended effect under automatic trimming,
since that memory will immediately be returned to the system.

#define M_TRIM_THRESHOLD -1

#ifndef DEFAULT_TRIM_THRESHOLD
#define DEFAULT_TRIM_THRESHOLD (256 * 1024)
#endif
M_TOP_PAD is the amount of extra `padding' space to allocate or
retain whenever sbrk is called.  It is used in two ways internally:

* When sbrk is called to extend the top of the arena to satisfy
  a new malloc request, this much padding is added to the sbrk
  request.

* When malloc_trim is called automatically from free(),
  it is used as the `pad' argument.

In both cases, the actual amount of padding is rounded
so that the end of the arena is always a system page boundary.

The main reason for using padding is to avoid calling sbrk so
often.  Having even a small pad greatly reduces the likelihood
that nearly every malloc request during program start-up (or
after trimming) will invoke sbrk, which needlessly wastes
time.

Automatic rounding-up to page-size units is normally sufficient
to avoid measurable overhead, so the default is 0.  However, in
systems where sbrk is relatively slow, it can pay to increase
this value, at the expense of carrying around more memory than
is needed.

#define M_TOP_PAD -2

#ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD (0)
#endif
M_MMAP_THRESHOLD is the request size threshold for using mmap()
to service a request.  Requests of at least this size that cannot
be allocated using already-existing space will be serviced via mmap.
(If enough normal freed space already exists it is used instead.)

Using mmap segregates relatively large chunks of memory so that
they can be individually obtained and released from the host
system.  A request serviced through mmap is never reused by any
other request (at least not directly; the system may just so
happen to remap successive requests to the same locations).

Segregating space in this way has the benefits that:

1. Mmapped space can ALWAYS be individually released back
   to the system, which helps keep the system level memory
   demands of a long-lived program low.
2. Mapped memory can never become `locked' between
   other chunks, as can happen with normally allocated chunks, which
   means that even trimming via malloc_trim would not release them.
3. On some systems with "holes" in address spaces, mmap can obtain
   memory that sbrk cannot.

However, it has the disadvantages that:

1. The space cannot be reclaimed, consolidated, and then
   used to service later requests, as happens with normal chunks.
2. It can lead to more wastage because of mmap page alignment
   requirements.
3. It causes malloc performance to be more dependent on host
   system memory management support routines which may vary in
   implementation quality and may impose arbitrary
   limitations.  Generally, servicing a request via normal
   malloc steps is faster than going through a system's mmap.

The advantages of mmap nearly always outweigh disadvantages for
"large" chunks, but the value of "large" varies across systems.  The
default is an empirically derived value that works well in most
systems.

#define M_MMAP_THRESHOLD -3

#ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (256 * 1024)
#endif
M_MMAP_MAX is the maximum number of requests to simultaneously
service using mmap.  This parameter exists because:

. Some systems have a limited number of internal tables for
  use by mmap, and using more than a few of them may degrade
  performance.

The default is set to a value that serves only as a safeguard.
Setting to 0 disables use of mmap for servicing large requests.  If
HAVE_MMAP is not set, the default value is 0, and attempts to set it
to non-zero values in mallopt will fail.

#define M_MMAP_MAX -4

#ifndef DEFAULT_MMAP_MAX
#if HAVE_MMAP
#define DEFAULT_MMAP_MAX (65536)
#else
#define DEFAULT_MMAP_MAX (0)
#endif
#endif
}; /* end of extern "C" */
========================================================================
To make a fully customizable malloc.h header file, cut everything
above this line, put into file malloc.h, edit to suit, and #include it
on the next line, as well as in programs that use this malloc.
========================================================================

/* #include "malloc.h" */
/* --------------------- public wrappers ---------------------- */

#ifdef USE_PUBLIC_MALLOC_WRAPPERS
/* Declare all routines as internal */

#if __STD_C
static Void_t*  mALLOc(size_t);
static void     fREe(Void_t*);
static Void_t*  rEALLOc(Void_t*, size_t);
static Void_t*  mEMALIGn(size_t, size_t);
static Void_t*  vALLOc(size_t);
static Void_t*  pVALLOc(size_t);
static Void_t*  cALLOc(size_t, size_t);
static Void_t** iCALLOc(size_t, size_t, Void_t**);
static Void_t** iCOMALLOc(size_t, size_t*, Void_t**);
static void     cFREe(Void_t*);
static int      mTRIm(size_t);
static size_t   mUSABLe(Void_t*);
static void     mSTATs();
static int      mALLOPt(int, int);
static struct mallinfo mALLINFo(void);
#else
static Void_t*  mALLOc();
static void     fREe();
static Void_t*  rEALLOc();
static Void_t*  mEMALIGn();
static Void_t*  vALLOc();
static Void_t*  pVALLOc();
static Void_t*  cALLOc();
static Void_t** iCALLOc();
static Void_t** iCOMALLOc();
static void     cFREe();
static int      mTRIm();
static size_t   mUSABLe();
static void     mSTATs();
static int      mALLOPt();
static struct mallinfo mALLINFo();
#endif
MALLOC_PREACTION and MALLOC_POSTACTION should be
defined to return 0 on success, and nonzero on failure.
The return value of MALLOC_POSTACTION is currently ignored
in wrapper functions since there is no reasonable default
action to take on failure.
#ifdef USE_MALLOC_LOCK

#ifdef WIN32

static int mALLOC_MUTEx;
#define MALLOC_PREACTION   slwait(&mALLOC_MUTEx)
#define MALLOC_POSTACTION  slrelease(&mALLOC_MUTEx)

#else

#include <pthread.h>

static pthread_mutex_t mALLOC_MUTEx = PTHREAD_MUTEX_INITIALIZER;

#define MALLOC_PREACTION   pthread_mutex_lock(&mALLOC_MUTEx)
#define MALLOC_POSTACTION  pthread_mutex_unlock(&mALLOC_MUTEx)

#endif /* WIN32 */

#else /* USE_MALLOC_LOCK */

/* Substitute anything you like for these */

#define MALLOC_PREACTION   (0)
#define MALLOC_POSTACTION  (0)

#endif /* USE_MALLOC_LOCK */
Void_t* public_mALLOc(size_t bytes) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = mALLOc(bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

void public_fREe(Void_t* m) {
  if (MALLOC_PREACTION != 0) {
    return;
  }
  fREe(m);
  if (MALLOC_POSTACTION != 0) {
  }
}

Void_t* public_rEALLOc(Void_t* m, size_t bytes) {
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = rEALLOc(m, bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t* public_mEMALIGn(size_t alignment, size_t bytes) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = mEMALIGn(alignment, bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t* public_vALLOc(size_t bytes) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = vALLOc(bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t* public_pVALLOc(size_t bytes) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = pVALLOc(bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t* public_cALLOc(size_t n, size_t elem_size) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = cALLOc(n, elem_size);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t** public_iCALLOc(size_t n, size_t elem_size, Void_t** chunks) {
  Void_t** m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = iCALLOc(n, elem_size, chunks);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t** public_iCOMALLOc(size_t n, size_t sizes[], Void_t** chunks) {
  Void_t** m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = iCOMALLOc(n, sizes, chunks);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

void public_cFREe(Void_t* m) {
  if (MALLOC_PREACTION != 0) {
    return;
  }
  cFREe(m);
  if (MALLOC_POSTACTION != 0) {
  }
}

int public_mTRIm(size_t s) {
  int result;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  result = mTRIm(s);
  if (MALLOC_POSTACTION != 0) {
  }
  return result;
}

size_t public_mUSABLe(Void_t* m) {
  size_t result;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  result = mUSABLe(m);
  if (MALLOC_POSTACTION != 0) {
  }
  return result;
}

void public_mSTATs() {
  if (MALLOC_PREACTION != 0) {
    return;
  }
  mSTATs();
  if (MALLOC_POSTACTION != 0) {
  }
}

struct mallinfo public_mALLINFo() {
  struct mallinfo m;
  if (MALLOC_PREACTION != 0) {
    struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
    return nm;
  }
  m = mALLINFo();
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

int public_mALLOPt(int p, int v) {
  int result;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  result = mALLOPt(p, v);
  if (MALLOC_POSTACTION != 0) {
  }
  return result;
}

#endif /* USE_PUBLIC_MALLOC_WRAPPERS */
/* ------------- Optional versions of memcopy ---------------- */

#if USE_MEMCPY

/*
  Note: memcpy is ONLY invoked with non-overlapping regions,
  so the (usually slower) memmove is not needed.
*/

#define MALLOC_COPY(dest, src, nbytes)  memcpy(dest, src, nbytes)
#define MALLOC_ZERO(dest, nbytes)       memset(dest, 0,   nbytes)
#else /* !USE_MEMCPY */

/* Use Duff's device for good zeroing/copying performance. */

#define MALLOC_ZERO(charp, nbytes)                                          \
do {                                                                        \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                         \
  CHUNK_SIZE_T mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T);                    \
  long mcn;                                                                 \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }           \
  switch (mctmp) {                                                          \
    case 0: for(;;) { *mzp++ = 0;                                           \
    case 7:           *mzp++ = 0;                                           \
    case 6:           *mzp++ = 0;                                           \
    case 5:           *mzp++ = 0;                                           \
    case 4:           *mzp++ = 0;                                           \
    case 3:           *mzp++ = 0;                                           \
    case 2:           *mzp++ = 0;                                           \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }              \
  }                                                                         \
} while(0)
#define MALLOC_COPY(dest,src,nbytes)                                        \
do {                                                                        \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                          \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                         \
  CHUNK_SIZE_T mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T);                    \
  long mcn;                                                                 \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }           \
  switch (mctmp) {                                                          \
    case 0: for(;;) { *mcdst++ = *mcsrc++;                                  \
    case 7:           *mcdst++ = *mcsrc++;                                  \
    case 6:           *mcdst++ = *mcsrc++;                                  \
    case 5:           *mcdst++ = *mcsrc++;                                  \
    case 4:           *mcdst++ = *mcsrc++;                                  \
    case 3:           *mcdst++ = *mcsrc++;                                  \
    case 2:           *mcdst++ = *mcsrc++;                                  \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }     \
  }                                                                         \
} while(0)

#endif /* USE_MEMCPY */
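As a quick sanity check, the unrolled loops above should behave exactly
like memcpy/memset for whole numbers of INTERNAL_SIZE_T words.  A
hypothetical standalone test (not part of the original file):

  #include <string.h>
  #include <assert.h>

  static void test_malloc_copy_zero(void) {
    INTERNAL_SIZE_T src[24], dst[24], ref[24];
    size_t i, nbytes = sizeof(src);
    for (i = 0; i < 24; ++i) src[i] = (INTERNAL_SIZE_T)(i * 2654435761UL);
    MALLOC_COPY(dst, src, nbytes);          /* Duff's-device copy */
    memcpy(ref, src, nbytes);
    assert(memcmp(dst, ref, nbytes) == 0);
    MALLOC_ZERO(dst, nbytes);               /* Duff's-device zero */
    memset(ref, 0, nbytes);
    assert(memcmp(dst, ref, nbytes) == 0);
  }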
/* ------------------ MMAP support ------------------ */

#if HAVE_MMAP

#ifndef LACKS_FCNTL_H
#include <fcntl.h>
#endif

#ifndef LACKS_SYS_MMAN_H
#include <sys/mman.h>
#endif

#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif
Nearly all versions of mmap support MAP_ANONYMOUS,
so the following is unlikely to be needed, but is
supplied just in case.

#ifndef MAP_ANONYMOUS

static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */

#define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \
 (dev_zero_fd = open("/dev/zero", O_RDWR), \
  mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \
   mmap((addr), (size), (prot), (flags), dev_zero_fd, 0))

#else

#define MMAP(addr, size, prot, flags) \
 (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0))

#endif
#endif /* HAVE_MMAP */

----------------------- Chunk representations -----------------------

This struct declaration is misleading (but accurate and necessary).
It declares a "view" into memory allowing access to necessary
fields at known offsets from a given base.  See explanation below.
struct malloc_chunk {

  INTERNAL_SIZE_T      prev_size;  /* Size of previous chunk (if free). */
  INTERNAL_SIZE_T      size;       /* Size in bytes, including overhead. */

  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk* mchunkptr;
malloc_chunk details:

(The following includes lightly edited explanations by Colin Plumb.)

Chunks of memory are maintained using a `boundary tag' method as
described in e.g., Knuth or Standish.  (See the paper by Paul
Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
survey of such techniques.)  Sizes of free chunks are stored both
in the front of each chunk and at the end.  This makes
consolidating fragmented chunks into bigger chunks very fast.  The
size fields also hold bits representing whether chunks are free or
in use.
1873 An allocated chunk looks like this:
1876 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1877 | Size of previous chunk, if allocated | |
1878 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1879 | Size of chunk, in bytes |P|
1880 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1881 | User data starts here... .
1883 . (malloc_usable_space() bytes) .
1885 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1887 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1890 Where "chunk" is the front of the chunk for the purpose of most of
1891 the malloc code, but "mem" is the pointer that is returned to the
1892 user. "Nextchunk" is the beginning of the next contiguous chunk.
1894 Chunks always begin on even word boundries, so the mem portion
1895 (which is returned to the user) is also on an even word boundary, and
1896 thus at least double-word aligned.
    Free chunks are stored in circular doubly-linked lists, and look like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    The P (PREV_INUSE) bit, stored in the unused low-order bit of the
    chunk size (which is always a multiple of two words), is an in-use
    bit for the *previous* chunk.  If that bit is *clear*, then the
    word before the current chunk size contains the previous chunk
    size, and can be used to find the front of the previous chunk.
    The very first chunk allocated always has this bit set,
    preventing access to non-existent (or non-owned) memory. If
    prev_inuse is set for any given chunk, then you CANNOT determine
    the size of the previous chunk, and might even get a memory
    addressing fault when trying to do so.

    Note that the `foot' of the current chunk is actually represented
    as the prev_size of the NEXT chunk. This makes it easier to
    deal with alignments etc but can be very confusing when trying
    to extend or adapt this code.
    The two exceptions to all this are

     1. The special chunk `top' doesn't bother using the
        trailing size field since there is no next contiguous chunk
        that would have to index off it. After initialization, `top'
        is forced to always exist.  If it would become less than
        MINSIZE bytes long, it is replenished.

     2. Chunks allocated via mmap, which have the second-lowest-order
        bit (IS_MMAPPED) set in their size fields.  Because they are
        allocated one-by-one, each must contain its own trailing size field.

*/
/*
  ---------- Size and alignment checks and conversions ----------
*/

/* conversion from malloc headers to user pointers, and back */

#define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
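/*
  Worked example (a sketch, assuming 4-byte SIZE_SZ): chunk2mem skips
  the two leading size fields, so the user pointer sits 2*4 == 8 bytes
  past the chunk base, and mem2chunk inverts it exactly:

      mchunkptr p   = ...;
      Void_t*   mem = chunk2mem(p);     8 bytes past p
      assert(mem2chunk(mem) == p);      round-trips
*/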
/* The smallest possible chunk */
#define MIN_CHUNK_SIZE        (sizeof(struct malloc_chunk))

/* The smallest size we can malloc is an aligned minimal chunk */

#define MINSIZE  \
  (CHUNK_SIZE_T)(((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK))

/* Check if m has acceptable alignment */

#define aligned_OK(m)  (((PTR_UINT)((m)) & (MALLOC_ALIGN_MASK)) == 0)
/*
   Check if a request is so large that it would wrap around zero when
   padded and aligned. To simplify some other code, the bound is made
   low enough so that adding MINSIZE will also not wrap around zero.
*/

#define REQUEST_OUT_OF_RANGE(req)                                 \
  ((CHUNK_SIZE_T)(req) >=                                         \
   (CHUNK_SIZE_T)(INTERNAL_SIZE_T)(-2 * MINSIZE))
/* pad request bytes into a usable size -- internal version */

#define request2size(req)                                         \
  (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE)  ?             \
   MINSIZE :                                                      \
   ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)
/*  Same, except also perform argument check */

#define checked_request2size(req, sz)  \
  if (REQUEST_OUT_OF_RANGE(req)) {     \
    MALLOC_FAILURE_ACTION;             \
    return 0;                          \
  }                                    \
  (sz) = request2size(req);
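/*
  Worked example (assuming 4-byte SIZE_SZ and 8-byte alignment, so
  MALLOC_ALIGN_MASK == 7 and MINSIZE == 16): request2size(13) pads to
  13 + 4 + 7 = 24 and masks to 24 & ~7 == 24; request2size(1) pads to
  only 12, which is below MINSIZE, so it is bumped up to 16.
*/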
/*
  --------------- Physical chunk operations ---------------
*/


/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
#define PREV_INUSE 0x1

/* extract inuse bit of previous chunk */
#define prev_inuse(p)       ((p)->size & PREV_INUSE)


/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
#define IS_MMAPPED 0x2

/* check for mmap()'ed chunk */
#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)

/*
  Bits to mask off when extracting size

  Note: IS_MMAPPED is intentionally not masked off from size field in
  macros for which mmapped chunks should never be seen. This should
  cause helpful core dumps to occur if it is tried by accident by
  people extending or adapting this malloc.
*/
#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)

/* Get size, ignoring use bits */
#define chunksize(p)         ((p)->size & ~(SIZE_BITS))
/* Ptr to next physical malloc_chunk. */
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))

/* Ptr to previous physical malloc_chunk */
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))

/* Treat space at ptr + offset as a chunk */
#define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))

/* extract p's inuse bit */
#define inuse(p)\
((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)

/* set/clear chunk as being inuse without otherwise disturbing */
#define set_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE

#define clear_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)


/* check/set/clear inuse bits in known places */
#define inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)

#define set_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)

#define clear_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))


/* Set size at head, without disturbing its use bit */
#define set_head_size(p, s)  ((p)->size = (((p)->size & PREV_INUSE) | (s)))

/* Set size/use field */
#define set_head(p, s)       ((p)->size = (s))

/* Set size at footer (only when chunk is not in use) */
#define set_foot(p, s)       (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
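/*
  Illustrative sketch (not part of the allocator) of how these macros
  compose when splitting a free chunk p of size 48 into a 32-byte piece
  and a 16-byte remainder q:

      set_head_size(p, 32);            keeps p's PREV_INUSE bit
      q = chunk_at_offset(p, 32);
      set_head(q, 16 | PREV_INUSE);    p looks in-use from q's side
      set_foot(q, 16);                 footer, i.e. next chunk's prev_size

  Afterward next_chunk(p) == q and chunksize(q) == 16.
*/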
/*
  -------------------- Internal data structures --------------------

   All internal state is held in an instance of malloc_state defined
   below. There are no other static variables, except in two optional
   cases:
   * If USE_MALLOC_LOCK is defined, the mALLOC_MUTEx declared above.
   * If HAVE_MMAP is true, but mmap doesn't support
     MAP_ANONYMOUS, a dummy file descriptor for mmap.

   Beware of lots of tricks that minimize the total bookkeeping space
   requirements. The result is a little over 1K bytes (for 4byte
   pointers and size_t.)
*/
/*
  Bins

    An array of bin headers for free chunks. Each bin is doubly
    linked.  The bins are approximately proportionally (log) spaced.
    There are a lot of these bins (128). This may look excessive, but
    works very well in practice.  Most bins hold sizes that are
    unusual as malloc request sizes, but are more usual for fragments
    and consolidated sets of chunks, which is what these bins hold, so
    they can be found quickly.  All procedures maintain the invariant
    that no consolidated chunk physically borders another one, so each
    chunk in a list is known to be preceded and followed by either
    inuse chunks or the ends of memory.

    Chunks in bins are kept in size order, with ties going to the
    approximately least recently used chunk. Ordering isn't needed
    for the small bins, which all contain the same-sized chunks, but
    facilitates best-fit allocation for larger chunks. These lists
    are just sequential. Keeping them in order almost never requires
    enough traversal to warrant using fancier ordered data
    structures.

    Chunks of the same size are linked with the most
    recently freed at the front, and allocations are taken from the
    back.  This results in LRU (FIFO) allocation order, which tends
    to give each chunk an equal opportunity to be consolidated with
    adjacent freed chunks, resulting in larger free chunks and less
    fragmentation.

    To simplify use in double-linked lists, each bin header acts
    as a malloc_chunk. This avoids special-casing for headers.
    But to conserve space and improve locality, we allocate
    only the fd/bk pointers of bins, and then use repositioning tricks
    to treat these as the fields of a malloc_chunk*.
*/
typedef struct malloc_chunk* mbinptr;

/* addressing -- note that bin_at(0) does not exist */
#define bin_at(m, i) ((mbinptr)((char*)&((m)->bins[(i)<<1]) - (SIZE_SZ<<1)))

/* analog of ++bin */
#define next_bin(b)  ((mbinptr)((char*)(b) + (sizeof(mchunkptr)<<1)))

/* Reminders about list directionality within bins */
#define first(b)     ((b)->fd)
#define last(b)      ((b)->bk)
/* Take a chunk off a bin list */
#define unlink(P, BK, FD) {                                            \
  FD = P->fd;                                                          \
  BK = P->bk;                                                          \
  FD->bk = BK;                                                         \
  BK->fd = FD;                                                         \
}
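/*
  Illustrative sketch (not part of the allocator): unlink splices P out
  of its circular doubly-linked list in O(1). Given temporaries:

      mchunkptr bck, fwd;
      unlink(p, bck, fwd);

  afterward fwd is the old p->fd, bck is the old p->bk, and the two
  now point at each other (fwd->bk == bck and bck->fd == fwd).
*/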
/*
  Indexing

    Bins for sizes < 512 bytes contain chunks of all the same size, spaced
    8 bytes apart. Larger bins are approximately logarithmically spaced:

    64 bins of size       8
    32 bins of size      64
    16 bins of size     512
     8 bins of size    4096
     4 bins of size   32768
     2 bins of size  262144
     1 bin  of size what's left

    The bins top out around 1MB because we expect to service large
    requests via mmap.
*/
#define NBINS             96
#define NSMALLBINS        32
#define SMALLBIN_WIDTH     8
#define MIN_LARGE_SIZE   256

#define in_smallbin_range(sz)  \
  ((CHUNK_SIZE_T)(sz) < (CHUNK_SIZE_T)MIN_LARGE_SIZE)

#define smallbin_index(sz)     (((unsigned)(sz)) >> 3)
/*
  Compute index for size. We expect this to be inlined when
  compiled with optimization, else not, which works out well.
*/
static int largebin_index(unsigned int sz) {
  unsigned int  x = sz >> SMALLBIN_WIDTH;
  unsigned int m;            /* bit position of highest set bit of x */

  if (x >= 0x10000) return NBINS-1;

  /* On intel, use BSRL instruction to find highest bit */
#if defined(__GNUC__) && defined(i386)

  __asm__("bsrl %1,%0\n\t"
          : "=r" (m)
          : "g"  (x));

#else
  {
    /*
      Based on branch-free nlz algorithm in chapter 5 of Henry
      S. Warren Jr's book "Hacker's Delight".
    */

    unsigned int n = ((x - 0x100) >> 16) & 8;
    x <<= n;
    m = ((x - 0x1000) >> 16) & 4;
    n += m;
    x <<= m;
    m = ((x - 0x4000) >> 16) & 2;
    n += m;
    x = (x << m) >> 14;
    m = 13 - n + (x & ~(x>>1));
  }
#endif

  /* Use next 2 bits to create finer-granularity bins */
  return NSMALLBINS + (m << 2) + ((sz >> (m + 6)) & 3);
}
#define bin_index(sz) \
 ((in_smallbin_range(sz)) ? smallbin_index(sz) : largebin_index(sz))
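/*
  Worked examples (a sketch of the default configuration): with 8-byte
  spacing, smallbin_index(16) == 2 and smallbin_index(248) == 31, so
  every small bin holds exactly one size. A request that normalizes to
  a size of 1416 bytes is out of smallbin range and goes through
  largebin_index, which derives the bin from the position of the
  highest set bit of (sz >> SMALLBIN_WIDTH) plus the next two size
  bits, giving four sub-bins per power-of-two range.
*/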
/*
  FIRST_SORTED_BIN_SIZE is the chunk size corresponding to the
  first bin that is maintained in sorted order. This must
  be the smallest size corresponding to a given bin.

  Normally, this should be MIN_LARGE_SIZE. But you can weaken
  best fit guarantees to sometimes speed up malloc by increasing value.
  Doing this means that malloc may choose a chunk that is
  non-best-fitting by up to the width of the bin.

  Some useful cutoff values:
      512 - all bins sorted
     2560 - leaves bins <=     64 bytes wide unsorted
    12288 - leaves bins <=    512 bytes wide unsorted
    65536 - leaves bins <=   4096 bytes wide unsorted
   262144 - leaves bins <=  32768 bytes wide unsorted
       -1 - no bins sorted (not recommended!)
*/

#define FIRST_SORTED_BIN_SIZE MIN_LARGE_SIZE
/* #define FIRST_SORTED_BIN_SIZE 65536 */
/*
  Unsorted chunks

    All remainders from chunk splits, as well as all returned chunks,
    are first placed in the "unsorted" bin. They are then placed
    in regular bins after malloc gives them ONE chance to be used before
    binning. So, basically, the unsorted_chunks list acts as a queue,
    with chunks being placed on it in free (and malloc_consolidate),
    and taken off (to be either used or placed in bins) in malloc.
*/

/* The otherwise unindexable 1-bin is used to hold unsorted chunks. */
#define unsorted_chunks(M)          (bin_at(M, 1))
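/*
  Worked example: after free(a) and free(b), both chunks sit only in
  the unsorted bin. The next malloc traverses them; one may be taken
  if it fits exactly, and whatever is merely passed over is moved to
  its proper size bin at that point, having had its one chance at
  cheap reuse.
*/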
/*
  Top

    The top-most available chunk (i.e., the one bordering the end of
    available memory) is treated specially. It is never included in
    any bin, is used only if no other chunk is available, and is
    released back to the system if it is very large (see
    M_TRIM_THRESHOLD).  Because top initially
    points to its own bin with initial zero size, thus forcing
    extension on the first malloc request, we avoid having any special
    code in malloc to check whether it even exists yet. But we still
    need to do so when getting memory from system, so we make
    initial_top treat the bin as a legal but unusable chunk during the
    interval between initialization and the first call to
    sYSMALLOc. (This is somewhat delicate, since it relies on
    the 2 preceding words to be zero during this interval as well.)
*/

/* Conveniently, the unsorted bin can be used as dummy top on first call */
#define initial_top(M)              (unsorted_chunks(M))
/*
  Binmap

    To help compensate for the large number of bins, a one-level index
    structure is used for bin-by-bin searching.  `binmap' is a
    bitvector recording whether bins are definitely empty so they can
    be skipped over during traversals.  The bits are NOT always
    cleared as soon as bins are empty, but instead only
    when they are noticed to be empty during traversal in malloc.
*/

/* Conservatively use 32 bits per map word, even if on 64bit system */
#define BINMAPSHIFT      5
#define BITSPERMAP       (1U << BINMAPSHIFT)
#define BINMAPSIZE       (NBINS / BITSPERMAP)

#define idx2block(i)     ((i) >> BINMAPSHIFT)
#define idx2bit(i)       ((1U << ((i) & ((1U << BINMAPSHIFT)-1))))

#define mark_bin(m,i)    ((m)->binmap[idx2block(i)] |=  idx2bit(i))
#define unmark_bin(m,i)  ((m)->binmap[idx2block(i)] &= ~(idx2bit(i)))
#define get_binmap(m,i)  ((m)->binmap[idx2block(i)] &   idx2bit(i))
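/*
  Worked example: bin 70 lives in binmap word idx2block(70) == 70 >> 5
  == 2, at bit idx2bit(70) == 1U << (70 & 31) == 1U << 6. mark_bin sets
  that bit when a chunk is binned; the bit is cleared lazily, only when
  a malloc traversal finds bin 70 empty.
*/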
/*
  Fastbins

    An array of lists holding recently freed small chunks.  Fastbins
    are not doubly linked.  It is faster to single-link them, and
    since chunks are never removed from the middles of these lists,
    double linking is not necessary. Also, unlike regular bins, they
    are not even processed in FIFO order (they use faster LIFO) since
    ordering doesn't much matter in the transient contexts in which
    fastbins are normally used.

    Chunks in fastbins keep their inuse bit set, so they cannot
    be consolidated with other free chunks. malloc_consolidate
    releases all chunks in fastbins and consolidates them with
    other free chunks.
*/

typedef struct malloc_chunk* mfastbinptr;

/* offset 2 to use otherwise unindexable first 2 bins */
#define fastbin_index(sz)        ((((unsigned int)(sz)) >> 3) - 2)

/* The maximum fastbin request size we support */
#define MAX_FAST_SIZE     80

#define NFASTBINS  (fastbin_index(request2size(MAX_FAST_SIZE))+1)
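/*
  Worked example: a freed chunk of size 32 maps to fastbin_index(32)
  == (32 >> 3) - 2 == 2. The "- 2" works because chunks smaller than
  MINSIZE never occur, so indices 0 and 1 would otherwise go unused.
*/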
/*
  FASTBIN_CONSOLIDATION_THRESHOLD is the size of a chunk in free()
  that triggers automatic consolidation of possibly-surrounding
  fastbin chunks. This is a heuristic, so the exact value should not
  matter too much. It is defined at half the default trim threshold as a
  compromise heuristic to only attempt consolidation if it is likely
  to lead to trimming. However, it is not dynamically tunable, since
  consolidation reduces fragmentation surrounding large chunks even
  if trimming is not used.
*/

#define FASTBIN_CONSOLIDATION_THRESHOLD  \
  ((unsigned long)(DEFAULT_TRIM_THRESHOLD) >> 1)
/*
  Since the lowest 2 bits in max_fast don't matter in size comparisons,
  they are used as flags.
*/

/*
  ANYCHUNKS_BIT held in max_fast indicates that there may be any
  freed chunks at all. It is set true when entering a chunk into any
  bin.
*/

#define ANYCHUNKS_BIT        (1U)

#define have_anychunks(M)     (((M)->max_fast &  ANYCHUNKS_BIT))
#define set_anychunks(M)      ((M)->max_fast |=  ANYCHUNKS_BIT)
#define clear_anychunks(M)    ((M)->max_fast &= ~ANYCHUNKS_BIT)

/*
  FASTCHUNKS_BIT held in max_fast indicates that there are probably
  some fastbin chunks. It is set true on entering a chunk into any
  fastbin, and cleared only in malloc_consolidate.
*/

#define FASTCHUNKS_BIT        (2U)

#define have_fastchunks(M)   (((M)->max_fast &  FASTCHUNKS_BIT))
#define set_fastchunks(M)    ((M)->max_fast |=  (FASTCHUNKS_BIT|ANYCHUNKS_BIT))
#define clear_fastchunks(M)  ((M)->max_fast &= ~(FASTCHUNKS_BIT))

/*
   Set value of max_fast.
   Use impossibly small value if 0.
*/

#define set_max_fast(M, s) \
  (M)->max_fast = (((s) == 0)? SMALLBIN_WIDTH: request2size(s)) | \
  ((M)->max_fast &  (FASTCHUNKS_BIT|ANYCHUNKS_BIT))

#define get_max_fast(M) \
  ((M)->max_fast & ~(FASTCHUNKS_BIT | ANYCHUNKS_BIT))
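/*
  Worked example: chunk sizes are multiples of 8, so size comparisons
  against max_fast never depend on its low bits. set_max_fast(av, 64)
  stores request2size(64) in the upper bits while preserving whatever
  FASTCHUNKS_BIT/ANYCHUNKS_BIT flags were already set; get_max_fast
  masks those two bits back off before the value is used as a size.
*/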
/*
  morecore_properties is a status word holding dynamically discovered
  or controlled properties of the morecore function
*/

#define MORECORE_CONTIGUOUS_BIT  (1U)

#define contiguous(M) \
        (((M)->morecore_properties &  MORECORE_CONTIGUOUS_BIT))
#define noncontiguous(M) \
        (((M)->morecore_properties &  MORECORE_CONTIGUOUS_BIT) == 0)
#define set_contiguous(M) \
        ((M)->morecore_properties |=  MORECORE_CONTIGUOUS_BIT)
#define set_noncontiguous(M) \
        ((M)->morecore_properties &= ~MORECORE_CONTIGUOUS_BIT)
/*
   ----------- Internal state representation and initialization -----------
*/

struct malloc_state {

  /* The maximum chunk size to be eligible for fastbin */
  INTERNAL_SIZE_T  max_fast;   /* low 2 bits used as flags */

  /* Fastbins */
  mfastbinptr      fastbins[NFASTBINS];

  /* Base of the topmost chunk -- not otherwise kept in a bin */
  mchunkptr        top;

  /* The remainder from the most recent split of a small request */
  mchunkptr        last_remainder;

  /* Normal bins packed as described above */
  mchunkptr        bins[NBINS * 2];

  /* Bitmap of bins. Trailing zero map handles cases of largest binned size */
  unsigned int     binmap[BINMAPSIZE+1];

  /* Tunable parameters */
  CHUNK_SIZE_T     trim_threshold;
  INTERNAL_SIZE_T  top_pad;
  INTERNAL_SIZE_T  mmap_threshold;

  /* Memory map support */
  int              n_mmaps;
  int              n_mmaps_max;
  int              max_n_mmaps;

  /* Cache malloc_getpagesize */
  unsigned int     pagesize;

  /* Track properties of MORECORE */
  unsigned int     morecore_properties;

  /* Statistics */
  INTERNAL_SIZE_T  mmapped_mem;
  INTERNAL_SIZE_T  sbrked_mem;
  INTERNAL_SIZE_T  max_sbrked_mem;
  INTERNAL_SIZE_T  max_mmapped_mem;
  INTERNAL_SIZE_T  max_total_mem;
};

typedef struct malloc_state *mstate;
/*
   There is exactly one instance of this struct in this malloc.
   If you are adapting this malloc in a way that does NOT use a static
   malloc_state, you MUST explicitly zero-fill it before using. This
   malloc relies on the property that malloc_state is initialized to
   all zeroes (as is true of C statics).
*/

static struct malloc_state av_;  /* never directly referenced */

/*
   All uses of av_ are via get_malloc_state().
   At most one "call" to get_malloc_state is made per invocation of
   the public versions of malloc and free, but other routines
   that in turn invoke malloc and/or free may call more than once.
   Also, it is called in check* routines if DEBUG is set.
*/

#define get_malloc_state() (&(av_))
/*
  Initialize a malloc_state struct.

  This is called only from within malloc_consolidate, which needs
  be called in the same contexts anyway.  It is never called directly
  outside of malloc_consolidate because some optimizing compilers try
  to inline it at all call points, which turns out not to be an
  optimization at all. (Inlining it in malloc_consolidate is fine though.)
*/

#if __STD_C
static void malloc_init_state(mstate av)
#else
static void malloc_init_state(av) mstate av;
#endif
{
  int     i;
  mbinptr bin;

  /* Establish circular links for normal bins */
  for (i = 1; i < NBINS; ++i) {
    bin = bin_at(av,i);
    bin->fd = bin->bk = bin;
  }

  av->top_pad        = DEFAULT_TOP_PAD;
  av->n_mmaps_max    = DEFAULT_MMAP_MAX;
  av->mmap_threshold = DEFAULT_MMAP_THRESHOLD;
  av->trim_threshold = DEFAULT_TRIM_THRESHOLD;

#if MORECORE_CONTIGUOUS
  set_contiguous(av);
#else
  set_noncontiguous(av);
#endif

  set_max_fast(av, DEFAULT_MXFAST);

  av->top            = initial_top(av);
  av->pagesize       = malloc_getpagesize;
}
/*
   Other internal utilities operating on mstates
*/

#if __STD_C
static Void_t*  sYSMALLOc(INTERNAL_SIZE_T, mstate);
static int      sYSTRIm(size_t, mstate);
static void     malloc_consolidate(mstate);
static Void_t** iALLOc(size_t, size_t*, int, Void_t**);
#else
static Void_t*  sYSMALLOc();
static int      sYSTRIm();
static void     malloc_consolidate();
static Void_t** iALLOc();
#endif
/*
  Debugging support

  These routines make a number of assertions about the states
  of data structures that should be true at all times. If any
  are not true, it's very likely that a user program has somehow
  trashed memory. (It's also possible that there is a coding error
  in malloc. In which case, please report it!)
*/

#if ! DEBUG

#define check_chunk(P)
#define check_free_chunk(P)
#define check_inuse_chunk(P)
#define check_remalloced_chunk(P,N)
#define check_malloced_chunk(P,N)
#define check_malloc_state()

#else
#define check_chunk(P)              do_check_chunk(P)
#define check_free_chunk(P)         do_check_free_chunk(P)
#define check_inuse_chunk(P)        do_check_inuse_chunk(P)
#define check_remalloced_chunk(P,N) do_check_remalloced_chunk(P,N)
#define check_malloced_chunk(P,N)   do_check_malloced_chunk(P,N)
#define check_malloc_state()        do_check_malloc_state()
/*
  Properties of all chunks
*/

#if __STD_C
static void do_check_chunk(mchunkptr p)
#else
static void do_check_chunk(p) mchunkptr p;
#endif
{
  mstate av = get_malloc_state();
  CHUNK_SIZE_T  sz = chunksize(p);
  /* min and max possible addresses assuming contiguous allocation */
  char* max_address = (char*)(av->top) + chunksize(av->top);
  char* min_address = max_address - av->sbrked_mem;

  if (!chunk_is_mmapped(p)) {

    /* Has legal address ... */
    if (p != av->top) {
      if (contiguous(av)) {
        assert(((char*)p) >= min_address);
        assert(((char*)p + sz) <= ((char*)(av->top)));
      }
    }
    else {
      /* top size is always at least MINSIZE */
      assert((CHUNK_SIZE_T)(sz) >= MINSIZE);
      /* top predecessor always marked inuse */
      assert(prev_inuse(p));
    }

  }
  else {
#if HAVE_MMAP
    /* address is outside main heap */
    if (contiguous(av) && av->top != initial_top(av)) {
      assert(((char*)p) < min_address || ((char*)p) > max_address);
    }
    /* chunk is page-aligned */
    assert(((p->prev_size + sz) & (av->pagesize-1)) == 0);
    /* mem is aligned */
    assert(aligned_OK(chunk2mem(p)));
#else
    /* force an appropriate assert violation if debug set */
    assert(!chunk_is_mmapped(p));
#endif
  }
}
/*
  Properties of free chunks
*/

#if __STD_C
static void do_check_free_chunk(mchunkptr p)
#else
static void do_check_free_chunk(p) mchunkptr p;
#endif
{
  mstate av = get_malloc_state();

  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
  mchunkptr next = chunk_at_offset(p, sz);

  do_check_chunk(p);

  /* Chunk must claim to be free ... */
  assert(!inuse(p));
  assert (!chunk_is_mmapped(p));

  /* Unless a special marker, must have OK fields */
  if ((CHUNK_SIZE_T)(sz) >= MINSIZE)
  {
    assert((sz & MALLOC_ALIGN_MASK) == 0);
    assert(aligned_OK(chunk2mem(p)));
    /* ... matching footer field */
    assert(next->prev_size == sz);
    /* ... and is fully consolidated */
    assert(prev_inuse(p));
    assert (next == av->top || inuse(next));

    /* ... and has minimally sane links */
    assert(p->fd->bk == p);
    assert(p->bk->fd == p);
  }
  else /* markers are always of size SIZE_SZ */
    assert(sz == SIZE_SZ);
}
/*
  Properties of inuse chunks
*/

#if __STD_C
static void do_check_inuse_chunk(mchunkptr p)
#else
static void do_check_inuse_chunk(p) mchunkptr p;
#endif
{
  mstate av = get_malloc_state();
  mchunkptr next;
  do_check_chunk(p);

  if (chunk_is_mmapped(p))
    return; /* mmapped chunks have no next/prev */

  /* Check whether it claims to be in use ... */
  assert(inuse(p));

  next = next_chunk(p);

  /* ... and is surrounded by OK chunks.
    Since more things can be checked with free chunks than inuse ones,
    if an inuse chunk borders them and debug is on, it's worth doing them.
  */
  if (!prev_inuse(p))  {
    /* Note that we cannot even look at prev unless it is not inuse */
    mchunkptr prv = prev_chunk(p);
    assert(next_chunk(prv) == p);
    do_check_free_chunk(prv);
  }

  if (next == av->top) {
    assert(prev_inuse(next));
    assert(chunksize(next) >= MINSIZE);
  }
  else if (!inuse(next))
    do_check_free_chunk(next);
}
/*
  Properties of chunks recycled from fastbins
*/

#if __STD_C
static void do_check_remalloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
#else
static void do_check_remalloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
#endif
{
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;

  do_check_inuse_chunk(p);

  /* Legal size ... */
  assert((sz & MALLOC_ALIGN_MASK) == 0);
  assert((CHUNK_SIZE_T)(sz) >= MINSIZE);
  /* ... and alignment */
  assert(aligned_OK(chunk2mem(p)));
  /* chunk is less than MINSIZE more than request */
  assert((long)(sz) - (long)(s) >= 0);
  assert((long)(sz) - (long)(s + MINSIZE) < 0);
}
/*
  Properties of nonrecycled chunks at the point they are malloced
*/

#if __STD_C
static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
#else
static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
#endif
{
  /* same as recycled case ... */
  do_check_remalloced_chunk(p, s);

  /*
    ... plus, must obey implementation invariant that prev_inuse is
    always true of any allocated chunk; i.e., that each allocated
    chunk borders either a previously allocated and still in-use
    chunk, or the base of its memory arena. This is ensured
    by making all allocations from the `lowest' part of any found
    chunk.  This does not necessarily hold however for chunks
    recycled via fastbins.
  */

  assert(prev_inuse(p));
}
/*
  Properties of malloc_state.

  This may be useful for debugging malloc, as well as detecting user
  programmer errors that somehow write into malloc_state.

  If you are extending or experimenting with this malloc, you can
  probably figure out how to hack this routine to print out or
  display chunk addresses, sizes, bins, and other instrumentation.
*/

static void do_check_malloc_state()
{
  mstate av = get_malloc_state();
  int i;
  mchunkptr p;
  mchunkptr q;
  mbinptr b;
  unsigned int binbit;
  int empty;
  unsigned int idx;
  INTERNAL_SIZE_T size;
  CHUNK_SIZE_T  total = 0;
  int max_fast_bin;

  /* internal size_t must be no wider than pointer type */
  assert(sizeof(INTERNAL_SIZE_T) <= sizeof(char*));

  /* alignment is a power of 2 */
  assert((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-1)) == 0);

  /* cannot run remaining checks until fully initialized */
  if (av->top == 0 || av->top == initial_top(av))
    return;

  /* pagesize is a power of 2 */
  assert((av->pagesize & (av->pagesize-1)) == 0);

  /* properties of fastbins */

  /* max_fast is in allowed range */
  assert(get_max_fast(av) <= request2size(MAX_FAST_SIZE));

  max_fast_bin = fastbin_index(av->max_fast);

  for (i = 0; i < NFASTBINS; ++i) {
    p = av->fastbins[i];

    /* all bins past max_fast are empty */
    if (i > max_fast_bin)
      assert(p == 0);

    while (p != 0) {
      /* each chunk claims to be inuse */
      do_check_inuse_chunk(p);
      total += chunksize(p);
      /* chunk belongs in this bin */
      assert(fastbin_index(chunksize(p)) == i);
      p = p->fd;
    }
  }

  if (total != 0)
    assert(have_fastchunks(av));
  else if (!have_fastchunks(av))
    assert(total == 0);

  /* check normal bins */
  for (i = 1; i < NBINS; ++i) {
    b = bin_at(av,i);

    /* binmap is accurate (except for bin 1 == unsorted_chunks) */
    if (i >= 2) {
      binbit = get_binmap(av,i);
      empty = last(b) == b;
      if (!binbit)
        assert(empty);
      else if (!empty)
        assert(binbit);
    }

    for (p = last(b); p != b; p = p->bk) {
      /* each chunk claims to be free */
      do_check_free_chunk(p);
      size = chunksize(p);
      total += size;
      if (i >= 2) {
        /* chunk belongs in bin */
        idx = bin_index(size);
        assert(idx == i);
        /* lists are sorted */
        if ((CHUNK_SIZE_T) size >= (CHUNK_SIZE_T)(FIRST_SORTED_BIN_SIZE)) {
          assert(p->bk == b ||
                 (CHUNK_SIZE_T)chunksize(p->bk) >=
                 (CHUNK_SIZE_T)chunksize(p));
        }
      }
      /* chunk is followed by a legal chain of inuse chunks */
      for (q = next_chunk(p);
           (q != av->top && inuse(q) &&
             (CHUNK_SIZE_T)(chunksize(q)) >= MINSIZE);
           q = next_chunk(q))
        do_check_inuse_chunk(q);
    }
  }

  /* top chunk is OK */
  check_chunk(av->top);

  /* sanity checks for statistics */

  assert(total <= (CHUNK_SIZE_T)(av->max_total_mem));
  assert(av->n_mmaps >= 0);
  assert(av->n_mmaps <= av->max_n_mmaps);

  assert((CHUNK_SIZE_T)(av->sbrked_mem) <=
         (CHUNK_SIZE_T)(av->max_sbrked_mem));

  assert((CHUNK_SIZE_T)(av->mmapped_mem) <=
         (CHUNK_SIZE_T)(av->max_mmapped_mem));

  assert((CHUNK_SIZE_T)(av->max_total_mem) >=
         (CHUNK_SIZE_T)(av->mmapped_mem) + (CHUNK_SIZE_T)(av->sbrked_mem));
}
#endif
/* ----------- Routines dealing with system allocation -------------- */

/*
  sysmalloc handles malloc cases requiring more memory from the system.
  On entry, it is assumed that av->top does not have enough
  space to service request for nb bytes, thus requiring that av->top
  be extended or replaced.
*/
#if __STD_C
static Void_t* sYSMALLOc(INTERNAL_SIZE_T nb, mstate av)
#else
static Void_t* sYSMALLOc(nb, av) INTERNAL_SIZE_T nb; mstate av;
#endif
{
  mchunkptr       old_top;        /* incoming value of av->top */
  INTERNAL_SIZE_T old_size;       /* its size */
  char*           old_end;        /* its end address */

  long            size;           /* arg to first MORECORE or mmap call */
  char*           brk;            /* return value from MORECORE */

  long            correction;     /* arg to 2nd MORECORE call */
  char*           snd_brk;        /* 2nd return val */

  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
  INTERNAL_SIZE_T end_misalign;   /* partial page left at end of new space */
  char*           aligned_brk;    /* aligned offset into brk */

  mchunkptr       p;              /* the allocated/returned chunk */
  mchunkptr       remainder;      /* remainder from allocation */
  CHUNK_SIZE_T    remainder_size; /* its size */

  CHUNK_SIZE_T    sum;            /* for updating stats */

  size_t          pagemask  = av->pagesize - 1;
  /*
    If there is space available in fastbins, consolidate and retry
    malloc from scratch rather than getting memory from system.  This
    can occur only if nb is in smallbin range so we didn't consolidate
    upon entry to malloc. It is much easier to handle this case here
    than in malloc proper.
  */

  if (have_fastchunks(av)) {
    assert(in_smallbin_range(nb));
    malloc_consolidate(av);
    return mALLOc(nb - MALLOC_ALIGN_MASK);
  }
#if HAVE_MMAP

  /*
    If have mmap, and the request size meets the mmap threshold, and
    the system supports mmap, and there are few enough currently
    allocated mmapped regions, try to directly map this request
    rather than expanding top.
  */

  if ((CHUNK_SIZE_T)(nb) >= (CHUNK_SIZE_T)(av->mmap_threshold) &&
      (av->n_mmaps < av->n_mmaps_max)) {

    char* mm;             /* return value from mmap call*/

    /*
      Round up size to nearest page.  For mmapped chunks, the overhead
      is one SIZE_SZ unit larger than for normal chunks, because there
      is no following chunk whose prev_size field could be used.
    */
    size = (nb + SIZE_SZ + MALLOC_ALIGN_MASK + pagemask) & ~pagemask;

    /* Don't try if size wraps around 0 */
    if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb)) {

      mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));

      if (mm != (char*)(MORECORE_FAILURE)) {

        /*
          The offset to the start of the mmapped region is stored
          in the prev_size field of the chunk. This allows us to adjust
          returned start address to meet alignment requirements here
          and in memalign(), and still be able to compute proper
          address argument for later munmap in free() and realloc().
        */

        front_misalign = (INTERNAL_SIZE_T)chunk2mem(mm) & MALLOC_ALIGN_MASK;
        if (front_misalign > 0) {
          correction = MALLOC_ALIGNMENT - front_misalign;
          p = (mchunkptr)(mm + correction);
          p->prev_size = correction;
          set_head(p, (size - correction) |IS_MMAPPED);
        }
        else {
          p = (mchunkptr)mm;
          set_head(p, size|IS_MMAPPED);
        }

        /* update statistics */

        if (++av->n_mmaps > av->max_n_mmaps)
          av->max_n_mmaps = av->n_mmaps;

        sum = av->mmapped_mem += size;
        if (sum > (CHUNK_SIZE_T)(av->max_mmapped_mem))
          av->max_mmapped_mem = sum;
        sum += av->sbrked_mem;
        if (sum > (CHUNK_SIZE_T)(av->max_total_mem))
          av->max_total_mem = sum;

        check_chunk(p);

        return chunk2mem(p);
      }
    }
  }
#endif
  /* Record incoming configuration of top */

  old_top  = av->top;
  old_size = chunksize(old_top);
  old_end  = (char*)(chunk_at_offset(old_top, old_size));

  brk = snd_brk = (char*)(MORECORE_FAILURE);

  /*
     If not the first time through, we require old_size to be
     at least MINSIZE and to have prev_inuse set.
  */

  assert((old_top == initial_top(av) && old_size == 0) ||
         ((CHUNK_SIZE_T) (old_size) >= MINSIZE &&
          prev_inuse(old_top)));

  /* Precondition: not enough current space to satisfy nb request */
  assert((CHUNK_SIZE_T)(old_size) < (CHUNK_SIZE_T)(nb + MINSIZE));

  /* Precondition: all fastbins are consolidated */
  assert(!have_fastchunks(av));
  /* Request enough space for nb + pad + overhead */

  size = nb + av->top_pad + MINSIZE;

  /*
    If contiguous, we can subtract out existing space that we hope to
    combine with new space. We add it back later only if
    we don't actually get contiguous space.
  */

  if (contiguous(av))
    size -= old_size;

  /*
    Round to a multiple of page size.
    If MORECORE is not contiguous, this ensures that we only call it
    with whole-page arguments.  And if MORECORE is contiguous and
    this is not first time through, this preserves page-alignment of
    previous calls. Otherwise, we correct to page-align below.
  */

  size = (size + pagemask) & ~pagemask;

  /*
    Don't try to call MORECORE if argument is so big as to appear
    negative. Note that since mmap takes size_t arg, it may succeed
    below even if we cannot call MORECORE.
  */

  if (size > 0)
    brk = (char*)(MORECORE(size));
  /*
    If have mmap, try using it as a backup when MORECORE fails or
    cannot be used. This is worth doing on systems that have "holes" in
    address space, so sbrk cannot extend to give contiguous space, but
    space is available elsewhere.  Note that we ignore mmap max count
    and threshold limits, since the space will not be used as a
    segregated mmap region.
  */

#if HAVE_MMAP
  if (brk == (char*)(MORECORE_FAILURE)) {

    /* Cannot merge with old top, so add its size back in */
    if (contiguous(av))
      size = (size + old_size + pagemask) & ~pagemask;

    /* If we are relying on mmap as backup, then use larger units */
    if ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(MMAP_AS_MORECORE_SIZE))
      size = MMAP_AS_MORECORE_SIZE;

    /* Don't try if size wraps around 0 */
    if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb)) {

      brk = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));

      if (brk != (char*)(MORECORE_FAILURE)) {

        /* We do not need, and cannot use, another sbrk call to find end */
        snd_brk = brk + size;

        /*
           Record that we no longer have a contiguous sbrk region.
           After the first time mmap is used as backup, we do not
           ever rely on contiguous space since this could incorrectly
           bridge regions.
        */
        set_noncontiguous(av);
      }
    }
  }
#endif
  if (brk != (char*)(MORECORE_FAILURE)) {
    av->sbrked_mem += size;

    /*
      If MORECORE extends previous space, we can likewise extend top size.
    */

    if (brk == old_end && snd_brk == (char*)(MORECORE_FAILURE)) {
      set_head(old_top, (size + old_size) | PREV_INUSE);
    }

    /*
      Otherwise, make adjustments:

      * If the first time through or noncontiguous, we need to call sbrk
        just to find out where the end of memory lies.

      * We need to ensure that all returned chunks from malloc will meet
        MALLOC_ALIGNMENT

      * If there was an intervening foreign sbrk, we need to adjust sbrk
        request size to account for fact that we will not be able to
        combine new space with existing space in old_top.

      * Almost all systems internally allocate whole pages at a time, in
        which case we might as well use the whole last page of request.
        So we allocate enough more memory to hit a page boundary now,
        which in turn causes future contiguous calls to page-align.
    */

    else {
      front_misalign = 0;
      end_misalign = 0;
      correction = 0;
      aligned_brk = brk;

      /*
        If MORECORE returns an address lower than we have seen before,
        we know it isn't really contiguous.  This and some subsequent
        checks help cope with non-conforming MORECORE functions and
        the presence of "foreign" calls to MORECORE from outside of
        malloc or by other threads.  We cannot guarantee to detect
        these in all cases, but cope with the ones we do detect.
      */
      if (contiguous(av) && old_size != 0 && brk < old_end) {
        set_noncontiguous(av);
      }

      /* handle contiguous cases */
      if (contiguous(av)) {

        /*
           We can tolerate forward non-contiguities here (usually due
           to foreign calls) but treat them as part of our space for
           stats reporting.
        */
        if (old_size != 0)
          av->sbrked_mem += brk - old_end;

        /* Guarantee alignment of first new chunk made from this space */

        front_misalign = (INTERNAL_SIZE_T)chunk2mem(brk) & MALLOC_ALIGN_MASK;
        if (front_misalign > 0) {

          /*
            Skip over some bytes to arrive at an aligned position.
            We don't need to specially mark these wasted front bytes.
            They will never be accessed anyway because
            prev_inuse of av->top (and any chunk created from its start)
            is always true after initialization.
          */

          correction = MALLOC_ALIGNMENT - front_misalign;
          aligned_brk += correction;
        }

        /*
          If this isn't adjacent to existing space, then we will not
          be able to merge with old_top space, so must add to 2nd request.
        */

        correction += old_size;

        /* Extend the end address to hit a page boundary */
        end_misalign = (INTERNAL_SIZE_T)(brk + size + correction);
        correction += ((end_misalign + pagemask) & ~pagemask) - end_misalign;

        assert(correction >= 0);
        snd_brk = (char*)(MORECORE(correction));

        if (snd_brk == (char*)(MORECORE_FAILURE)) {
          /*
            If can't allocate correction, try to at least find out current
            brk.  It might be enough to proceed without failing.
          */
          correction = 0;
          snd_brk = (char*)(MORECORE(0));
        }
        else if (snd_brk < brk) {
          /*
            If the second call gives noncontiguous space even though
            it says it won't, the only course of action is to ignore
            results of second call, and conservatively estimate where
            the first call left us. Also set noncontiguous, so this
            won't happen again, leaving at most one hole.

            Note that this check is intrinsically incomplete.  Because
            MORECORE is allowed to give more space than we ask for,
            there is no reliable way to detect a noncontiguity
            producing a forward gap for the second call.
          */
          snd_brk = brk + size;
          correction = 0;
          set_noncontiguous(av);
        }
      }
      /* handle non-contiguous cases */
      else {
        /* MORECORE/mmap must correctly align */
        assert(aligned_OK(chunk2mem(brk)));

        /* Find out current end of memory */
        if (snd_brk == (char*)(MORECORE_FAILURE)) {
          snd_brk = (char*)(MORECORE(0));
          av->sbrked_mem += snd_brk - brk - size;
        }
      }

      /* Adjust top based on results of second sbrk */
      if (snd_brk != (char*)(MORECORE_FAILURE)) {
        av->top = (mchunkptr)aligned_brk;
        set_head(av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);
        av->sbrked_mem += correction;
        /*
          If not the first time through, we either have a
          gap due to foreign sbrk or a non-contiguous region.  Insert a
          double fencepost at old_top to prevent consolidation with space
          we don't own. These fenceposts are artificial chunks that are
          marked as inuse and are in any case too small to use.  We need
          two to make sizes and alignments work out.
        */

        if (old_size != 0) {
          /*
             Shrink old_top to insert fenceposts, keeping size a
             multiple of MALLOC_ALIGNMENT. We know there is at least
             enough space in old_top to do this.
          */
          old_size = (old_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
          set_head(old_top, old_size | PREV_INUSE);

          /*
            Note that the following assignments completely overwrite
            old_top when old_size was previously MINSIZE.  This is
            intentional. We need the fencepost, even if old_top otherwise gets
            lost.
          */
          chunk_at_offset(old_top, old_size          )->size =
            SIZE_SZ|PREV_INUSE;

          chunk_at_offset(old_top, old_size + SIZE_SZ)->size =
            SIZE_SZ|PREV_INUSE;

          /*
             If possible, release the rest, suppressing trimming.
          */
          if (old_size >= MINSIZE) {
            INTERNAL_SIZE_T tt = av->trim_threshold;
            av->trim_threshold = (INTERNAL_SIZE_T)(-1);
            fREe(chunk2mem(old_top));
            av->trim_threshold = tt;
          }
        }
      }
    }
    /* Update statistics */
    sum = av->sbrked_mem;
    if (sum > (CHUNK_SIZE_T)(av->max_sbrked_mem))
      av->max_sbrked_mem = sum;

    sum += av->mmapped_mem;
    if (sum > (CHUNK_SIZE_T)(av->max_total_mem))
      av->max_total_mem = sum;

    check_malloc_state();
    /* finally, do the allocation */
    p = av->top;
    size = chunksize(p);

    /* check that one of the above allocation paths succeeded */
    if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb + MINSIZE)) {
      remainder_size = size - nb;
      remainder = chunk_at_offset(p, nb);
      av->top = remainder;
      set_head(p, nb | PREV_INUSE);
      set_head(remainder, remainder_size | PREV_INUSE);
      check_malloced_chunk(p, nb);
      return chunk2mem(p);
    }
  }

  /* catch all failure paths */
  MALLOC_FAILURE_ACTION;
  return 0;
}
/*
  sYSTRIm is an inverse of sorts to sYSMALLOc.  It gives memory back
  to the system (via negative arguments to sbrk) if there is unused
  memory at the `high' end of the malloc pool. It is called
  automatically by free() when top space exceeds the trim
  threshold. It is also called by the public malloc_trim routine.  It
  returns 1 if it actually released any memory, else 0.
*/

#if __STD_C
static int sYSTRIm(size_t pad, mstate av)
#else
static int sYSTRIm(pad, av) size_t pad; mstate av;
#endif
{
  long  top_size;        /* Amount of top-most memory */
  long  extra;           /* Amount to release */
  long  released;        /* Amount actually released */
  char* current_brk;     /* address returned by pre-check sbrk call */
  char* new_brk;         /* address returned by post-check sbrk call */
  size_t pagesz;

  pagesz = av->pagesize;
  top_size = chunksize(av->top);

  /* Release in pagesize units, keeping at least one page */
  extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;

  if (extra > 0) {

    /*
      Only proceed if end of memory is where we last set it.
      This avoids problems if there were foreign sbrk calls.
    */
    current_brk = (char*)(MORECORE(0));
    if (current_brk == (char*)(av->top) + top_size) {

      /*
        Attempt to release memory. We ignore MORECORE return value,
        and instead call again to find out where new end of memory is.
        This avoids problems if first call releases less than we asked,
        or if failure somehow altered brk value. (We could still
        encounter problems if it altered brk in some very bad way,
        but the only thing we can do is adjust anyway, which will cause
        some downstream failure.)
      */

      MORECORE(-extra);
      new_brk = (char*)(MORECORE(0));

      if (new_brk != (char*)MORECORE_FAILURE) {
        released = (long)(current_brk - new_brk);

        if (released != 0) {
          /* Success. Adjust top. */
          av->sbrked_mem -= released;
          set_head(av->top, (top_size - released) | PREV_INUSE);
          check_malloc_state();
          return 1;
        }
      }
    }
  }
  return 0;
}
/*
  ------------------------------ malloc ------------------------------
*/

#if __STD_C
Void_t* mALLOc(size_t bytes)
#else
Void_t* mALLOc(bytes) size_t bytes;
#endif
{
  mstate av = get_malloc_state();

  INTERNAL_SIZE_T nb;               /* normalized request size */
  unsigned int    idx;              /* associated bin index */
  mbinptr         bin;              /* associated bin */
  mfastbinptr*    fb;               /* associated fastbin */

  mchunkptr       victim;           /* inspected/selected chunk */
  INTERNAL_SIZE_T size;             /* its size */
  int             victim_index;     /* its bin index */

  mchunkptr       remainder;        /* remainder from a split */
  CHUNK_SIZE_T    remainder_size;   /* its size */

  unsigned int    block;            /* bit map traverser */
  unsigned int    bit;              /* bit map traverser */
  unsigned int    map;              /* current word of binmap */

  mchunkptr       fwd;              /* misc temp for linking */
  mchunkptr       bck;              /* misc temp for linking */
  /*
    Convert request size to internal form by adding SIZE_SZ bytes
    overhead plus possibly more to obtain necessary alignment and/or
    to obtain a size of at least MINSIZE, the smallest allocatable
    size. Also, checked_request2size traps (returning 0) request sizes
    that are so large that they wrap around zero when padded and
    aligned.
  */

  checked_request2size(bytes, nb);

  /*
    Bypass search if no frees yet
  */

  if (!have_anychunks(av)) {
    if (av->max_fast == 0) /* initialization check */
      malloc_consolidate(av);
    goto use_top;
  }
  /*
    If the size qualifies as a fastbin, first check corresponding bin.
  */

  if ((CHUNK_SIZE_T)(nb) <= (CHUNK_SIZE_T)(av->max_fast)) {
    fb = &(av->fastbins[(fastbin_index(nb))]);
    if ( (victim = *fb) != 0) {
      *fb = victim->fd;
      check_remalloced_chunk(victim, nb);
      return chunk2mem(victim);
    }
  }
  /*
    If a small request, check regular bin.  Since these "smallbins"
    hold one size each, no searching within bins is necessary.
    (For a large request, we need to wait until unsorted chunks are
    processed to find best fit. But for small ones, fits are exact
    anyway, so we can check now, which is faster.)
  */

  if (in_smallbin_range(nb)) {
    idx = smallbin_index(nb);
    bin = bin_at(av,idx);

    if ( (victim = last(bin)) != bin) {
      bck = victim->bk;
      set_inuse_bit_at_offset(victim, nb);
      bin->bk = bck;
      bck->fd = bin;

      check_malloced_chunk(victim, nb);
      return chunk2mem(victim);
    }
  }
  /*
    If this is a large request, consolidate fastbins before continuing.
    While it might look excessive to kill all fastbins before
    even seeing if there is space available, this avoids
    fragmentation problems normally associated with fastbins.
    Also, in practice, programs tend to have runs of either small or
    large requests, but less often mixtures, so consolidation is not
    invoked all that often in most programs. And the programs that
    it is called frequently in otherwise tend to fragment.
  */

  else {
    idx = largebin_index(nb);
    if (have_fastchunks(av))
      malloc_consolidate(av);
  }
  /*
    Process recently freed or remaindered chunks, taking one only if
    it is exact fit, or, if this a small request, the chunk is remainder from
    the most recent non-exact fit.  Place other traversed chunks in
    bins.  Note that this step is the only place in any routine where
    chunks are placed in bins.
  */

  while ( (victim = unsorted_chunks(av)->bk) != unsorted_chunks(av)) {
    bck = victim->bk;
    size = chunksize(victim);

    /*
       If a small request, try to use last remainder if it is the
       only chunk in unsorted bin.  This helps promote locality for
       runs of consecutive small requests. This is the only
       exception to best-fit, and applies only when there is
       no exact fit for a small chunk.
    */

    if (in_smallbin_range(nb) &&
        bck == unsorted_chunks(av) &&
        victim == av->last_remainder &&
        (CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb + MINSIZE)) {

      /* split and reattach remainder */
      remainder_size = size - nb;
      remainder = chunk_at_offset(victim, nb);
      unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
      av->last_remainder = remainder;
      remainder->bk = remainder->fd = unsorted_chunks(av);

      set_head(victim, nb | PREV_INUSE);
      set_head(remainder, remainder_size | PREV_INUSE);
      set_foot(remainder, remainder_size);

      check_malloced_chunk(victim, nb);
      return chunk2mem(victim);
    }

    /* remove from unsorted list */
    unsorted_chunks(av)->bk = bck;
    bck->fd = unsorted_chunks(av);

    /* Take now instead of binning if exact fit */

    if (size == nb) {
      set_inuse_bit_at_offset(victim, size);
      check_malloced_chunk(victim, nb);
      return chunk2mem(victim);
    }

    /* place chunk in bin */

    if (in_smallbin_range(size)) {
      victim_index = smallbin_index(size);
      bck = bin_at(av, victim_index);
      fwd = bck->fd;
    }
    else {
      victim_index = largebin_index(size);
      bck = bin_at(av, victim_index);
      fwd = bck->fd;

      if (fwd != bck) {
        /* if smaller than smallest, place first */
        if ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(bck->bk->size)) {
          fwd = bck;
          bck = bck->bk;
        }
        else if ((CHUNK_SIZE_T)(size) >=
                 (CHUNK_SIZE_T)(FIRST_SORTED_BIN_SIZE)) {

          /* maintain large bins in sorted order */
          size |= PREV_INUSE; /* Or with inuse bit to speed comparisons */
          while ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(fwd->size))
            fwd = fwd->fd;
          bck = fwd->bk;
        }
      }
    }

    mark_bin(av, victim_index);
    victim->bk = bck;
    victim->fd = fwd;
    fwd->bk = victim;
    bck->fd = victim;
  }
  /*
    If a large request, scan through the chunks of current bin to
    find one that fits.  (This will be the smallest that fits unless
    FIRST_SORTED_BIN_SIZE has been changed from default.)  This is
    the only step where an unbounded number of chunks might be
    scanned without doing anything useful with them. However the
    lists tend to be short.
  */

  if (!in_smallbin_range(nb)) {
    bin = bin_at(av, idx);

    for (victim = last(bin); victim != bin; victim = victim->bk) {
      size = chunksize(victim);

      if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb)) {
        remainder_size = size - nb;
        unlink(victim, bck, fwd);

        /* Exhaust */
        if (remainder_size < MINSIZE)  {
          set_inuse_bit_at_offset(victim, size);
          check_malloced_chunk(victim, nb);
          return chunk2mem(victim);
        }
        /* Split */
        else {
          remainder = chunk_at_offset(victim, nb);
          unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
          remainder->bk = remainder->fd = unsorted_chunks(av);
          set_head(victim, nb | PREV_INUSE);
          set_head(remainder, remainder_size | PREV_INUSE);
          set_foot(remainder, remainder_size);
          check_malloced_chunk(victim, nb);
          return chunk2mem(victim);
        }
      }
    }
  }
  /*
    Search for a chunk by scanning bins, starting with next largest
    bin. This search is strictly by best-fit; i.e., the smallest
    (with ties going to approximately the least recently used) chunk
    that fits is selected.

    The bitmap avoids needing to check that most blocks are nonempty.
  */

  ++idx;
  bin = bin_at(av,idx);
  block = idx2block(idx);
  map = av->binmap[block];
  bit = idx2bit(idx);

  for (;;) {

    /* Skip rest of block if there are no more set bits in this block.  */
    if (bit > map || bit == 0) {
      do {
        if (++block >= BINMAPSIZE)  /* out of bins */
          goto use_top;
      } while ( (map = av->binmap[block]) == 0);

      bin = bin_at(av, (block << BINMAPSHIFT));
      bit = 1;
    }

    /* Advance to bin with set bit. There must be one. */
    while ((bit & map) == 0) {
      bin = next_bin(bin);
      bit <<= 1;
      assert(bit != 0);
    }

    /* Inspect the bin. It is likely to be non-empty */
    victim = last(bin);

    /*  If a false alarm (empty bin), clear the bit. */
    if (victim == bin) {
      av->binmap[block] = map &= ~bit; /* Write through */
      bin = next_bin(bin);
      bit <<= 1;
    }

    else {
      size = chunksize(victim);

      /*  We know the first chunk in this bin is big enough to use. */
      assert((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb));

      remainder_size = size - nb;

      /* unlink */
      bck = victim->bk;
      bin->bk = bck;
      bck->fd = bin;

      /* Exhaust */
      if (remainder_size < MINSIZE) {
        set_inuse_bit_at_offset(victim, size);
        check_malloced_chunk(victim, nb);
        return chunk2mem(victim);
      }

      /* Split */
      else {
        remainder = chunk_at_offset(victim, nb);

        unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
        remainder->bk = remainder->fd = unsorted_chunks(av);
        /* advertise as last remainder */
        if (in_smallbin_range(nb))
          av->last_remainder = remainder;

        set_head(victim, nb | PREV_INUSE);
        set_head(remainder, remainder_size | PREV_INUSE);
        set_foot(remainder, remainder_size);
        check_malloced_chunk(victim, nb);
        return chunk2mem(victim);
      }
    }
  }
  use_top:
  /*
    If large enough, split off the chunk bordering the end of memory
    (held in av->top). Note that this is in accord with the best-fit
    search rule.  In effect, av->top is treated as larger (and thus
    less well fitting) than any other available chunk since it can
    be extended to be as large as necessary (up to system
    limitations).

    We require that av->top always exists (i.e., has size >=
    MINSIZE) after initialization, so if it would otherwise be
    exhausted by current request, it is replenished. (The main
    reason for ensuring it exists is that we may need MINSIZE space
    to put in fenceposts in sysmalloc.)
  */

  victim = av->top;
  size = chunksize(victim);

  if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb + MINSIZE)) {
    remainder_size = size - nb;
    remainder = chunk_at_offset(victim, nb);
    av->top = remainder;
    set_head(victim, nb | PREV_INUSE);
    set_head(remainder, remainder_size | PREV_INUSE);

    check_malloced_chunk(victim, nb);
    return chunk2mem(victim);
  }

  /*
     If no space in top, relay to handle system-dependent cases
  */
  return sYSMALLOc(nb, av);
}
/*
  ------------------------------ free ------------------------------
*/

#if __STD_C
void fREe(Void_t* mem)
#else
void fREe(mem) Void_t* mem;
#endif
{
  mstate av = get_malloc_state();

  mchunkptr       p;           /* chunk corresponding to mem */
  INTERNAL_SIZE_T size;        /* its size */
  mfastbinptr*    fb;          /* associated fastbin */
  mchunkptr       nextchunk;   /* next contiguous chunk */
  INTERNAL_SIZE_T nextsize;    /* its size */
  int             nextinuse;   /* true if nextchunk is used */
  INTERNAL_SIZE_T prevsize;    /* size of previous contiguous chunk */
  mchunkptr       bck;         /* misc temp for linking */
  mchunkptr       fwd;         /* misc temp for linking */
  /* free(0) has no effect */
  if (mem != 0) {
    p = mem2chunk(mem);
    size = chunksize(p);

    check_inuse_chunk(p);

    /*
      If eligible, place chunk on a fastbin so it can be found
      and used quickly in malloc.
    */

    if ((CHUNK_SIZE_T)(size) <= (CHUNK_SIZE_T)(av->max_fast)

#if TRIM_FASTBINS
        /*
           If TRIM_FASTBINS set, don't place chunks
           bordering top into fastbins
        */
        && (chunk_at_offset(p, size) != av->top)
#endif
        ) {

      set_fastchunks(av);
      fb = &(av->fastbins[fastbin_index(size)]);
      p->fd = *fb;
      *fb = p;
    }
    /*
       Consolidate other non-mmapped chunks as they arrive.
    */

    else if (!chunk_is_mmapped(p)) {
      set_anychunks(av);

      nextchunk = chunk_at_offset(p, size);
      nextsize = chunksize(nextchunk);

      /* consolidate backward */
      if (!prev_inuse(p)) {
        prevsize = p->prev_size;
        size += prevsize;
        p = chunk_at_offset(p, -((long) prevsize));
        unlink(p, bck, fwd);
      }

      if (nextchunk != av->top) {
        /* get and clear inuse bit */
        nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
        set_head(nextchunk, nextsize);

        /* consolidate forward */
        if (!nextinuse) {
          unlink(nextchunk, bck, fwd);
          size += nextsize;
        }

        /*
          Place the chunk in unsorted chunk list. Chunks are
          not placed into regular bins until after they have
          been given one chance to be used in malloc.
        */

        bck = unsorted_chunks(av);
        fwd = bck->fd;
        p->bk = bck;
        p->fd = fwd;
        bck->fd = p;
        fwd->bk = p;

        set_head(p, size | PREV_INUSE);
        set_foot(p, size);

        check_free_chunk(p);
      }

      /*
         If the chunk borders the current high end of memory,
         consolidate into top
      */

      else {
        size += nextsize;
        set_head(p, size | PREV_INUSE);
        av->top = p;
        check_chunk(p);
      }
      /*
        If freeing a large space, consolidate possibly-surrounding
        chunks. Then, if the total unused topmost memory exceeds trim
        threshold, ask malloc_trim to reduce top.

        Unless max_fast is 0, we don't know if there are fastbins
        bordering top, so we cannot tell for sure whether threshold
        has been reached unless fastbins are consolidated.  But we
        don't want to consolidate on each free.  As a compromise,
        consolidation is performed if FASTBIN_CONSOLIDATION_THRESHOLD
        is reached.
      */

      if ((CHUNK_SIZE_T)(size) >= FASTBIN_CONSOLIDATION_THRESHOLD) {
        if (have_fastchunks(av))
          malloc_consolidate(av);

#ifndef MORECORE_CANNOT_TRIM
        if ((CHUNK_SIZE_T)(chunksize(av->top)) >=
            (CHUNK_SIZE_T)(av->trim_threshold))
          sYSTRIm(av->top_pad, av);
#endif
      }
    }
    /*
      If the chunk was allocated via mmap, release via munmap()
      Note that if HAVE_MMAP is false but chunk_is_mmapped is
      true, then user must have overwritten memory. There's nothing
      we can do to catch this error unless DEBUG is set, in which case
      check_inuse_chunk (above) will have triggered error.
    */

    else {
#if HAVE_MMAP
      int ret;
      INTERNAL_SIZE_T offset = p->prev_size;
      av->n_mmaps--;
      av->mmapped_mem -= (size + offset);
      ret = munmap((char*)p - offset, size + offset);
      /* munmap returns non-zero on failure */
      assert(ret == 0);
#endif
    }
  }
}
/*
  ------------------------- malloc_consolidate -------------------------

  malloc_consolidate is a specialized version of free() that tears
  down chunks held in fastbins.  Free itself cannot be used for this
  purpose since, among other things, it might place chunks back onto
  fastbins.  So, instead, we need to use a minor variant of the same
  code.

  Also, because this routine needs to be called the first time through
  malloc anyway, it turns out to be the perfect place to trigger
  initialization code.
*/

#if __STD_C
static void malloc_consolidate(mstate av)
#else
static void malloc_consolidate(av) mstate av;
#endif
{
  mfastbinptr*    fb;             /* current fastbin being consolidated */
  mfastbinptr*    maxfb;          /* last fastbin (for loop control) */
  mchunkptr       p;              /* current chunk being consolidated */
  mchunkptr       nextp;          /* next chunk to consolidate */
  mchunkptr       unsorted_bin;   /* bin header */
  mchunkptr       first_unsorted; /* chunk to link to */

  /* These have same use as in free() */
  mchunkptr       nextchunk;
  INTERNAL_SIZE_T size;
  INTERNAL_SIZE_T nextsize;
  INTERNAL_SIZE_T prevsize;
  int             nextinuse;
  mchunkptr       bck;
  mchunkptr       fwd;

  /*
    If max_fast is 0, we know that av hasn't
    yet been initialized, in which case do so below
  */

  if (av->max_fast != 0) {
    clear_fastchunks(av);

    unsorted_bin = unsorted_chunks(av);

    /*
      Remove each chunk from fast bin and consolidate it, placing it
      then in unsorted bin. Among other reasons for doing this,
      placing in unsorted bin avoids needing to calculate actual bins
      until malloc is sure that chunks aren't immediately going to be
      reused anyway.
    */

    maxfb = &(av->fastbins[fastbin_index(av->max_fast)]);
    fb = &(av->fastbins[0]);
    do {
      if ( (p = *fb) != 0) {
        *fb = 0;

        do {
          check_inuse_chunk(p);
          nextp = p->fd;

          /* Slightly streamlined version of consolidation code in free() */
          size = p->size & ~PREV_INUSE;
          nextchunk = chunk_at_offset(p, size);
          nextsize = chunksize(nextchunk);

          if (!prev_inuse(p)) {
            prevsize = p->prev_size;
            size += prevsize;
            p = chunk_at_offset(p, -((long) prevsize));
            unlink(p, bck, fwd);
          }

          if (nextchunk != av->top) {
            nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
            set_head(nextchunk, nextsize);

            if (!nextinuse) {
              size += nextsize;
              unlink(nextchunk, bck, fwd);
            }

            first_unsorted = unsorted_bin->fd;
            unsorted_bin->fd = p;
            first_unsorted->bk = p;

            set_head(p, size | PREV_INUSE);
            p->bk = unsorted_bin;
            p->fd = first_unsorted;
            set_foot(p, size);
          }

          else {
            size += nextsize;
            set_head(p, size | PREV_INUSE);
            av->top = p;
          }

        } while ( (p = nextp) != 0);

      }
    } while (fb++ != maxfb);
  }
  else {
    malloc_init_state(av);
    check_malloc_state();
  }
}
/*
  ------------------------------ realloc ------------------------------
*/

#if __STD_C
Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
#else
Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
#endif
{
  mstate av = get_malloc_state();

  INTERNAL_SIZE_T  nb;              /* padded request size */

  mchunkptr        oldp;            /* chunk corresponding to oldmem */
  INTERNAL_SIZE_T  oldsize;         /* its size */

  mchunkptr        newp;            /* chunk to return */
  INTERNAL_SIZE_T  newsize;         /* its size */
  Void_t*          newmem;          /* corresponding user mem */

  mchunkptr        next;            /* next contiguous chunk after oldp */

  mchunkptr        remainder;       /* extra space at end of newp */
  CHUNK_SIZE_T     remainder_size;  /* its size */

  mchunkptr        bck;             /* misc temp for linking */
  mchunkptr        fwd;             /* misc temp for linking */

  CHUNK_SIZE_T     copysize;        /* bytes to copy */
  unsigned int     ncopies;         /* INTERNAL_SIZE_T words to copy */
  INTERNAL_SIZE_T* s;               /* copy source */
  INTERNAL_SIZE_T* d;               /* copy destination */


#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    fREe(oldmem);
    return 0;
  }
#endif

  /* realloc of null is supposed to be same as malloc */
  if (oldmem == 0) return mALLOc(bytes);

  checked_request2size(bytes, nb);

  oldp    = mem2chunk(oldmem);
  oldsize = chunksize(oldp);

  check_inuse_chunk(oldp);

  if (!chunk_is_mmapped(oldp)) {

    if ((CHUNK_SIZE_T)(oldsize) >= (CHUNK_SIZE_T)(nb)) {
      /* already big enough; split below */
      newp = oldp;
      newsize = oldsize;
    }

    else {
      next = chunk_at_offset(oldp, oldsize);

      /* Try to expand forward into top */
      if (next == av->top &&
          (CHUNK_SIZE_T)(newsize = oldsize + chunksize(next)) >=
          (CHUNK_SIZE_T)(nb + MINSIZE)) {
        set_head_size(oldp, nb);
        av->top = chunk_at_offset(oldp, nb);
        set_head(av->top, (newsize - nb) | PREV_INUSE);
        return chunk2mem(oldp);
      }

      /* Try to expand forward into next chunk; split off remainder below */
      else if (next != av->top &&
               !inuse(next) &&
               (CHUNK_SIZE_T)(newsize = oldsize + chunksize(next)) >=
               (CHUNK_SIZE_T)(nb)) {
        newp = oldp;
        unlink(next, bck, fwd);
      }

      /* allocate, copy, free */
      else {
        newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
        if (newmem == 0)
          return 0; /* propagate failure */

        newp = mem2chunk(newmem);
        newsize = chunksize(newp);

        /*
          Avoid copy if newp is next chunk after oldp.
        */
        if (newp == next) {
          newsize += oldsize;
          newp = oldp;
        }
        else {
          /*
            Unroll copy of <= 36 bytes (72 if 8byte sizes)
            We know that contents have an odd number of
            INTERNAL_SIZE_T-sized words; minimally 3.
          */

          copysize = oldsize - SIZE_SZ;
          s = (INTERNAL_SIZE_T*)(oldmem);
          d = (INTERNAL_SIZE_T*)(newmem);
          ncopies = copysize / sizeof(INTERNAL_SIZE_T);
          assert(ncopies >= 3);

          if (ncopies > 9)
            MALLOC_COPY(d, s, copysize);

          else {
            *(d+0) = *(s+0);
            *(d+1) = *(s+1);
            *(d+2) = *(s+2);
            if (ncopies > 4) {
              *(d+3) = *(s+3);
              *(d+4) = *(s+4);
              if (ncopies > 6) {
                *(d+5) = *(s+5);
                *(d+6) = *(s+6);
                if (ncopies > 8) {
                  *(d+7) = *(s+7);
                  *(d+8) = *(s+8);
                }
              }
            }
          }

          fREe(oldmem);
          check_inuse_chunk(newp);
          return chunk2mem(newp);
        }
      }
    }

    /* If possible, free extra space in old or extended chunk */

    assert((CHUNK_SIZE_T)(newsize) >= (CHUNK_SIZE_T)(nb));

    remainder_size = newsize - nb;

    if (remainder_size < MINSIZE) { /* not enough extra to split off */
      set_head_size(newp, newsize);
      set_inuse_bit_at_offset(newp, newsize);
    }
    else { /* split remainder */
      remainder = chunk_at_offset(newp, nb);
      set_head_size(newp, nb);
      set_head(remainder, remainder_size | PREV_INUSE);
      /* Mark remainder as inuse so free() won't complain */
      set_inuse_bit_at_offset(remainder, remainder_size);
      fREe(chunk2mem(remainder));
    }

    check_inuse_chunk(newp);
    return chunk2mem(newp);
  }

  /*
    Handle mmap cases
  */

  else {
#if HAVE_MMAP

#if HAVE_MREMAP
    INTERNAL_SIZE_T offset = oldp->prev_size;
    size_t pagemask = av->pagesize - 1;
    char *cp;
    CHUNK_SIZE_T  sum;

    /* Note the extra SIZE_SZ overhead */
    newsize = (nb + offset + SIZE_SZ + pagemask) & ~pagemask;

    /* don't need to remap if still within same page */
    if (oldsize == newsize - offset)
      return oldmem;

    cp = (char*)mremap((char*)oldp - offset, oldsize + offset, newsize, 1);

    if (cp != (char*)MORECORE_FAILURE) {

      newp = (mchunkptr)(cp + offset);
      set_head(newp, (newsize - offset)|IS_MMAPPED);

      assert(aligned_OK(chunk2mem(newp)));
      assert((newp->prev_size == offset));

      /* update statistics */
      sum = av->mmapped_mem += newsize - oldsize;
      if (sum > (CHUNK_SIZE_T)(av->max_mmapped_mem))
        av->max_mmapped_mem = sum;
      sum += av->sbrked_mem;
      if (sum > (CHUNK_SIZE_T)(av->max_total_mem))
        av->max_total_mem = sum;

      return chunk2mem(newp);
    }
#endif

    /* Note the extra SIZE_SZ overhead. */
    if ((CHUNK_SIZE_T)(oldsize) >= (CHUNK_SIZE_T)(nb + SIZE_SZ))
      newmem = oldmem; /* do nothing */
    else {
      /* Must alloc, copy, free. */
      newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
      if (newmem != 0) {
        MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
        fREe(oldmem);
      }
    }
    return newmem;

#else
    /* If !HAVE_MMAP, but chunk_is_mmapped, user must have overwritten mem */
    check_malloc_state();
    MALLOC_FAILURE_ACTION;
    return 0;
#endif
  }
}
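/*
  Illustration (an editor's sketch, not part of the original sources),
  assuming the default un-prefixed public name realloc: contents are
  preserved up to the minimum of the old and new sizes, and the old
  pointer must not be used after a successful call.

      int* a = (int*) malloc(10 * sizeof(int));
      if (a != 0) {
        int* b = (int*) realloc(a, 100 * sizeof(int));
        if (b != 0)
          a = b;        // first 10 ints carried over; old pointer now dead
      }

  If REALLOC_ZERO_BYTES_FREES is defined, realloc(p, 0) frees p and
  returns 0; otherwise it behaves like a minimal-size allocation.
*/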
/*
  ------------------------------ memalign ------------------------------
*/

#if __STD_C
Void_t* mEMALIGn(size_t alignment, size_t bytes)
#else
Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
#endif
{
  INTERNAL_SIZE_T nb;             /* padded request size */
  char*           m;              /* memory returned by malloc call */
  mchunkptr       p;              /* corresponding chunk */
  char*           brk;            /* alignment point within p */
  mchunkptr       newp;           /* chunk to return */
  INTERNAL_SIZE_T newsize;        /* its size */
  INTERNAL_SIZE_T leadsize;       /* leading space before alignment point */
  mchunkptr       remainder;      /* spare room at end to split off */
  CHUNK_SIZE_T    remainder_size; /* its size */
  INTERNAL_SIZE_T size;

  /* If need less alignment than we give anyway, just relay to malloc */

  if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);

  /* Otherwise, ensure that it is at least a minimum chunk size */

  if (alignment <  MINSIZE) alignment = MINSIZE;

  /* Make sure alignment is power of 2 (in case MINSIZE is not). */
  if ((alignment & (alignment - 1)) != 0) {
    size_t a = MALLOC_ALIGNMENT * 2;
    while ((CHUNK_SIZE_T)a < (CHUNK_SIZE_T)alignment) a <<= 1;
    alignment = a;
  }

  checked_request2size(bytes, nb);

  /*
    Strategy: find a spot within that chunk that meets the alignment
    request, and then possibly free the leading and trailing space.
  */


  /* Call malloc with worst case padding to hit alignment. */

  m  = (char*)(mALLOc(nb + alignment + MINSIZE));

  if (m == 0) return 0; /* propagate failure */

  p = mem2chunk(m);

  if ((((PTR_UINT)(m)) % alignment) != 0) { /* misaligned */

    /*
      Find an aligned spot inside chunk.  Since we need to give back
      leading space in a chunk of at least MINSIZE, if the first
      calculation places us at a spot with less than MINSIZE leader,
      we can move to the next aligned spot -- we've allocated enough
      total room so that this is always possible.
    */

    brk = (char*)mem2chunk((PTR_UINT)(((PTR_UINT)(m + alignment - 1)) &
                           -((signed long) alignment)));
    if ((CHUNK_SIZE_T)(brk - (char*)(p)) < MINSIZE)
      brk += alignment;

    newp = (mchunkptr)brk;
    leadsize = brk - (char*)(p);
    newsize = chunksize(p) - leadsize;

    /* For mmapped chunks, just adjust offset */
    if (chunk_is_mmapped(p)) {
      newp->prev_size = p->prev_size + leadsize;
      set_head(newp, newsize|IS_MMAPPED);
      return chunk2mem(newp);
    }

    /* Otherwise, give back leader, use the rest */
    set_head(newp, newsize | PREV_INUSE);
    set_inuse_bit_at_offset(newp, newsize);
    set_head_size(p, leadsize);
    fREe(chunk2mem(p));
    p = newp;

    assert (newsize >= nb &&
            (((PTR_UINT)(chunk2mem(p))) % alignment) == 0);
  }

  /* Also give back spare room at the end */
  if (!chunk_is_mmapped(p)) {
    size = chunksize(p);
    if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb + MINSIZE)) {
      remainder_size = size - nb;
      remainder = chunk_at_offset(p, nb);
      set_head(remainder, remainder_size | PREV_INUSE);
      set_head_size(p, nb);
      fREe(chunk2mem(remainder));
    }
  }

  check_inuse_chunk(p);
  return chunk2mem(p);
}
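/*
  Illustrative use (an added sketch, not from the original file),
  assuming the default public name memalign: request a 64-byte-aligned
  block for, say, cache-line-sized records.

      void* p = memalign(64, 1000);
      if (p != 0)
        assert(((PTR_UINT) p) % 64 == 0);

  Per the code above, alignments at or below MALLOC_ALIGNMENT simply
  relay to malloc, and non-power-of-two alignments are rounded up to
  the next power of 2.
*/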
/*
  ------------------------------ calloc ------------------------------
*/

#if __STD_C
Void_t* cALLOc(size_t n_elements, size_t elem_size)
#else
Void_t* cALLOc(n_elements, elem_size) size_t n_elements; size_t elem_size;
#endif
{
  mchunkptr p;
  CHUNK_SIZE_T  clearsize;
  CHUNK_SIZE_T  nclears;
  INTERNAL_SIZE_T* d;

  Void_t* mem = mALLOc(n_elements * elem_size);

  if (mem != 0) {
    p = mem2chunk(mem);

    if (!chunk_is_mmapped(p))
    {
      /*
        Unroll clear of <= 36 bytes (72 if 8byte sizes)
        We know that contents have an odd number of
        INTERNAL_SIZE_T-sized words; minimally 3.
      */

      d = (INTERNAL_SIZE_T*)mem;
      clearsize = chunksize(p) - SIZE_SZ;
      nclears = clearsize / sizeof(INTERNAL_SIZE_T);
      assert(nclears >= 3);

      if (nclears > 9)
        MALLOC_ZERO(d, clearsize);

      else {
        *(d+0) = 0;
        *(d+1) = 0;
        *(d+2) = 0;
        if (nclears > 4) {
          *(d+3) = 0;
          *(d+4) = 0;
          if (nclears > 6) {
            *(d+5) = 0;
            *(d+6) = 0;
            if (nclears > 8) {
              *(d+7) = 0;
              *(d+8) = 0;
            }
          }
        }
      }
    }
#if ! MMAP_CLEARS
    else
    {
      d = (INTERNAL_SIZE_T*)mem;
      /*
        Note the additional SIZE_SZ
      */
      clearsize = chunksize(p) - 2*SIZE_SZ;
      MALLOC_ZERO(d, clearsize);
    }
#endif
  }
  return mem;
}
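/*
  Editor's note: as in most allocators of this vintage, the
  n_elements * elem_size multiplication above is not checked for
  overflow, so a wrapping product can yield a too-small allocation.
  Callers who cannot trust their inputs may want a guard like this
  hypothetical wrapper (not part of the original sources):

      void* checked_calloc(size_t n, size_t sz) {
        if (sz != 0 && n > ((size_t)-1) / sz)
          return 0;                  // product would overflow size_t
        return calloc(n, sz);
      }
*/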
/*
  ------------------------------ cfree ------------------------------
*/

#if __STD_C
void cFREe(Void_t *mem)
#else
void cFREe(mem) Void_t *mem;
#endif
{
  fREe(mem);
}
/*
  ------------------------- independent_calloc -------------------------
*/

#if __STD_C
Void_t** iCALLOc(size_t n_elements, size_t elem_size, Void_t* chunks[])
#else
Void_t** iCALLOc(n_elements, elem_size, chunks) size_t n_elements; size_t elem_size; Void_t* chunks[];
#endif
{
  size_t sz = elem_size; /* serves as 1-element array */
  /* opts arg of 3 means all elements are same size, and should be cleared */
  return iALLOc(n_elements, &sz, 3, chunks);
}
/*
  ------------------------- independent_comalloc -------------------------
*/

#if __STD_C
Void_t** iCOMALLOc(size_t n_elements, size_t sizes[], Void_t* chunks[])
#else
Void_t** iCOMALLOc(n_elements, sizes, chunks) size_t n_elements; size_t sizes[]; Void_t* chunks[];
#endif
{
  return iALLOc(n_elements, sizes, 0, chunks);
}
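/*
  Typical use (an added sketch, not from the original file), assuming the
  default public name independent_comalloc: allocate several related
  structures in one shot, with the element types below purely
  hypothetical. Each element may later be passed to free() individually.

      struct header { int n; };
      struct body   { double data[8]; };

      void make_pair(struct header** h, struct body** b) {
        size_t sizes[2] = { sizeof(struct header), sizeof(struct body) };
        void*  chunks[2];
        if (independent_comalloc(2, sizes, chunks) != 0) {
          *h = (struct header*) chunks[0];
          *b = (struct body*)   chunks[1];
        }
      }
*/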
/*
  ------------------------------ ialloc ------------------------------
  ialloc provides common support for independent_X routines, handling all of
  the combinations that can result.

  The opts arg has:
    bit 0 set if all elements are same size (using sizes[0])
    bit 1 set if elements should be zeroed
*/

#if __STD_C
static Void_t** iALLOc(size_t n_elements,
                       size_t* sizes,
                       int opts,
                       Void_t* chunks[])
#else
static Void_t** iALLOc(n_elements, sizes, opts, chunks) size_t n_elements; size_t* sizes; int opts; Void_t* chunks[];
#endif
{
  mstate av = get_malloc_state();
  INTERNAL_SIZE_T element_size;   /* chunksize of each element, if all same */
  INTERNAL_SIZE_T contents_size;  /* total size of elements */
  INTERNAL_SIZE_T array_size;     /* request size of pointer array */
  Void_t*         mem;            /* malloced aggregate space */
  mchunkptr       p;              /* corresponding chunk */
  INTERNAL_SIZE_T remainder_size; /* remaining bytes while splitting */
  Void_t**        marray;         /* either "chunks" or malloced ptr array */
  mchunkptr       array_chunk;    /* chunk for malloced ptr array */
  int             mmx;            /* to disable mmap */
  INTERNAL_SIZE_T size;
  size_t          i;

  /* Ensure initialization */
  if (av->max_fast == 0) malloc_consolidate(av);

  /* compute array length, if needed */
  if (chunks != 0) {
    if (n_elements == 0)
      return chunks; /* nothing to do */
    marray = chunks;
    array_size = 0;
  }
  else {
    /* if empty req, must still return chunk representing empty array */
    if (n_elements == 0)
      return (Void_t**) mALLOc(0);
    marray = 0;
    array_size = request2size(n_elements * (sizeof(Void_t*)));
  }

  /* compute total element size */
  if (opts & 0x1) { /* all-same-size */
    element_size = request2size(*sizes);
    contents_size = n_elements * element_size;
  }
  else { /* add up all the sizes */
    element_size = 0;
    contents_size = 0;
    for (i = 0; i != n_elements; ++i)
      contents_size += request2size(sizes[i]);
  }

  /* subtract out alignment bytes from total to minimize overallocation */
  size = contents_size + array_size - MALLOC_ALIGN_MASK;

  /*
    Allocate the aggregate chunk.
    But first disable mmap so malloc won't use it, since
    we would not be able to later free/realloc space internal
    to a segregated mmap region.
  */
  mmx = av->n_mmaps_max;   /* disable mmap */
  av->n_mmaps_max = 0;
  mem = mALLOc(size);
  av->n_mmaps_max = mmx;   /* reset mmap */
  if (mem == 0)
    return 0;

  p = mem2chunk(mem);
  assert(!chunk_is_mmapped(p));
  remainder_size = chunksize(p);

  if (opts & 0x2) {       /* optionally clear the elements */
    MALLOC_ZERO(mem, remainder_size - SIZE_SZ - array_size);
  }

  /* If not provided, allocate the pointer array as final part of chunk */
  if (marray == 0) {
    array_chunk = chunk_at_offset(p, contents_size);
    marray = (Void_t**) (chunk2mem(array_chunk));
    set_head(array_chunk, (remainder_size - contents_size) | PREV_INUSE);
    remainder_size = contents_size;
  }

  /* split out elements */
  for (i = 0; ; ++i) {
    marray[i] = chunk2mem(p);
    if (i != n_elements-1) {
      if (element_size != 0)
        size = element_size;
      else
        size = request2size(sizes[i]);
      remainder_size -= size;
      set_head(p, size | PREV_INUSE);
      p = chunk_at_offset(p, size);
    }
    else { /* the final element absorbs any overallocation slop */
      set_head(p, remainder_size | PREV_INUSE);
      break;
    }
  }

#if DEBUG
  if (marray != chunks) {
    /* final element must have exactly exhausted chunk */
    if (element_size != 0)
      assert(remainder_size == element_size);
    else
      assert(remainder_size == request2size(sizes[i]));
    check_inuse_chunk(mem2chunk(marray));
  }

  for (i = 0; i != n_elements; ++i)
    check_inuse_chunk(mem2chunk(marray[i]));
#endif

  return marray;
}
/*
  ------------------------------ valloc ------------------------------
*/

#if __STD_C
Void_t* vALLOc(size_t bytes)
#else
Void_t* vALLOc(bytes) size_t bytes;
#endif
{
  /* Ensure initialization */
  mstate av = get_malloc_state();
  if (av->max_fast == 0) malloc_consolidate(av);
  return mEMALIGn(av->pagesize, bytes);
}
/*
  ------------------------------ pvalloc ------------------------------
*/

#if __STD_C
Void_t* pVALLOc(size_t bytes)
#else
Void_t* pVALLOc(bytes) size_t bytes;
#endif
{
  mstate av = get_malloc_state();
  size_t pagesz;

  /* Ensure initialization */
  if (av->max_fast == 0) malloc_consolidate(av);
  pagesz = av->pagesize;
  return mEMALIGn(pagesz, (bytes + pagesz - 1) & ~(pagesz - 1));
}
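/*
  Worked example (editor's addition): with a 4096-byte page size, the
  rounding expression above maps requests as

      pvalloc(1)    -> memalign(4096, 4096)
      pvalloc(4096) -> memalign(4096, 4096)
      pvalloc(4097) -> memalign(4096, 8192)

  so the returned block is page-aligned and spans a whole number of pages.
*/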
/*
  ------------------------------ malloc_trim ------------------------------
*/

#if __STD_C
int mTRIm(size_t pad)
#else
int mTRIm(pad) size_t pad;
#endif
{
  mstate av = get_malloc_state();
  /* Ensure initialization/consolidation */
  malloc_consolidate(av);

#ifndef MORECORE_CANNOT_TRIM
  return sYSTRIm(pad, av);
#else
  return 0;
#endif
}
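/*
  Typical use (an added sketch, not from the original file), assuming the
  default public names: after releasing a large working set, hand unused
  top-of-heap memory back to the system, keeping pad bytes in reserve.

      free(big_buffer);
      malloc_trim(0);     // returns 1 if any memory was actually released
*/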
/*
  ------------------------- malloc_usable_size -------------------------
*/

#if __STD_C
size_t mUSABLe(Void_t* mem)
#else
size_t mUSABLe(mem) Void_t* mem;
#endif
{
  mchunkptr p;
  if (mem != 0) {
    p = mem2chunk(mem);
    if (chunk_is_mmapped(p))
      return chunksize(p) - 2*SIZE_SZ;
    else if (inuse(p))
      return chunksize(p) - SIZE_SZ;
  }
  return 0;
}
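/*
  Illustration (editor's addition), assuming the default public names:
  the usable size is at least the request, and may be larger because of
  padding and alignment.

      void* p = malloc(25);
      if (p != 0)
        assert(malloc_usable_size(p) >= 25);
*/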
/*
  ------------------------------ mallinfo ------------------------------
*/

struct mallinfo mALLINFo()
{
  mstate av = get_malloc_state();
  struct mallinfo mi;
  int i;
  mbinptr b;
  mchunkptr p;
  INTERNAL_SIZE_T avail;
  INTERNAL_SIZE_T fastavail;
  int nblocks;
  int nfastblocks;

  /* Ensure initialization */
  if (av->top == 0)  malloc_consolidate(av);

  check_malloc_state();

  /* Account for top */
  avail = chunksize(av->top);
  nblocks = 1;  /* top always exists */

  /* traverse fastbins */
  nfastblocks = 0;
  fastavail = 0;

  for (i = 0; i < NFASTBINS; ++i) {
    for (p = av->fastbins[i]; p != 0; p = p->fd) {
      ++nfastblocks;
      fastavail += chunksize(p);
    }
  }

  avail += fastavail;

  /* traverse regular bins */
  for (i = 1; i < NBINS; ++i) {
    b = bin_at(av, i);
    for (p = last(b); p != b; p = p->bk) {
      ++nblocks;
      avail += chunksize(p);
    }
  }

  mi.smblks = nfastblocks;
  mi.ordblks = nblocks;
  mi.fordblks = avail;
  mi.uordblks = av->sbrked_mem - avail;
  mi.arena = av->sbrked_mem;
  mi.hblks = av->n_mmaps;
  mi.hblkhd = av->mmapped_mem;
  mi.fsmblks = fastavail;
  mi.keepcost = chunksize(av->top);
  mi.usmblks = av->max_total_mem;
  return mi;
}
/*
  ------------------------------ malloc_stats ------------------------------
*/

void mSTATs()
{
  struct mallinfo mi = mALLINFo();

#ifdef WIN32
  {
    CHUNK_SIZE_T  free, reserved, committed;
    vminfo (&free, &reserved, &committed);
    fprintf(stderr, "free bytes       = %10lu\n",
            free);
    fprintf(stderr, "reserved bytes   = %10lu\n",
            reserved);
    fprintf(stderr, "committed bytes  = %10lu\n",
            committed);
  }
#endif

  fprintf(stderr, "max system bytes = %10lu\n",
          (CHUNK_SIZE_T)(mi.usmblks));
  fprintf(stderr, "system bytes     = %10lu\n",
          (CHUNK_SIZE_T)(mi.arena + mi.hblkhd));
  fprintf(stderr, "in use bytes     = %10lu\n",
          (CHUNK_SIZE_T)(mi.uordblks + mi.hblkhd));

#ifdef WIN32
  {
    CHUNK_SIZE_T  kernel, user;
    if (cpuinfo (TRUE, &kernel, &user)) {
      fprintf(stderr, "kernel ms        = %10lu\n",
              kernel);
      fprintf(stderr, "user ms          = %10lu\n",
              user);
    }
  }
#endif
}
/*
  ------------------------------ mallopt ------------------------------
*/

#if __STD_C
int mALLOPt(int param_number, int value)
#else
int mALLOPt(param_number, value) int param_number; int value;
#endif
{
  mstate av = get_malloc_state();
  /* Ensure initialization/consolidation */
  malloc_consolidate(av);

  switch(param_number) {
  case M_MXFAST:
    if (value >= 0 && value <= MAX_FAST_SIZE) {
      set_max_fast(av, value);
      return 1;
    }
    else
      return 0;

  case M_TRIM_THRESHOLD:
    av->trim_threshold = value;
    return 1;

  case M_TOP_PAD:
    av->top_pad = value;
    return 1;

  case M_MMAP_THRESHOLD:
    av->mmap_threshold = value;
    return 1;

  case M_MMAP_MAX:
#if !HAVE_MMAP
    if (value != 0)
      return 0;
#endif
    av->n_mmaps_max = value;
    return 1;

  default:
    return 0;
  }
}
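/*
  Usage sketch (editor's addition), assuming the default public name and
  the standard SVID/XPG parameter macros used in the switch above:

      mallopt(M_TRIM_THRESHOLD, 128*1024);  // trim top when >= 128K unused
      mallopt(M_MMAP_THRESHOLD, 256*1024);  // mmap requests >= 256K
      mallopt(M_MXFAST, 64);                // cache chunks up to 64 bytes

  Each call returns 1 on success and 0 for an unknown parameter or an
  out-of-range value.
*/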
/*
  -------------------- Alternative MORECORE functions --------------------

  General Requirements for MORECORE.

  The MORECORE function must have the following properties:

  If MORECORE_CONTIGUOUS is false:

    * MORECORE must allocate in multiples of pagesize. It will
      only be called with arguments that are multiples of pagesize.

    * MORECORE(0) must return an address that is at least
      MALLOC_ALIGNMENT aligned. (Page-aligning always suffices.)

  else (i.e. If MORECORE_CONTIGUOUS is true):

    * Consecutive calls to MORECORE with positive arguments
      return increasing addresses, indicating that space has been
      contiguously extended.

    * MORECORE need not allocate in multiples of pagesize.
      Calls to MORECORE need not have args of multiples of pagesize.

    * MORECORE need not page-align.

  In either case:

    * MORECORE may allocate more memory than requested. (Or even less,
      but this will generally result in a malloc failure.)

    * MORECORE must not allocate memory when given argument zero, but
      instead return one past the end address of memory from previous
      nonzero call. This malloc does NOT call MORECORE(0)
      until at least one call with positive arguments is made, so
      the initial value returned is not important.

    * Even though consecutive calls to MORECORE need not return contiguous
      addresses, it must be OK for malloc'ed chunks to span multiple
      regions in those cases where they do happen to be contiguous.

    * MORECORE need not handle negative arguments -- it may instead
      just return MORECORE_FAILURE when given negative arguments.
      Negative arguments are always multiples of pagesize. MORECORE
      must not misinterpret negative args as large positive unsigned
      args. You can suppress all such calls from even occurring by
      defining MORECORE_CANNOT_TRIM.

  There is some variation across systems about the type of the
  argument to sbrk/MORECORE. If size_t is unsigned, then it cannot
  actually be size_t, because sbrk supports negative args, so it is
  normally the signed type of the same width as size_t (sometimes
  declared as "intptr_t", and sometimes "ptrdiff_t").  It doesn't much
  matter though. Internally, we use "long" as arguments, which should
  work across all reasonable possibilities.

  Additionally, if MORECORE ever returns failure for a positive
  request, and HAVE_MMAP is true, then mmap is used as a noncontiguous
  system allocator. This is a useful backup strategy for systems with
  holes in address spaces -- in this case sbrk cannot contiguously
  expand the heap, but mmap may be able to map noncontiguous space.

  If you'd like mmap to ALWAYS be used, you can define MORECORE to be
  a function that always returns MORECORE_FAILURE.
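  For example (an added sketch, not part of the original text), the
  following hypothetical definition forces all system allocation
  through mmap:

      static void *fail_morecore (long size)
      {
        return (void *) MORECORE_FAILURE;  // malloc then falls back to mmap
      }

      #define MORECORE fail_morecore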
  Malloc only has limited ability to detect failures of MORECORE
  to supply contiguous space when it says it can. In particular,
  multithreaded programs that do not use locks may result in
  race conditions across calls to MORECORE that result in gaps
  that cannot be detected as such, and subsequent corruption.

  If you are using this malloc with something other than sbrk (or its
  emulation) to supply memory regions, you probably want to set
  MORECORE_CONTIGUOUS as false. As an example, here is a custom
  allocator kindly contributed for pre-OSX macOS.  It uses virtually
  but not necessarily physically contiguous non-paged memory (locked
  in, present and won't get swapped out).  You can use it by
  uncommenting this section, adding some #includes, and setting up the
  appropriate defines above:

      #define MORECORE osMoreCore
      #define MORECORE_CONTIGUOUS 0

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.
  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MORECORE_FAILURE;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((CHUNK_SIZE_T) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MORECORE_FAILURE;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/
/* --------------------------------------------------------------

  Emulation of sbrk for win32.
  Donated by J. Walter <Walter@GeNeSys-e.de>.
  For additional information about this code, and malloc on Win32, see
     http://www.genesys-e.de/jwalter/
*/

#ifdef WIN32

#ifdef _DEBUG
/* #define TRACE */
#endif
/* Support for USE_MALLOC_LOCK */
#ifdef USE_MALLOC_LOCK

/* Wait for spin lock */
static int slwait (int *sl) {
    while (InterlockedCompareExchange ((void **) sl, (void *) 1, (void *) 0) != 0)
        Sleep (0);
    return 0;
}

/* Release spin lock */
static int slrelease (int *sl) {
    InterlockedExchange (sl, 0);
    return 0;
}

#ifdef NEEDED
/* Spin lock for emulation code */
static int g_sl;
#endif

#endif /* USE_MALLOC_LOCK */
/* getpagesize for windows */
static long getpagesize (void) {
    static long g_pagesize = 0;
    if (! g_pagesize) {
        SYSTEM_INFO system_info;
        GetSystemInfo (&system_info);
        g_pagesize = system_info.dwPageSize;
    }
    return g_pagesize;
}

static long getregionsize (void) {
    static long g_regionsize = 0;
    if (! g_regionsize) {
        SYSTEM_INFO system_info;
        GetSystemInfo (&system_info);
        g_regionsize = system_info.dwAllocationGranularity;
    }
    return g_regionsize;
}
/* A region list entry */
typedef struct _region_list_entry {
    void *top_allocated;
    void *top_committed;
    void *top_reserved;
    long reserve_size;
    struct _region_list_entry *previous;
} region_list_entry;

/* Allocate and link a region entry in the region list */
static int region_list_append (region_list_entry **last, void *base_reserved, long reserve_size) {
    region_list_entry *next = HeapAlloc (GetProcessHeap (), 0, sizeof (region_list_entry));
    if (! next)
        return FALSE;
    next->top_allocated = (char *) base_reserved;
    next->top_committed = (char *) base_reserved;
    next->top_reserved = (char *) base_reserved + reserve_size;
    next->reserve_size = reserve_size;
    next->previous = *last;
    *last = next;
    return TRUE;
}

/* Free and unlink the last region entry from the region list */
static int region_list_remove (region_list_entry **last) {
    region_list_entry *previous = (*last)->previous;
    if (! HeapFree (GetProcessHeap (), sizeof (region_list_entry), *last))
        return FALSE;
    *last = previous;
    return TRUE;
}

#define CEIL(size,to)   (((size)+(to)-1)&~((to)-1))
#define FLOOR(size,to)  ((size)&~((to)-1))

#define SBRK_SCALE  0
/* #define SBRK_SCALE  1 */
/* #define SBRK_SCALE  2 */
/* #define SBRK_SCALE  4 */
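/*
  Editor's note: CEIL rounds up and FLOOR rounds down to a multiple of a
  power-of-two boundary, e.g. CEIL(5000, 4096) == 8192 and
  FLOOR(5000, 4096) == 4096. Both require "to" to be a power of 2.
*/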
/* sbrk for windows */
static void *sbrk (long size) {
    static long g_pagesize, g_my_pagesize;
    static long g_regionsize, g_my_regionsize;
    static region_list_entry *g_last;
    void *result = (void *) MORECORE_FAILURE;
#ifdef TRACE
    printf ("sbrk %d\n", size);
#endif
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Wait for spin lock */
    slwait (&g_sl);
#endif
    /* First time initialization */
    if (! g_pagesize) {
        g_pagesize = getpagesize ();
        g_my_pagesize = g_pagesize << SBRK_SCALE;
    }
    if (! g_regionsize) {
        g_regionsize = getregionsize ();
        g_my_regionsize = g_regionsize << SBRK_SCALE;
    }
    if (! g_last) {
        if (! region_list_append (&g_last, 0, 0))
            goto sbrk_exit;
    }
    /* Assert invariants */
    assert (g_last);
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
            g_last->top_allocated <= g_last->top_committed);
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
            g_last->top_committed <= g_last->top_reserved &&
            (unsigned) g_last->top_committed % g_pagesize == 0);
    assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
    assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
    /* Allocation requested? */
    if (size >= 0) {
        /* Allocation size is the requested size */
        long allocate_size = size;
        /* Compute the size to commit */
        long to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
        /* Do we reach the commit limit? */
        if (to_commit > 0) {
            /* Round size to commit */
            long commit_size = CEIL (to_commit, g_my_pagesize);
            /* Compute the size to reserve */
            long to_reserve = (char *) g_last->top_committed + commit_size - (char *) g_last->top_reserved;
            /* Do we reach the reserve limit? */
            if (to_reserve > 0) {
                /* Compute the remaining size to commit in the current region */
                long remaining_commit_size = (char *) g_last->top_reserved - (char *) g_last->top_committed;
                if (remaining_commit_size > 0) {
                    /* Assert preconditions */
                    assert ((unsigned) g_last->top_committed % g_pagesize == 0);
                    assert (0 < remaining_commit_size && remaining_commit_size % g_pagesize == 0); {
                        /* Commit this */
                        void *base_committed = VirtualAlloc (g_last->top_committed, remaining_commit_size,
                                                             MEM_COMMIT, PAGE_READWRITE);
                        /* Check returned pointer for consistency */
                        if (base_committed != g_last->top_committed)
                            goto sbrk_exit;
                        /* Assert postconditions */
                        assert ((unsigned) base_committed % g_pagesize == 0);
#ifdef TRACE
                        printf ("Commit %p %d\n", base_committed, remaining_commit_size);
#endif
                        /* Adjust the regions commit top */
                        g_last->top_committed = (char *) base_committed + remaining_commit_size;
                    }
                } {
                    /* Now we are going to search and reserve. */
                    int contiguous = -1;
                    int found = FALSE;
                    MEMORY_BASIC_INFORMATION memory_info;
                    void *base_reserved;
                    long reserve_size;
                    do {
                        /* Assume contiguous memory */
                        contiguous = TRUE;
                        /* Round size to reserve */
                        reserve_size = CEIL (to_reserve, g_my_regionsize);
                        /* Start with the current region's top */
                        memory_info.BaseAddress = g_last->top_reserved;
                        /* Assert preconditions */
                        assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
                        assert (0 < reserve_size && reserve_size % g_regionsize == 0);
                        while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
                            /* Assert postconditions */
                            assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
#ifdef TRACE
                            printf ("Query %p %d %s\n", memory_info.BaseAddress, memory_info.RegionSize,
                                    memory_info.State == MEM_FREE ? "FREE":
                                    (memory_info.State == MEM_RESERVE ? "RESERVED":
                                     (memory_info.State == MEM_COMMIT ? "COMMITTED": "?")));
#endif
                            /* Region is free, well aligned and big enough: we are done */
                            if (memory_info.State == MEM_FREE &&
                                (unsigned) memory_info.BaseAddress % g_regionsize == 0 &&
                                memory_info.RegionSize >= (unsigned) reserve_size) {
                                found = TRUE;
                                break;
                            }
                            /* From now on we can't get contiguous memory! */
                            contiguous = FALSE;
                            /* Recompute size to reserve */
                            reserve_size = CEIL (allocate_size, g_my_regionsize);
                            memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
                            /* Assert preconditions */
                            assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
                            assert (0 < reserve_size && reserve_size % g_regionsize == 0);
                        }
                        /* Search failed? */
                        if (! found)
                            goto sbrk_exit;
                        /* Assert preconditions */
                        assert ((unsigned) memory_info.BaseAddress % g_regionsize == 0);
                        assert (0 < reserve_size && reserve_size % g_regionsize == 0);
                        /* Try to reserve this */
                        base_reserved = VirtualAlloc (memory_info.BaseAddress, reserve_size,
                                                      MEM_RESERVE, PAGE_NOACCESS);
                        if (! base_reserved) {
                            int rc = GetLastError ();
                            if (rc != ERROR_INVALID_ADDRESS)
                                goto sbrk_exit;
                        }
                        /* A null pointer signals (hopefully) a race condition with another thread. */
                        /* In this case, we try again. */
                    } while (! base_reserved);
                    /* Check returned pointer for consistency */
                    if (memory_info.BaseAddress && base_reserved != memory_info.BaseAddress)
                        goto sbrk_exit;
                    /* Assert postconditions */
                    assert ((unsigned) base_reserved % g_regionsize == 0);
#ifdef TRACE
                    printf ("Reserve %p %d\n", base_reserved, reserve_size);
#endif
                    /* Did we get contiguous memory? */
                    if (contiguous) {
                        long start_size = (char *) g_last->top_committed - (char *) g_last->top_allocated;
                        /* Adjust allocation size */
                        allocate_size -= start_size;
                        /* Adjust the regions allocation top */
                        g_last->top_allocated = g_last->top_committed;
                        /* Recompute the size to commit */
                        to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
                        /* Round size to commit */
                        commit_size = CEIL (to_commit, g_my_pagesize);
                    }
                    /* Append the new region to the list */
                    if (! region_list_append (&g_last, base_reserved, reserve_size))
                        goto sbrk_exit;
                    /* Didn't we get contiguous memory? */
                    if (! contiguous) {
                        /* Recompute the size to commit */
                        to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
                        /* Round size to commit */
                        commit_size = CEIL (to_commit, g_my_pagesize);
                    }
                }
            }
            /* Assert preconditions */
            assert ((unsigned) g_last->top_committed % g_pagesize == 0);
            assert (0 < commit_size && commit_size % g_pagesize == 0); {
                /* Commit this */
                void *base_committed = VirtualAlloc (g_last->top_committed, commit_size,
                                                     MEM_COMMIT, PAGE_READWRITE);
                /* Check returned pointer for consistency */
                if (base_committed != g_last->top_committed)
                    goto sbrk_exit;
                /* Assert postconditions */
                assert ((unsigned) base_committed % g_pagesize == 0);
#ifdef TRACE
                printf ("Commit %p %d\n", base_committed, commit_size);
#endif
                /* Adjust the regions commit top */
                g_last->top_committed = (char *) base_committed + commit_size;
            }
        }
        /* Adjust the regions allocation top */
        g_last->top_allocated = (char *) g_last->top_allocated + allocate_size;
        result = (char *) g_last->top_allocated - size;
    /* Deallocation requested? */
    } else if (size < 0) {
        long deallocate_size = - size;
        /* As long as we have a region to release */
        while ((char *) g_last->top_allocated - deallocate_size < (char *) g_last->top_reserved - g_last->reserve_size) {
            /* Get the size to release */
            long release_size = g_last->reserve_size;
            /* Get the base address */
            void *base_reserved = (char *) g_last->top_reserved - release_size;
            /* Assert preconditions */
            assert ((unsigned) base_reserved % g_regionsize == 0);
            assert (0 < release_size && release_size % g_regionsize == 0); {
                /* Release this */
                int rc = VirtualFree (base_reserved, 0,
                                      MEM_RELEASE);
                /* Check returned code for consistency */
                if (! rc)
                    goto sbrk_exit;
#ifdef TRACE
                printf ("Release %p %d\n", base_reserved, release_size);
#endif
            }
            /* Adjust deallocation size */
            deallocate_size -= (char *) g_last->top_allocated - (char *) base_reserved;
            /* Remove the old region from the list */
            if (! region_list_remove (&g_last))
                goto sbrk_exit;
        } {
            /* Compute the size to decommit */
            long to_decommit = (char *) g_last->top_committed - ((char *) g_last->top_allocated - deallocate_size);
            if (to_decommit >= g_my_pagesize) {
                /* Compute the size to decommit */
                long decommit_size = FLOOR (to_decommit, g_my_pagesize);
                /* Compute the base address */
                void *base_committed = (char *) g_last->top_committed - decommit_size;
                /* Assert preconditions */
                assert ((unsigned) base_committed % g_pagesize == 0);
                assert (0 < decommit_size && decommit_size % g_pagesize == 0); {
                    /* Decommit this */
                    int rc = VirtualFree ((char *) base_committed, decommit_size,
                                          MEM_DECOMMIT);
                    /* Check returned code for consistency */
                    if (! rc)
                        goto sbrk_exit;
#ifdef TRACE
                    printf ("Decommit %p %d\n", base_committed, decommit_size);
#endif
                }
                /* Adjust deallocation size and regions commit and allocate top */
                deallocate_size -= (char *) g_last->top_allocated - (char *) base_committed;
                g_last->top_committed = base_committed;
                g_last->top_allocated = base_committed;
            }
        }
        /* Adjust regions allocate top */
        g_last->top_allocated = (char *) g_last->top_allocated - deallocate_size;
        /* Check for underflow */
        if ((char *) g_last->top_reserved - g_last->reserve_size > (char *) g_last->top_allocated ||
            g_last->top_allocated > g_last->top_committed) {
            /* Adjust regions allocate top */
            g_last->top_allocated = (char *) g_last->top_reserved - g_last->reserve_size;
            goto sbrk_exit;
        }
        result = g_last->top_allocated;
    }
    /* Assert invariants */
    assert (g_last);
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
            g_last->top_allocated <= g_last->top_committed);
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
            g_last->top_committed <= g_last->top_reserved &&
            (unsigned) g_last->top_committed % g_pagesize == 0);
    assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
    assert ((unsigned) g_last->reserve_size % g_regionsize == 0);

sbrk_exit:
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Release spin lock */
    slrelease (&g_sl);
#endif
    return result;
}
/* mmap for windows */
static void *mmap (void *ptr, long size, long prot, long type, long handle, long arg) {
    static long g_pagesize;
    static long g_regionsize;
#ifdef TRACE
    printf ("mmap %d\n", size);
#endif
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Wait for spin lock */
    slwait (&g_sl);
#endif
    /* First time initialization */
    if (! g_pagesize)
        g_pagesize = getpagesize ();
    if (! g_regionsize)
        g_regionsize = getregionsize ();
    /* Assert preconditions */
    assert ((unsigned) ptr % g_regionsize == 0);
    assert (size % g_pagesize == 0);
    /* Allocate this */
    ptr = VirtualAlloc (ptr, size,
                        MEM_RESERVE | MEM_COMMIT | MEM_TOP_DOWN, PAGE_READWRITE);
    if (! ptr) {
        ptr = (void *) MORECORE_FAILURE;
        goto mmap_exit;
    }
    /* Assert postconditions */
    assert ((unsigned) ptr % g_regionsize == 0);
#ifdef TRACE
    printf ("Commit %p %d\n", ptr, size);
#endif
mmap_exit:
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Release spin lock */
    slrelease (&g_sl);
#endif
    return ptr;
}
/* munmap for windows */
static long munmap (void *ptr, long size) {
    static long g_pagesize;
    static long g_regionsize;
    int rc = MUNMAP_FAILURE;
#ifdef TRACE
    printf ("munmap %p %d\n", ptr, size);
#endif
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Wait for spin lock */
    slwait (&g_sl);
#endif
    /* First time initialization */
    if (! g_pagesize)
        g_pagesize = getpagesize ();
    if (! g_regionsize)
        g_regionsize = getregionsize ();
    /* Assert preconditions */
    assert ((unsigned) ptr % g_regionsize == 0);
    assert (size % g_pagesize == 0);
    /* Free this */
    if (! VirtualFree (ptr, 0,
                       MEM_RELEASE))
        goto munmap_exit;
    rc = 0;
#ifdef TRACE
    printf ("Release %p %d\n", ptr, size);
#endif
munmap_exit:
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Release spin lock */
    slrelease (&g_sl);
#endif
    return rc;
}
static void vminfo (CHUNK_SIZE_T *free, CHUNK_SIZE_T *reserved, CHUNK_SIZE_T *committed) {
    MEMORY_BASIC_INFORMATION memory_info;
    memory_info.BaseAddress = 0;
    *free = *reserved = *committed = 0;
    while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
        switch (memory_info.State) {
        case MEM_FREE:
            *free += memory_info.RegionSize;
            break;
        case MEM_RESERVE:
            *reserved += memory_info.RegionSize;
            break;
        case MEM_COMMIT:
            *committed += memory_info.RegionSize;
            break;
        }
        memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
    }
}
static int cpuinfo (int whole, CHUNK_SIZE_T *kernel, CHUNK_SIZE_T *user) {
    if (whole) {
        __int64 creation64, exit64, kernel64, user64;
        int rc = GetProcessTimes (GetCurrentProcess (),
                                  (FILETIME *) &creation64,
                                  (FILETIME *) &exit64,
                                  (FILETIME *) &kernel64,
                                  (FILETIME *) &user64);
        if (! rc) {
            *kernel = 0;
            *user = 0;
            return FALSE;
        }
        *kernel = (CHUNK_SIZE_T) (kernel64 / 10000);
        *user = (CHUNK_SIZE_T) (user64 / 10000);
        return TRUE;
    } else {
        __int64 creation64, exit64, kernel64, user64;
        int rc = GetThreadTimes (GetCurrentThread (),
                                 (FILETIME *) &creation64,
                                 (FILETIME *) &exit64,
                                 (FILETIME *) &kernel64,
                                 (FILETIME *) &user64);
        if (! rc) {
            *kernel = 0;
            *user = 0;
            return FALSE;
        }
        *kernel = (CHUNK_SIZE_T) (kernel64 / 10000);
        *user = (CHUNK_SIZE_T) (user64 / 10000);
        return TRUE;
    }
}

#endif /* WIN32 */
/* ------------------------------------------------------------

  History:

    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sYSMALLOc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
      * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
        (e.g. WIN32 platforms)
      * Cleanup header file inclusion for WIN32 platforms
      * Cleanup code to avoid Microsoft Visual C++ compiler complaints
      * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
        memory allocation routines
      * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
      * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
        usage of 'assert' in non-WIN32 code
      * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
        avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/