path: root/malloc/malloc.c
Commit message (Author, Date; files changed, lines deleted/added)
* Base max_fast on alignment, not width, of bins (Bug 24903) (DJ Delorie, 2019-10-31; 1 file, -1/+1)

  set_max_fast sets the "impossibly small" value based, eventually, on
  MALLOC_ALIGNMENT.  The comparisons for the smallest chunk use,
  eventually, MIN_CHUNK_SIZE.  Note that i386 is the only platform where
  these are the same, so a smallest chunk *would* be put in a no-fastbins
  fastbin.

  This change calculates the "impossibly small" value based on
  MIN_CHUNK_SIZE instead, so that we can know it will always be impossibly
  small.

  (cherry picked from commit ff12e0fb91b9072800f031cb21fb2651ee7b6251)

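The fix is a one-line change to the set_max_fast macro. A paraphrased
sketch of the idea (not the verbatim diff; macro names as in malloc.c):

    /* Derive the "impossibly small" cutoff from MIN_CHUNK_SIZE rather
       than from the alignment-derived bin width, so that no real chunk
       can ever fall at or below it when fastbins are disabled.  */
    #define set_max_fast(s)                                              \
      global_max_fast = (((s) == 0)                                      \
                         ? MIN_CHUNK_SIZE / 2                            \
                         : (((s) + SIZE_SZ) & ~MALLOC_ALIGN_MASK))
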
* malloc: Fix missing accounting of top chunk in malloc_info [BZ #24026] (Niklas Hambüchen, 2019-10-30; 1 file, -0/+6)

  Fixes <total type="rest" size="..."> incorrectly showing as 0 most of
  the time.

  The rest value being wrong is significant because to compute the actual
  amount of memory handed out via malloc, the user must subtract it from
  <system type="current" size="...">.  That result being wrong makes
  investigating memory fragmentation issues like
  <https://bugzilla.redhat.com/show_bug.cgi?id=843478> close to impossible.

  (cherry picked from commit b6d2c4475d5abc05dd009575b90556bdd3c78ad0)

* malloc: Remove unwanted leading whitespace in malloc_info [BZ #24867] (Florian Weimer, 2019-10-30; 1 file, -1/+1)

  It was introduced in commit 6c8dbf00f536d78b1937b5af6f57be47fd376344
  ("Reformat malloc to gnu style.").

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
  (cherry picked from commit b0f6679bcd738ea244a14acd879d974901e56c8e)

* Add glibc.malloc.mxfast tunable (DJ Delorie, 2019-10-30; 1 file, -7/+14)

  * elf/dl-tunables.list: Add glibc.malloc.mxfast.
  * manual/tunables.texi: Document it.
  * malloc/malloc.c (do_set_mxfast): New.
  (__libc_mallopt): Call it.
  * malloc/arena.c: Add mxfast tunable.
  * malloc/tst-mxfast.c: New.
  * malloc/Makefile: Add it.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
  (cherry picked from commit c48d92b430c480de06762f80c104922239416826)

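A simplified sketch of what such a tunable handler looks like (probe and
tracing hooks omitted; treat the exact body as an assumption, not the
verbatim commit):

    /* Accept the tunable only if it fits the representable fastbin
       range; anything larger is rejected and mallopt reports failure.  */
    static __always_inline int
    do_set_mxfast (size_t value)
    {
      if (value <= MAX_FAST_SIZE)
        {
          set_max_fast (value);
          return 1;
        }
      return 0;
    }
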
* Small tcache improvements (Wilco Dijkstra, 2019-10-30; 1 file, -8/+6)

  Change the tcache->counts[] entries to uint16_t - this removes the limit
  set by char and allows a larger tcache.  Remove a few redundant asserts.
  bench-malloc-thread with 4 threads is ~15% faster on Cortex-A72.

  Reviewed-by: DJ Delorie <dj@redhat.com>

  * malloc/malloc.c (MAX_TCACHE_COUNT): Increase to UINT16_MAX.
  (tcache_put): Remove redundant assert.
  (tcache_get): Remove redundant asserts.
  (__libc_malloc): Check tcache count is not zero.
  * manual/tunables.texi (glibc.malloc.tcache_count): Update maximum.

  (cherry picked from commit 1f50f2ad854c84ead522bfc7331b46dbe6057d53)

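The data-structure change is small; a sketch of the per-thread cache
structure after this commit (field names as in malloc.c):

    typedef struct tcache_perthread_struct
    {
      uint16_t counts[TCACHE_MAX_BINS];   /* was char; uint16_t lifts the
                                             127-entry per-bin ceiling */
      tcache_entry *entries[TCACHE_MAX_BINS];
    } tcache_perthread_struct;

    #define MAX_TCACHE_COUNT UINT16_MAX  /* largest tunable tcache_count */
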
* Fix assertion in malloc.c:tcache_get. (Joseph Myers, 2019-10-30; 1 file, -1/+1)

  One of the warnings that appears with -Wextra is "ordered comparison of
  pointer with integer zero" in malloc.c:tcache_get, for the assertion:

    assert (tcache->entries[tc_idx] > 0);

  Indeed, a "> 0" comparison does not make sense for
  tcache->entries[tc_idx], which is a pointer.  My guess is that
  tcache->counts[tc_idx] is what's intended here, and this patch changes
  the assertion accordingly.

  Tested for x86_64.

  * malloc/malloc.c (tcache_get): Compare tcache->counts[tc_idx] with 0,
  not tcache->entries[tc_idx].

  (cherry picked from commit 77dc0d8643aa99c92bf671352b0a8adde705896f)

* Fix tcache count maximum (BZ #24531) (Wilco Dijkstra, 2019-05-22; 1 file, -2/+7)

  The tcache counts[] array is a char, which has a very small range and
  thus may overflow.  When setting the tcache_count tunable, there is no
  overflow check.  However the tunable must not be larger than the maximum
  value of the tcache counts[] array, otherwise it can overflow when
  filling the tcache.

  [BZ #24531]
  * malloc/malloc.c (MAX_TCACHE_COUNT): New define.
  (do_set_tcache_count): Only update if count is small enough.
  * manual/tunables.texi (glibc.malloc.tcache_count): Document max value.

  (cherry picked from commit 5ad533e8e65092be962e414e0417112c65d154fb)

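A sketch of the clamped tunable handler (at this point counts[] was
still a char, hence the small maximum):

    /* With char counts[], any tunable value above this could overflow
       the per-bin counter when the tcache fills.  */
    #define MAX_TCACHE_COUNT 127

    static __always_inline int
    do_set_tcache_count (size_t value)
    {
      if (value <= MAX_TCACHE_COUNT)   /* silently ignore larger values */
        mp_.tcache_count = value;
      return 1;
    }
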
* malloc: Check for large bin list corruption when inserting unsorted chunk (Adam Maris, 2019-05-02; 1 file, -0/+4)

  Fixes bug 24216.  This patch adds security checks for the bk and
  bk_nextsize pointers of chunks in a large bin when inserting a chunk
  from the unsorted bin.  It was possible to write the pointer to victim
  (the newly inserted chunk) to arbitrary memory locations if the bk or
  bk_nextsize pointers of the next large bin chunk got corrupted.

  (cherry picked from commit 5b06f538c5aee0389ed034f60d90a8884d6d54de)

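A condensed sketch of the added checks in _int_malloc's binning loop
(surrounding list-manipulation code omitted; error strings follow the
commit):

    /* While inserting an unsorted chunk in front of fwd in a large bin:
       verify the nextsize ring and the bk link before writing victim's
       address through them.  */
    if (__glibc_unlikely (fwd->bk_nextsize->fd_nextsize != fwd))
      malloc_printerr ("malloc(): largebin double linked list corrupted (nextsize)");
    /* ... and on the plain doubly linked list: */
    if (bck->fd != fwd)
      malloc_printerr ("malloc(): largebin double linked list corrupted (bk)");
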
* malloc: Check the alignment of mmapped chunks before unmapping. (Istvan Kurucsai, 2019-05-02; 1 file, -1/+4)

  * malloc/malloc.c (munmap_chunk): Verify chunk alignment.

  (cherry picked from commit c0e82f117357a941e4d40fcc08babbd6a3c3a1b5)

* malloc: Add more integrity checks to mremap_chunk. (Istvan Kurucsai, 2019-05-02; 1 file, -3/+9)

  * malloc/malloc.c (mremap_chunk): Additional checks.

  (cherry picked from commit ebe544bf6e8eec35e754fd49efb027c6f161b6cb)

* malloc: Add ChangeLog for accidentally committed change (Florian Weimer, 2019-05-02; 1 file, -1/+1)

  Commit b90ddd08f6dd688e651df9ee89ca3a69ff88cd0c ("malloc: Additional
  checks for unsorted bin integrity I.") was committed without a
  whitespace fix, so it is adjusted here as well.

  (cherry picked from commit 35cfefd96062145eeb8aee6bd72d07e0909a6b2e)

* malloc: Always call memcpy in _int_realloc [BZ #24027] (Florian Weimer, 2019-01-01; 1 file, -42/+1)

  This commit removes the custom memcpy implementation from _int_realloc
  for small chunk sizes.  The ncopies variable has the wrong type, and an
  integer wraparound could cause the existing code to copy too few
  elements (leaving the new memory region mostly uninitialized).
  Therefore, removing this code fixes bug 24027.

  (cherry picked from commit b50dd3bc8cbb1efe85399b03d7e6c0310c2ead84)

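After the removal, the move path of _int_realloc reduces to a single
call; roughly:

    /* _int_realloc, after allocating a new chunk: copy the old usable
       payload in one shot instead of the hand-rolled word-copy loop
       whose INTERNAL_SIZE_T ncopies could wrap around.  */
    memcpy (newmem, chunk2mem (oldp), oldsize - SIZE_SZ);
    _int_free (av, oldp, 1);
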
* malloc: tcache double free check (DJ Delorie, 2018-11-28; 1 file, -6/+34)

  * malloc/malloc.c (tcache_entry): Add key field.
  (tcache_put): Set it.
  (tcache_get): Likewise.
  (_int_free): Check for double free in tcache.
  * malloc/tst-tcfree1.c: New.
  * malloc/tst-tcfree2.c: New.
  * malloc/Makefile: Run the new tests.
  * manual/probes.texi: Document memory_tcache_double_free probe.
  * dlfcn/dlerror.c (check_free): Prevent double frees.

  (cherry picked from commit bcdaad21d4635931d1bd3b54a7894276925d081d)

  malloc: tcache: Validate tc_idx before checking for double-frees
  [BZ #23907]

  The previous check could read beyond the end of the tcache entry array.
  If the e->key == tcache cookie check happened to pass, this would result
  in crashes.

  (cherry picked from commit affec03b713c82c43a5b025dddc21bde3334f41e)

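A condensed sketch of the mechanism, with the BZ #23907 fix folded in
(the real code also fires a probe before scanning the bin):

    typedef struct tcache_entry
    {
      struct tcache_entry *next;
      /* Set while the chunk is sitting in the tcache.  */
      struct tcache_perthread_struct *key;
    } tcache_entry;

    /* In _int_free: only consult e->key once tc_idx is known to be in
       bounds (the BZ #23907 fix); the key can also match by accident.  */
    size_t tc_idx = csize2tidx (size);
    if (tcache != NULL && tc_idx < mp_.tcache_bins)
      {
        tcache_entry *e = (tcache_entry *) chunk2mem (p);
        if (__glibc_unlikely (e->key == tcache))
          {
            /* Possible double free; confirm by walking the bin.  */
            for (tcache_entry *tmp = tcache->entries[tc_idx];
                 tmp != NULL; tmp = tmp->next)
              if (tmp == e)
                malloc_printerr ("free(): double free detected in tcache 2");
          }
      }
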
* Revert "malloc: tcache double free check" [BZ #23907] (Florian Weimer, 2018-11-22; 1 file, -28/+0)

  This reverts commit 481a6cf0c24f02f251d7cd0b776c12d00e6b144f, the
  backport of commit bcdaad21d4635931d1bd3b54a7894276925d081d on the
  master branch.

* malloc: tcache double free check (DJ Delorie, 2018-11-20; 1 file, -0/+28)

  * malloc/malloc.c (tcache_entry): Add key field.
  (tcache_put): Set it.
  (tcache_get): Likewise.
  (_int_free): Check for double free in tcache.
  * malloc/tst-tcfree1.c: New.
  * malloc/tst-tcfree2.c: New.
  * malloc/Makefile: Run the new tests.
  * manual/probes.texi: Document memory_tcache_double_free probe.
  * dlfcn/dlerror.c (check_free): Prevent double frees.

* malloc: Additional checks for unsorted bin integrity I. (Istvan Kurucsai, 2018-11-09; 1 file, -4/+15)

  On Thu, Jan 11, 2018 at 3:50 PM, Florian Weimer <fweimer@redhat.com> wrote:
  > On 11/07/2017 04:27 PM, Istvan Kurucsai wrote:
  >>
  >> +          next = chunk_at_offset (victim, size);
  >
  > For new code, we prefer declarations with initializers.

  Noted.

  >> +          if (__glibc_unlikely (chunksize_nomask (victim) <= 2 * SIZE_SZ)
  >> +              || __glibc_unlikely (chunksize_nomask (victim) > av->system_mem))
  >> +            malloc_printerr("malloc(): invalid size (unsorted)");
  >> +          if (__glibc_unlikely (chunksize_nomask (next) < 2 * SIZE_SZ)
  >> +              || __glibc_unlikely (chunksize_nomask (next) > av->system_mem))
  >> +            malloc_printerr("malloc(): invalid next size (unsorted)");
  >> +          if (__glibc_unlikely ((prev_size (next) & ~(SIZE_BITS)) != size))
  >> +            malloc_printerr("malloc(): mismatching next->prev_size (unsorted)");
  >
  > I think this check is redundant because prev_size (next) and chunksize
  > (victim) are loaded from the same memory location.

  I'm fairly certain that it compares mchunk_size of victim against
  mchunk_prev_size of the next chunk, i.e. the size of victim in its
  header and footer.

  >> +          if (__glibc_unlikely (bck->fd != victim)
  >> +              || __glibc_unlikely (victim->fd != unsorted_chunks (av)))
  >> +            malloc_printerr("malloc(): unsorted double linked list corrupted");
  >> +          if (__glibc_unlikely (prev_inuse(next)))
  >> +            malloc_printerr("malloc(): invalid next->prev_inuse (unsorted)");
  >
  > There's a missing space after malloc_printerr.

  Noted.

  > Why do you keep using chunksize_nomask?  We never investigated why the
  > original code uses it.  It may have been an accident.

  You are right, I don't think it makes a difference in these checks.  So
  the size local can be reused for the checks against victim.  For next,
  leaving it as such avoids the masking operation.

  > Again, for non-main arenas, the checks against av->system_mem could be
  > made tighter (against the heap size).  Maybe you could put the
  > condition into a separate inline function?

  We could also do a chunk boundary check similar to what I proposed in
  the thread for the first patch in the series to be even more strict.
  I'll gladly try to implement either but believe that refining these
  checks would bring less benefit than in the case of the top chunk.
  Intra-arena or intra-heap overlaps would still be doable here with
  unsorted chunks and I don't see any way to counter that besides more
  generic measures like randomizing allocations and your metadata
  encoding patches.

  I've attached a revised version with the above comments incorporated
  but without the refined checks.

  Thanks,
  Istvan

  From a12d5d40fd7aed5fa10fc444dcb819947b72b315 Mon Sep 17 00:00:00 2001
  From: Istvan Kurucsai <pistukem@gmail.com>
  Date: Tue, 16 Jan 2018 14:48:16 +0100
  Subject: [PATCH v2 1/1] malloc: Additional checks for unsorted bin
   integrity I.

  Ensure the following properties of chunks encountered during binning:
  - victim chunk has reasonable size
  - next chunk has reasonable size
  - next->prev_size == victim->size
  - valid double linked list
  - PREV_INUSE of next chunk is unset

  * malloc/malloc.c (_int_malloc): Additional binning code checks.

  (cherry picked from commit b90ddd08f6dd688e651df9ee89ca3a69ff88cd0c)

* malloc: Mitigate null-byte overflow attacks (Moritz Eckert, 2018-11-09; 1 file, -0/+4)

  * malloc/malloc.c (_int_free): Check for corrupt prev_size vs size.
  (malloc_consolidate): Likewise.

  (cherry picked from commit d6db68e66dff25d12c3bc5641b60cbd7fb6ab44f)

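The check in _int_free's backward-consolidation path looks essentially
like this (the same idea is applied in malloc_consolidate):

    /* Consolidate backward: the size recorded in the previous chunk's
       header must match the prev_size copy stored in our footer; a
       single-null-byte overflow typically desynchronizes the two.  */
    if (!prev_inuse (p))
      {
        prevsize = prev_size (p);
        size += prevsize;
        p = chunk_at_offset (p, -((long) prevsize));
        if (__glibc_unlikely (chunksize (p) != prevsize))
          malloc_printerr ("corrupted size vs. prev_size");
        unlink (av, p, bck, fwd);
      }
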
* malloc: Verify size of top chunk. (Pochang Chen, 2018-11-09; 1 file, -0/+3)

  The House of Force is a well-known technique to exploit heap overflow.
  In essence, this exploit takes three steps:
  1. Overwrite the size of top chunk with a very large value (e.g. -1).
  2. Request x bytes from top chunk.  As the size of top chunk is
     corrupted, x can be arbitrarily large and top chunk will still be
     offset by x.
  3. The next allocation from top chunk will thus be controllable.

  If we verify the size of top chunk at step 2, we can stop such an
  attack.

  (cherry picked from commit 30a17d8c95fbfb15c52d1115803b63aaa73a285c)

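The added verification in _int_malloc is essentially:

    /* Use the top chunk only if its recorded size is plausible; a
       House-of-Force overwrite makes it exceed everything the arena
       ever obtained from the OS.  */
    victim = av->top;
    size = chunksize (victim);

    if (__glibc_unlikely (size > av->system_mem))
      malloc_printerr ("malloc(): corrupted top size");
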
* malloc: Update heap dumping/undumping comments [BZ #23351] (Florian Weimer, 2018-06-29; 1 file, -16/+0)

  Also remove a few now-unused declarations and definitions.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>

* malloc: harden removal from unsorted list (Francois Goichon, 2018-03-14; 1 file, -0/+2)

  * malloc/malloc.c (_int_malloc): Added check before removing from
  unsorted list.

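The added check in _int_malloc, just before victim is unlinked from the
unsorted list, is essentially:

    /* An attacker-controlled bck would let the two writes below store
       the address of unsorted_chunks (av) to an arbitrary location.  */
    if (__glibc_unlikely (bck->fd != victim))
      malloc_printerr ("malloc(): corrupted unsorted chunks 3");
    unsorted_chunks (av)->bk = bck;
    bck->fd = unsorted_chunks (av);
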
* malloc: Revert sense of prev_inuse in comments (Florian Weimer, 2018-03-09; 1 file, -3/+3)

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>

* Mechanically remove _IO_ name aliases for types and constants. (Zack Weinberg, 2018-02-21; 1 file, -3/+3)

  This patch mechanically removes all remaining uses, and the
  definitions, of the following libio name aliases:

    name                        replaced with
    ----                        -------------
    _IO_FILE                    FILE
    _IO_fpos_t                  __fpos_t
    _IO_fpos64_t                __fpos64_t
    _IO_size_t                  size_t
    _IO_ssize_t                 ssize_t or __ssize_t
    _IO_off_t                   off_t
    _IO_off64_t                 off64_t
    _IO_pid_t                   pid_t
    _IO_uid_t                   uid_t
    _IO_wint_t                  wint_t
    _IO_va_list                 va_list or __gnuc_va_list
    _IO_BUFSIZ                  BUFSIZ
    _IO_cookie_io_functions_t   cookie_io_functions_t
    __io_read_fn                cookie_read_function_t
    __io_write_fn               cookie_write_function_t
    __io_seek_fn                cookie_seek_function_t
    __io_close_fn               cookie_close_function_t

  I used __fpos_t and __fpos64_t instead of fpos_t and fpos64_t because
  the definitions of fpos_t and fpos64_t depend on the largefile mode.
  I used __ssize_t and __gnuc_va_list in a handful of headers where
  namespace cleanliness might be relevant even though they're
  internal-use-only.  In all other cases, I used the public-namespace
  name.

  There is a tiny handful of places where I left a use of 'struct
  _IO_FILE' alone, because it was being used together with 'struct
  _IO_FILE_plus' or 'struct _IO_FILE_complete' in the same arithmetic
  expression.

  Because this patch was almost entirely done with search and replace, I
  may have introduced indentation botches.  I did proofread the diff, but
  I may have missed something.

  The ChangeLog below calls out all of the places where this was not a
  pure search-and-replace change.

  Installed stripped libraries and executables are unchanged by this
  patch, except that some assertions in vfscanf.c change line numbers.

  * libio/libio.h (_IO_FILE): Delete; all uses changed to FILE.
  (_IO_fpos_t): Delete; all uses changed to __fpos_t.
  (_IO_fpos64_t): Delete; all uses changed to __fpos64_t.
  (_IO_size_t): Delete; all uses changed to size_t.
  (_IO_ssize_t): Delete; all uses changed to ssize_t or __ssize_t.
  (_IO_off_t): Delete; all uses changed to off_t.
  (_IO_off64_t): Delete; all uses changed to off64_t.
  (_IO_pid_t): Delete; all uses changed to pid_t.
  (_IO_uid_t): Delete; all uses changed to uid_t.
  (_IO_wint_t): Delete; all uses changed to wint_t.
  (_IO_va_list): Delete; all uses changed to va_list or __gnuc_va_list.
  (_IO_BUFSIZ): Delete; all uses changed to BUFSIZ.
  (_IO_cookie_io_functions_t): Delete; all uses changed to
  cookie_io_functions_t.
  (__io_read_fn): Delete; all uses changed to cookie_read_function_t.
  (__io_write_fn): Delete; all uses changed to cookie_write_function_t.
  (__io_seek_fn): Delete; all uses changed to cookie_seek_function_t.
  (__io_close_fn): Delete; all uses changed to cookie_close_function_t.
  * libio/iofopncook.c: Remove unnecessary forward declarations.
  * libio/iolibio.h: Correct outdated commentary.
  * malloc/malloc.c (__malloc_stats): Remove unnecessary casts.
  * stdio-common/fxprintf.c (__fxprintf_nocancel): Remove unnecessary
  casts.
  * stdio-common/getline.c: Use _IO_getdelim directly.  Don't redefine
  ssize_t.
  * stdio-common/printf_fp.c, stdio-common/printf_fphex.c
  * stdio-common/printf_size.c: Don't redefine size_t or FILE.  Remove
  outdated comments.
  * stdio-common/vfscanf.c: Don't redefine va_list.

* [BZ #22830] malloc_stats: restore cancellation for stderr correctly. (Zack Weinberg, 2018-02-10; 1 file, -1/+1)

  malloc_stats means to disable cancellation for writes to stderr while
  it runs, but it restores stderr->_flags2 with |= instead of =, so what
  it actually does is disable cancellation on stderr permanently.

  [BZ #22830]
  * malloc/malloc.c (__malloc_stats): Restore stderr->_flags2 correctly.
  * malloc/tst-malloc-stats-cancellation.c: New test case.
  * malloc/Makefile: Add new test case.

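The bug and its fix fit in a couple of lines of __malloc_stats; roughly:

    int old_flags2 = stderr->_flags2;
    stderr->_flags2 |= _IO_FLAGS2_NOTCANCEL;  /* suppress cancellation */
    /* ... print the statistics ... */
    stderr->_flags2 = old_flags2;  /* fix: `|=` here could never clear
                                      _IO_FLAGS2_NOTCANCEL again */
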
* malloc: Use assert.h's assert macro (Samuel Thibault, 2018-01-29; 1 file, -7/+4)

  This avoids assert definition conflicts if some of the headers used by
  malloc.c happen to include assert.h.  Malloc still needs a
  malloc-avoiding implementation, which we get by redirecting
  __assert_fail to malloc's __malloc_assert.

  * malloc/malloc.c: Include <assert.h>.
  (assert): Do not define.
  [!defined NDEBUG] (__assert_fail): Define to __malloc_assert.

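A sketch of the arrangement (the __malloc_assert body predates this
commit; the new part is including <assert.h> and overriding
__assert_fail instead of redefining assert itself):

    #include <assert.h>

    /* malloc cannot report assertion failures through the ordinary
       machinery, which may itself allocate; route them to a
       malloc-safe reporter instead.  */
    #ifndef NDEBUG
    # define __assert_fail(assertion, file, line, function)        \
             __malloc_assert (assertion, file, line, function)

    extern const char *__progname;

    static void
    __malloc_assert (const char *assertion, const char *file,
                     unsigned int line, const char *function)
    {
      (void) __fxprintf (NULL, "%s%s%s:%u: %s%sAssertion `%s' failed.\n",
                         __progname, __progname[0] ? ": " : "",
                         file, line,
                         function ? function : "", function ? ": " : "",
                         assertion);
      fflush (stderr);
      abort ();
    }
    #endif
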
* Fix integer overflows in internal memalign and malloc functions [BZ #22343] (Arjun Shankar, 2018-01-18; 1 file, -8/+22)

  When posix_memalign is called with an alignment less than
  MALLOC_ALIGNMENT and a requested size close to SIZE_MAX, it falls back
  to malloc code (because the alignment of a block returned by malloc is
  sufficient to satisfy the call).  In this case, an integer overflow in
  _int_malloc leads to posix_memalign incorrectly returning successfully.

  Upon fixing this and writing a somewhat thorough regression test, it
  was discovered that when posix_memalign is called with an alignment
  larger than MALLOC_ALIGNMENT (so it uses _int_memalign instead) and a
  requested size close to SIZE_MAX, a different integer overflow in
  _int_memalign leads to posix_memalign incorrectly returning
  successfully.

  Both integer overflows affect other memory allocation functions that
  use _int_malloc (one affected malloc on x86) or _int_memalign as well.

  This commit fixes both integer overflows.  In addition to this, it adds
  a regression test to guard against false successful allocations by the
  following memory allocation functions when called with too-large
  allocation sizes and, where relevant, various valid alignments: malloc,
  realloc, calloc, reallocarray, memalign, posix_memalign, aligned_alloc,
  valloc, and pvalloc.

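Two guards of the kind described, in sketch form consistent with the
commit message (exact placement paraphrased):

    /* _int_malloc: convert the request with an overflow check; the
       unchecked request2size could wrap for sizes near SIZE_MAX.  */
    checked_request2size (bytes, nb);

    /* _int_memalign: nb + alignment + MINSIZE is requested below, so
       reject anything that would make that sum wrap.  */
    if (nb > SIZE_MAX - alignment - MINSIZE)
      {
        __set_errno (ENOMEM);
        return 0;
      }
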
* malloc: Ensure that the consolidated fast chunk has a sane size. (Istvan Kurucsai, 2018-01-12; 1 file, -0/+6)

* Update copyright dates with scripts/update-copyrights. (Joseph Myers, 2018-01-01; 1 file, -1/+1)

  * All files with FSF copyright notices: Update copyright dates using
  scripts/update-copyrights.
  * locale/programs/charmap-kw.h: Regenerated.
  * locale/programs/locfile-kw.h: Likewise.

* Fix integer overflow in malloc when tcache is enabled [BZ #22375] (Arjun Shankar, 2017-11-30; 1 file, -1/+2)

  When the per-thread cache is enabled, __libc_malloc uses request2size
  (which does not perform an overflow check) to calculate the chunk size
  from the requested allocation size.  This leads to an integer overflow
  causing malloc to incorrectly return the last successfully allocated
  block when called with a very large size argument (close to SIZE_MAX).

  This commit uses checked_request2size instead, removing the overflow.

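The fix is a one-line substitution in __libc_malloc's tcache path; at
the time checked_request2size was a macro that sets ENOMEM and returns
NULL for out-of-range requests:

    /* __libc_malloc: compute the tcache bin from a checked size.  */
    size_t tbytes;
    checked_request2size (bytes, tbytes);  /* was: request2size (bytes),
                                              which silently wraps */
    size_t tc_idx = csize2tidx (tbytes);
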
* malloc: Call tcache destructor in arena_thread_freeres (Florian Weimer, 2017-11-23; 1 file, -7/+16)

  It does not make sense to register separate cleanup functions for arena
  and tcache since they're always going to be called together.  Call the
  tcache cleanup function from within arena_thread_freeres since it at
  least makes the order of those cleanups clear in the code.

  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>

* malloc: Account for all heaps in an arena in malloc_info [BZ #22439] (Florian Weimer, 2017-11-15; 1 file, -4/+13)

  This commit adds a "subheaps" field to the malloc_info output that
  shows the number of heaps that were allocated to extend a non-main
  arena.

  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>

* malloc: Add missing arena lock in malloc_info [BZ #22408] (Florian Weimer, 2017-11-15; 1 file, -4/+12)

  Obtain the size information while the arena lock is acquired, and only
  print it later.

* Add single-threaded path to _int_malloc (Wilco Dijkstra, 2017-10-24; 1 file, -25/+38)

  This patch adds single-threaded fast paths to _int_malloc.

  * malloc/malloc.c (_int_malloc): Add SINGLE_THREAD_P path.

* Add single-threaded path to malloc/realloc/calloc/memalign (Wilco Dijkstra, 2017-10-24; 1 file, -9/+41)

  This patch adds a single-threaded fast path to malloc, realloc, calloc
  and memalign.  When we're single-threaded, we can bypass arena_get
  (which always locks the arena it returns) and just use the main arena.
  Also avoid retrying a different arena since there is just the main
  arena.

  * malloc/malloc.c (__libc_malloc): Add SINGLE_THREAD_P path.
  (__libc_realloc): Likewise.
  (_mid_memalign): Likewise.
  (__libc_calloc): Likewise.

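The __libc_malloc fast path is representative of the change (the other
entry points follow the same pattern):

    /* While single-threaded, skip arena_get's locking entirely and
       serve the request from the main arena.  */
    if (SINGLE_THREAD_P)
      {
        victim = _int_malloc (&main_arena, bytes);
        assert (!victim || chunk_is_mmapped (mem2chunk (victim))
                || &main_arena == arena_for_chunk (mem2chunk (victim)));
        return victim;
      }
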
* Fix build issue with SINGLE_THREAD_P (Wilco Dijkstra, 2017-10-20; 1 file, -0/+3)

  Add a sysdep-cancel.h include.

  * malloc/malloc.c (sysdep-cancel.h): Add include.

* Add single-threaded path to _int_free (Wilco Dijkstra, 2017-10-20; 1 file, -14/+29)

  This patch adds single-threaded fast paths to _int_free.  Bypass the
  explicit locking for larger allocations.

  * malloc/malloc.c (_int_free): Add SINGLE_THREAD_P fast paths.

* Fix deadlock in _int_free consistency check (Wilco Dijkstra, 2017-10-19; 1 file, -9/+12)

  This patch fixes a deadlock in the fastbin consistency check.  If we
  fail the fast check due to concurrent modifications to the next chunk
  or system_mem, we should not lock if we already have the arena lock.
  Simplify the check to make it obviously correct.

  * malloc/malloc.c (_int_free): Fix deadlock bug in consistency check.

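A sketch of the simplified fastbin check after the fix (as in
_int_free):

    if (__glibc_unlikely (chunksize_nomask (chunk_at_offset (p, size))
                            <= 2 * SIZE_SZ)
        || __glibc_unlikely (chunksize (chunk_at_offset (p, size))
                             >= av->system_mem))
      {
        bool fail = true;
        /* Without the lock, concurrent updates of system_mem or the
           next chunk can yield a false positive; retest under the
           lock, but only if we do not hold it already.  */
        if (!have_lock)
          {
            __libc_lock_lock (av->mutex);
            fail = (chunksize_nomask (chunk_at_offset (p, size)) <= 2 * SIZE_SZ
                    || chunksize (chunk_at_offset (p, size)) >= av->system_mem);
            __libc_lock_unlock (av->mutex);
          }
        if (fail)
          malloc_printerr ("free(): invalid next size (fast)");
      }
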
* Fix build failure on tilepro due to unsupported atomics (Wilco Dijkstra, 2017-10-18; 1 file, -1/+2)

  * malloc/malloc.c (malloc_state): Use int for have_fastchunks since
  not all targets support atomics on bool.

* Improve malloc initialization sequence (Wilco Dijkstra, 2017-10-17; 1 file, -109/+88)

  The current malloc initialization is quite convoluted.  Instead of
  sometimes calling malloc_consolidate from ptmalloc_init, call
  malloc_init_state early so that the main_arena is always initialized.
  The special initialization can now be removed from malloc_consolidate.
  This also fixes BZ #22159.

  Check all calls to malloc_consolidate and remove calls that are
  redundant initialization after ptmalloc_init, like in int_mallinfo and
  __libc_mallopt (but keep the latter as consolidation is required for
  set_max_fast).  Update comments to improve clarity.

  Remove the impossible initialization check from _int_malloc, fix the
  assert in do_check_malloc_state to ensure arena->top != 0.  Fix the
  obvious bugs in do_check_free_chunk and do_check_remalloced_chunk to
  enable single-threaded malloc debugging (do_check_malloc_state is not
  thread-safe!).

  [BZ #22159]
  * malloc/arena.c (ptmalloc_init): Call malloc_init_state.
  * malloc/malloc.c (do_check_free_chunk): Fix build bug.
  (do_check_remalloced_chunk): Fix build bug.
  (do_check_malloc_state): Add assert that checks arena->top.
  (malloc_consolidate): Remove initialization.
  (int_mallinfo): Remove call to malloc_consolidate.
  (__libc_mallopt): Clarify why malloc_consolidate is needed.

* Use relaxed atomics for malloc have_fastchunks (Wilco Dijkstra, 2017-10-17; 1 file, -32/+20)

  Currently free typically uses 2 atomic operations per call.  The
  have_fastchunks flag indicates whether there are recently freed blocks
  in the fastbins.  This is purely an optimization to avoid calling
  malloc_consolidate too often and avoiding the overhead of walking all
  fast bins even if all are empty during a sequence of allocations.
  However using catomic_or to update the flag is completely unnecessary
  since it can be changed into a simple boolean and accessed using
  relaxed atomics.  There is no change in multi-threaded behaviour given
  the flag is already approximate (it may be set when there are no blocks
  in any fast bins, or it may be clear when there are free blocks that
  could be consolidated).

  Performance of malloc/free improves by 27% on a simple benchmark on
  AArch64 (both single and multithreaded).  The number of load/store
  exclusive instructions is reduced by 33%.  Bench-malloc-thread speeds
  up by ~3% in all cases.

  * malloc/malloc.c (FASTCHUNKS_BIT): Remove.
  (have_fastchunks): Remove.
  (clear_fastchunks): Remove.
  (set_fastchunks): Remove.
  (malloc_state): Add have_fastchunks.
  (malloc_init_state): Use have_fastchunks.
  (do_check_malloc_state): Remove incorrect invariant checks.
  (_int_malloc): Use have_fastchunks.
  (_int_free): Likewise.
  (malloc_consolidate): Likewise.

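Both sides of the flag after the change, in sketch form:

    /* _int_free: note that a fast bin gained a chunk.  A relaxed store
       suffices; the flag is only a consolidation hint.  */
    atomic_store_relaxed (&av->have_fastchunks, true);

    /* _int_malloc and friends: consolidate only when the hint is set.  */
    if (atomic_load_relaxed (&av->have_fastchunks))
      malloc_consolidate (av);
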
* Inline tcache functions (Wilco Dijkstra, 2017-10-17; 1 file, -2/+2)

  The functions tcache_get and tcache_put show up in profiles as they are
  a critical part of the tcache code.  Inline them to give tcache a 16%
  performance gain.  Since this improves multi-threaded cases as well, it
  helps offset any potential performance loss due to adding
  single-threaded fast paths.

  * malloc/malloc.c (tcache_put): Inline.
  (tcache_get): Inline.

* malloc: Fix tcache leak after thread destruction [BZ #22111] (Carlos O'Donell, 2017-10-06; 1 file, -3/+5)

  The malloc tcache added in 2.26 will leak all of the elements remaining
  in the cache and the cache structure itself when a thread exits.

  The defect is that we do not set tcache_shutting_down early enough, and
  the thread simply recreates the tcache and places the elements back
  onto a new tcache which is subsequently lost as the thread exits
  (unfreed memory).  The fix is relatively simple: move the setting of
  tcache_shutting_down earlier in tcache_thread_freeres.

  We add a test case which uses mallinfo and some heuristics to look for
  unaccounted-for memory usage between the start and end of a thread
  start/join loop.  It is very reliable at detecting a leak given the
  number of iterations.  Without the fix the test will consume 122 MiB of
  leaked memory.

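A condensed sketch of the corrected destructor; the essential part is
that both assignments happen before any __libc_free call:

    static void
    tcache_thread_freeres (void)
    {
      tcache_perthread_struct *tcache_tmp = tcache;

      if (!tcache)
        return;

      /* The fix: disable this thread's tcache and forbid
         reinitialization *before* freeing entries, so the frees below
         cannot repopulate a fresh cache that would then leak.  */
      tcache = NULL;
      tcache_shutting_down = true;

      for (int i = 0; i < TCACHE_MAX_BINS; ++i)
        while (tcache_tmp->entries[i])
          {
            tcache_entry *e = tcache_tmp->entries[i];
            tcache_tmp->entries[i] = e->next;
            __libc_free (e);
          }

      __libc_free (tcache_tmp);
    }
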
* malloc: Remove the internal_function attribute (Florian Weimer, 2017-08-31; 1 file, -13/+4)

* malloc: Resolve compilation failure in NDEBUG mode (Florian Weimer, 2017-08-31; 1 file, -18/+7)

  In _int_free, the locked variable is not used if NDEBUG is defined.

* malloc: Change top_check return type to void (Florian Weimer, 2017-08-31; 1 file, -1/+1)

  After commit ec2c1fcefb200c6cb7e09553f3c6af8815013d83 ("malloc: Abort
  on heap corruption, without a backtrace"), the function always
  returns 0.

* malloc: Remove corrupt arena flag (Florian Weimer, 2017-08-30; 1 file, -13/+0)

  This is no longer needed because we now abort immediately once heap
  corruption is detected.

* malloc: Remove check_action variable [BZ #21754] (Florian Weimer, 2017-08-30; 1 file, -122/+30)

  Clean up calls to malloc_printerr and trim its argument list.  This
  also removes a few bits of work done before calling malloc_printerr
  (such as unlocking operations).

  The tunable/environment variable still enables the lightweight
  additional malloc checking, but mallopt (M_CHECK_ACTION) no longer has
  any effect.

* malloc: Abort on heap corruption, without a backtrace [BZ #21754] (Florian Weimer, 2017-08-30; 1 file, -19/+4)

  The stack trace printing caused deadlocks and has itself been targeted
  by code execution exploits.

* malloc: Avoid optimizer warning with GCC 7 and -O3 (Florian Weimer, 2017-08-10; 1 file, -4/+16)

* Avoid backtrace from __stack_chk_fail [BZ #12189] (H.J. Lu, 2017-07-11; 1 file, -2/+4)

  __stack_chk_fail is called on a corrupted stack, and a stack backtrace
  is very unreliable against a corrupted stack.  __libc_message is
  changed to accept enum __libc_message_action and call BEFORE_ABORT only
  if the action includes do_backtrace.  __fortify_fail_abort is added to
  avoid a backtrace from __stack_chk_fail.

  [BZ #12189]
  * debug/Makefile (CFLAGS-tst-ssp-1.c): New.
  (tests): Add tst-ssp-1 if -fstack-protector works.
  * debug/fortify_fail.c: Include <stdbool.h>.
  (_fortify_fail_abort): New function.
  (__fortify_fail): Call _fortify_fail_abort.
  (__fortify_fail_abort): Add a hidden definition.
  * debug/stack_chk_fail.c: Include <stdbool.h>.
  (__stack_chk_fail): Call __fortify_fail_abort, instead of
  __fortify_fail.
  * debug/tst-ssp-1.c: New file.
  * include/stdio.h (__libc_message_action): New enum.
  (__libc_message): Replace int with enum __libc_message_action.
  (__fortify_fail_abort): New hidden prototype.
  * malloc/malloc.c (malloc_printerr): Update __libc_message calls.
  * sysdeps/posix/libc_fatal.c (__libc_message): Replace int with enum
  __libc_message_action.  Call BEFORE_ABORT only if action includes
  do_backtrace.
  (__libc_fatal): Update __libc_message call.

* Add per-thread cache to malloc (DJ Delorie, 2017-07-06; 1 file, -9/+341)

  * config.make.in: Enable experimental malloc option.
  * configure.ac: Likewise.
  * configure: Regenerate.
  * manual/install.texi: Document it.
  * INSTALL: Regenerate.
  * malloc/Makefile: Likewise.
  * malloc/malloc.c: Add per-thread cache (tcache).
  (tcache_put): New.
  (tcache_get): New.
  (tcache_thread_freeres): New.
  (tcache_init): New.
  (__libc_malloc): Use cached chunks if available.
  (__libc_free): Initialize tcache if needed.
  (__libc_realloc): Likewise.
  (__libc_calloc): Likewise.
  (_int_malloc): Prefill tcache when appropriate.
  (_int_free): Likewise.
  (do_set_tcache_max): New.
  (do_set_tcache_count): New.
  (do_set_tcache_unsorted_limit): New.
  * manual/probes.texi: Document new probes.
  * malloc/arena.c: Add new tcache tunables.
  * elf/dl-tunables.list: Likewise.
  * manual/tunables.texi: Document them.
  * NEWS: Mention the per-thread cache.

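The heart of the feature is a pair of tiny per-thread operations; a
simplified sketch (asserts and bounds checks omitted):

    /* Each free chunk parked in the tcache doubles as a list node.  */
    typedef struct tcache_entry
    {
      struct tcache_entry *next;
    } tcache_entry;

    static void
    tcache_put (mchunkptr chunk, size_t tc_idx)
    {
      tcache_entry *e = (tcache_entry *) chunk2mem (chunk);
      e->next = tcache->entries[tc_idx];
      tcache->entries[tc_idx] = e;
      ++(tcache->counts[tc_idx]);
    }

    static void *
    tcache_get (size_t tc_idx)
    {
      tcache_entry *e = tcache->entries[tc_idx];
      tcache->entries[tc_idx] = e->next;
      --(tcache->counts[tc_idx]);
      return (void *) e;
    }

Because both operations touch only thread-local state, the common
malloc/free pair needs no locking at all, which is where the speedup
comes from.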