Commit log: malloc/malloc.c
Each entry gives the commit subject, author, date, and the line counts changed in this file (-removed/+added).
* Make __getrandom_nocancel set errno and add a _nostatus version (Xi Ruoyao, 2024-01-12, -1/+3)

The __getrandom_nocancel function returns errors as negative values instead of setting errno. This is inconsistent with other _nocancel functions, and it breaks "TEMP_FAILURE_RETRY (__getrandom_nocancel (p, n, 0))" in __arc4random_buf. Use INLINE_SYSCALL_CALL instead of INTERNAL_SYSCALL_CALL to fix this issue.

But __getrandom_nocancel has been avoiding touching errno for a reason (see BZ 29624), so add a __getrandom_nocancel_nostatus function and use it in tcache_key_initialize.

Signed-off-by: Xi Ruoyao <xry111@xry111.site>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Signed-off-by: Andreas K. Hüttel <dilfridge@gentoo.org>
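For context, TEMP_FAILURE_RETRY (from <unistd.h> under _GNU_SOURCE) retries only while the wrapped call returns -1 with errno set to EINTR, so a callee that reports errors as negative return values without touching errno defeats it. A simplified sketch of the macro's logic:

```c
#include <errno.h>

/* Simplified from glibc's TEMP_FAILURE_RETRY: retry only while the
   expression yields -1 *and* errno == EINTR.  A callee that returns
   -EINTR and leaves errno untouched falls straight through, so the
   caller never retries and misreads the negative errno as a result.  */
#define RETRY(expr)                              \
  ({ long __r;                                   \
     do __r = (long) (expr);                     \
     while (__r == -1L && errno == EINTR);       \
     __r; })
```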
* Update copyright dates with scripts/update-copyrights (Paul Eggert, 2024-01-01, -1/+1)
* malloc: Decorate malloc maps (Adhemerval Zanella, 2023-11-07, -0/+5)

Add anonymous mmap annotations on loader malloc, on malloc when it allocates memory with mmap, and on the malloc arena. /proc/self/maps will now print:

  [anon: glibc: malloc arena]
  [anon: glibc: malloc]
  [anon: glibc: loader malloc]

On arena allocation, glibc annotates only the read/write mapping.

Checked on x86_64-linux-gnu and aarch64-linux-gnu.

Reviewed-by: DJ Delorie <dj@redhat.com>
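The labels rely on the Linux PR_SET_VMA_ANON_NAME prctl (Linux 5.17 and later); a minimal sketch of the call, with name_mapping as an illustrative helper name:

```c
#include <stddef.h>
#include <sys/prctl.h>

#ifndef PR_SET_VMA                      /* from <linux/prctl.h> */
# define PR_SET_VMA 0x53564d41
# define PR_SET_VMA_ANON_NAME 0
#endif

/* Best-effort: name an anonymous mapping so it appears in
   /proc/self/maps as "[anon: <name>]".  Kernels built without
   CONFIG_ANON_VMA_NAME reject this with EINVAL, which is harmless.  */
static void
name_mapping (void *addr, size_t len, const char *name)
{
  prctl (PR_SET_VMA, PR_SET_VMA_ANON_NAME,
         (unsigned long) addr, len, (unsigned long) name);
}
```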
* malloc: Remove bin scanning from memalign (bug 30723) (Florian Weimer, 2023-08-15, -164/+5)

On the test workload (mpv --cache=yes with VP9 video decoding), the bin scanning has a very poor success rate (less than 2%). The tcache scanning has about 50% success rate, so keep that. Update comments in malloc/tst-memalign-2 to indicate the purpose of the tests.

Even with the scanning removed, the additional merging opportunities since commit 542b1105852568c3ebc712225ae78b ("malloc: Enable merging of remainders in memalign (bug 30723)") are sufficient to pass the existing large bins test.

Remove leftover variables from _int_free from refactoring in the same commit.

Reviewed-by: DJ Delorie <dj@redhat.com>
* malloc: Enable merging of remainders in memalign (bug 30723) (Florian Weimer, 2023-08-11, -76/+121)

Previously, calling _int_free from _int_memalign could put remainders into the tcache or into fastbins, where they are invisible to the low-level allocator. This results in missed merge opportunities because once these freed chunks become available to the low-level allocator, further memalign allocations (even of the same size) are likely obstructing merges.

Furthermore, during forwards merging in _int_memalign, do not completely give up when the remainder is too small to serve as a chunk on its own. We can still give it back if it can be merged with the following unused chunk. This makes it more likely that memalign calls in a loop achieve a compact memory layout, independently of initial heap layout.

Drop some useless (unsigned long) casts along the way, and tweak the style to more closely match GNU on changed lines.

Reviewed-by: DJ Delorie <dj@redhat.com>
* realloc: Limit chunk reuse to only growing requests [BZ #30579] (Siddhesh Poyarekar, 2023-07-06, -8/+15)

The trim_threshold is too aggressive a heuristic to decide if chunk reuse is OK for reallocated memory; for repeated small, shrinking allocations it leads to internal fragmentation and for repeated larger allocations that fragmentation may blow up even worse due to the dynamic nature of the threshold.

Limit reuse only when it is within the alignment padding, which is 2 * size_t for heap allocations and a page size for mmapped allocations. There's the added wrinkle of THP, but this fix ignores it for now, pessimizing that case in favor of keeping fragmentation low.

This resolves BZ #30579.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reported-by: Nicolas Dusart <nicolas@freedelity.be>
Reported-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
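A sketch of the resulting shrink-in-place rule (names and structure are illustrative, not glibc's internal code):

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative: on shrink, reuse the existing chunk only when the
   slack stays within the alignment padding described above --
   2 * sizeof (size_t) for heap chunks, one page for mmapped chunks.  */
static bool
shrink_in_place_ok (size_t old_size, size_t new_size,
                    bool is_mmapped, size_t pagesize)
{
  if (new_size > old_size)
    return false;                       /* growing: must reallocate */
  size_t pad = is_mmapped ? pagesize : 2 * sizeof (size_t);
  return old_size - new_size <= pad;    /* slack within padding only */
}
```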
* Fix all the remaining misspellings -- BZ 25337 (Paul Pluzhnikov, 2023-06-02, -8/+8)
* aligned_alloc: conform to C17 (DJ Delorie, 2023-05-08, -3/+23)

This patch adds strict checking for power-of-two alignments in aligned_alloc() and updates the manual accordingly.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
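A quick demonstration of the stricter check (errno follows the usual glibc EINVAL convention for rejected arguments):

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main (void)
{
  /* 24 is not a power of two: the call now fails with NULL/EINVAL
     instead of the alignment being silently adjusted.  */
  void *p = aligned_alloc (24, 128);
  if (p == NULL)
    printf ("aligned_alloc(24, 128): %s\n", strerror (errno));

  void *q = aligned_alloc (32, 128);    /* power of two: succeeds */
  free (q);
  free (p);                             /* free(NULL) is a no-op */
  return 0;
}
```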
* malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101) (DJ Delorie, 2023-04-18, -76/+81)

Based on these comments in malloc.c:

  size field is or'ed with NON_MAIN_ARENA if the chunk was obtained
  from a non-main arena.  This is only set immediately before handing
  the chunk to the user, if necessary.

  The NON_MAIN_ARENA flag is never set for unsorted chunks, so it
  does not have to be taken into account in size comparisons.

When we pull a chunk off the unsorted list (or any list), we need to make sure that flag is set properly before returning the chunk.

Also: use the rounded-up size for chunk_ok_for_memalign(); do not scan the arena for reusable chunks if there's no arena; account for chunk overhead when determining if a chunk is a reuse candidate; and skip the mcheck variants of the memalign tests, since mcheck interferes with memalign.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
* memalign: Support scanning for aligned chunks. (DJ Delorie, 2023-03-29, -27/+233)

This patch adds a chunk scanning algorithm to the _int_memalign code path that reduces heap fragmentation by reusing already aligned chunks instead of always looking for chunks of larger sizes and splitting them. The tcache macros are extended to allow removing a chunk from the middle of the list.

The goal is to fix the pathological use cases where heaps grow continuously in workloads that are heavy users of memalign.

Note that tst-memalign-2 checks for tcache operation, which malloc-check bypasses.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Remove --enable-tunables configure option (Adhemerval Zanella Netto, 2023-03-29, -11/+3)

And make tunables always supported. The configure option was added in glibc 2.25, and some features require tunables (such as the hwcap mask, huge pages support, and lock elision tuning). Making them unconditional also simplifies the build permutations.

Changes from v1:
* Remove the glibc.rtld.dynamic_sort changes; they are orthogonal and need more discussion.
* Clean up more code.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* malloc: Fix transposed arguments in sysmalloc_mmap_fallback call (Robert Morell, 2023-03-08, -2/+2)

git commit 0849eed45daa ("malloc: Move MORECORE fallback mmap to sysmalloc_mmap_fallback") moved a block of code from sysmalloc to a new helper function sysmalloc_mmap_fallback(), but 'pagesize' is used for the 'minsize' argument and 'MMAP_AS_MORECORE_SIZE' for the 'pagesize' argument.

Fixes: 0849eed45daa ("malloc: Move MORECORE fallback mmap to sysmalloc_mmap_fallback")
Signed-off-by: Robert Morell <rmorell@nvidia.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* malloc: remove redundant check of unsorted bin corruption (Ayush Mittal, 2023-02-22, -2/+0)

* malloc/malloc.c (_int_malloc): remove redundant check of unsorted bin corruption

With commit b90ddd08f6dd688e651df9ee89ca3a69ff88cd0c ("malloc: Additional checks for unsorted bin integrity"), the same check of (bck->fd != victim) was added before the check for unsorted chunk corruption introduced in bdc3009b8ff0effdbbfb05eb6b10966753cbf9b8 ("Added check before removing from unsorted list"):

  3773  if (__glibc_unlikely (bck->fd != victim)
  3774      || __glibc_unlikely (victim->fd != unsorted_chunks (av)))
  3775    malloc_printerr ("malloc(): unsorted double linked list corrupted");
  ...
  3815  /* remove from unsorted list */
  3816  if (__glibc_unlikely (bck->fd != victim))
  3817    malloc_printerr ("malloc(): corrupted unsorted chunks 3");
  3818  unsorted_chunks (av)->bk = bck;

So this extra check can be removed.

Signed-off-by: Maninder Singh <maninder1.s@samsung.com>
Signed-off-by: Ayush Mittal <ayush.m@samsung.com>
Reviewed-by: DJ Delorie <dj@redhat.com>
* Update copyright dates with scripts/update-copyrights (Joseph Myers, 2023-01-06, -1/+1)
* realloc: Return unchanged if request is within usable size (Siddhesh Poyarekar, 2022-12-08, -0/+10)

If there is enough space in the chunk to satisfy the new size, return the old pointer as is, thus avoiding any locks or reallocations. The only real place this has a benefit is in large chunks that tend to get satisfied with mmap, since there is a large enough spare size (up to a page) for it to matter. For allocations on the heap, the extra size is typically barely a few bytes (up to 15) and it's unlikely that it would make much difference in performance.

Also added a smoke test to ensure that the old pointer is returned unchanged if the new size to realloc is within the usable size of the old pointer.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
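In the spirit of the smoke test the commit describes, a small check (the large size makes an mmapped chunk likely, but that is not guaranteed):

```c
#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int main (void)
{
  void *p = malloc (4 * 1024 * 1024);   /* large: likely mmapped */
  size_t usable = malloc_usable_size (p);

  /* A shrink that stays within the usable size should come back as
     the same pointer, with no locking or remapping.  */
  void *q = realloc (p, usable - 16);
  printf ("%s\n", q == p ? "pointer unchanged" : "pointer moved");
  free (q);
  return 0;
}
```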
* malloc: Switch global_max_fast to uint8_t (Florian Weimer, 2022-10-13, -1/+1)

MAX_FAST_SIZE is 160 at most, so a uint8_t is sufficient. This makes it harder to use memory corruption (overwriting global_max_fast with a large value) to fundamentally alter malloc behavior.

Reviewed-by: DJ Delorie <dj@redhat.com>
* Use atomic_exchange_release/acquire (Wilco Dijkstra, 2022-09-26, -1/+1)

Rename atomic_exchange_rel/acq to use atomic_exchange_release/acquire since these map to the standard C11 atomic builtins.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* malloc: Print error when oldsize is not equal to the current size. (Qingqing Li, 2022-09-22, -1/+2)

This is used to detect errors early. The read of the oldsize is not protected by any lock, so check this value to avoid causing bigger mistakes.

Reviewed-by: DJ Delorie <dj@redhat.com>
* Use C11 atomics instead of atomic_decrement(_val) (Wilco Dijkstra, 2022-09-09, -1/+1)

Replace atomic_decrement and atomic_decrement_val with atomic_fetch_add_relaxed.

Reviewed-by: DJ Delorie <dj@redhat.com>
* Use C11 atomics instead of atomic_add(_zero) (Wilco Dijkstra, 2022-09-09, -1/+1)

Replace atomic_add and atomic_add_zero with atomic_fetch_add_relaxed.

Reviewed-by: DJ Delorie <dj@redhat.com>
* malloc: Use C11 atomics rather than atomic_exchange_and_add (Wilco Dijkstra, 2022-09-06, -3/+3)

Replace a few counters using atomic_exchange_and_add with atomic_fetch_add_relaxed.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
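These cleanups all map glibc's legacy internal atomic macros onto the standard C11 builtins; in plain C11 the replacement pattern looks like this (illustrative counter, not glibc code):

```c
#include <stdatomic.h>

static _Atomic long total_mmapped;   /* illustrative statistics counter */

static void
account_mmap (long nbytes)
{
  /* Old style: atomic_exchange_and_add (&total_mmapped, nbytes);
     C11 style: a relaxed fetch-add suffices for counters that need
     atomicity but no ordering guarantees.  glibc's internal
     atomic_fetch_add_relaxed wraps exactly this builtin.  */
  atomic_fetch_add_explicit (&total_mmapped, nbytes,
                             memory_order_relaxed);
}
```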
* malloc: Do not use MAP_NORESERVE to allocate heap segments (Florian Weimer, 2022-08-15, -4/+0)

Address space for heap segments is reserved in an mmap call with MAP_ANONYMOUS | MAP_PRIVATE and protection flags PROT_NONE. This reservation does not count against the RSS limit of the process or system. Backing memory is allocated using mprotect in alloc_new_heap and grow_heap, and at this point, the allocator expects the kernel to provide memory (subject to memory overcommit).

The SIGSEGV that might be generated due to MAP_NORESERVE (according to the mmap manual page) does not seem to occur in practice; it's always SIGKILL from the OOM killer. Even if there is a way that SIGSEGV could be generated, it is confusing to applications that this only happens for secondary heaps, not for large mmap-based allocations, and not for the main arena.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
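A sketch of the reserve-then-commit pattern the commit describes (RESERVE_SIZE and the helper names are illustrative):

```c
#include <stddef.h>
#include <sys/mman.h>

#define RESERVE_SIZE (64 * 1024 * 1024)   /* illustrative */

/* Reserve address space only: PROT_NONE costs no backing memory.  */
static void *
reserve_heap (void)
{
  return mmap (NULL, RESERVE_SIZE, PROT_NONE,
               MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
}

/* Commit pages on demand.  Under memory pressure this either fails
   cleanly or the process is later OOM-killed; there is no
   MAP_NORESERVE-style surprise SIGSEGV path.  */
static int
commit_pages (void *addr, size_t len)
{
  return mprotect (addr, len, PROT_READ | PROT_WRITE);
}
```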
* assert: Do not use stderr in libc-internal assert (Florian Weimer, 2022-08-03, -16/+0)

Redirect internal assertion failures to __libc_assert_fail, based on __libc_message, which writes directly to STDERR_FILENO and calls abort. Also disable message translation and reword the error message slightly (adjusting stdlib/tst-bz20544 accordingly).

As a result of these changes, malloc no longer needs its own redefinition of __assert_fail.

__libc_assert_fail needs to be stubbed out during rtld dependency analysis because the rtld rebuilds turn __libc_assert_fail into __assert_fail, which is unconditionally provided by elf/dl-minimal.c.

This change is not possible for the public assert macro and its __assert_fail function because POSIX requires that the diagnostic is written to stderr.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* stdio: Clean up __libc_message after unconditional abort (Florian Weimer, 2022-08-03, -3/+2)

Since commit ec2c1fcefb200c6cb7e09553f3c6af8815013d83 ("malloc: Abort on heap corruption, without a backtrace [BZ #21754]"), __libc_message always terminates the process. Since commit a289ea09ea843ced6e5277c2f2e63c357bc7f9a3 ("Do not print backtraces on fatal glibc errors"), the backtrace facility has been removed. Therefore, remove enum __libc_message_action and the action argument of __libc_message, and mark __libc_message as _Noreturn.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* malloc: Use __getrandom_nocancel during tcache initialization (Florian Weimer, 2022-08-01, -1/+2)

Cancellation currently cannot happen at this point because dlopen as used by the unwind link always performs additional allocations for libgcc_s.so.1, even if it has been loaded already as a dependency of the main executable. But it seems prudent not to rely on this quirk.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* malloc: Simplify implementation of __malloc_assert (Florian Weimer, 2022-07-21, -10/+5)

It is prudent not to run too much code after detecting heap corruption, and __fxprintf is really complex. The line number and file name do not carry much information, so they are not included in the error message. (__libc_message only supports %s formatting.) The function name and assertion should provide some context.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* malloc: Simplify checked_request2size interface (Florian Weimer, 2022-07-05, -14/+16)

In-band signaling avoids an uninitialized variable warning when building with -Og and GCC 12.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
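An illustrative sketch of the in-band convention (not glibc's exact code): the helper returns 0 for oversized requests instead of reporting failure through a separate out-parameter, so no caller can ever read an uninitialized size:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative: pad a request to chunk granularity, signaling
   "too large" in-band with a 0 return rather than through a
   bool-plus-out-parameter pair, whose failure path GCC 12 -Og
   flags as possibly uninitialized.  */
static inline size_t
request2size_checked (size_t req)
{
  if (req > PTRDIFF_MAX)
    return 0;                                  /* in-band failure */
  size_t align = 2 * sizeof (size_t);          /* illustrative */
  size_t sz = (req + sizeof (size_t) + align - 1) & ~(align - 1);
  return sz < 4 * sizeof (size_t) ? 4 * sizeof (size_t) : sz;
}
```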
* malloc: Fix duplicate inline for do_set_mxfast (Adhemerval Zanella, 2022-03-23, -2/+1)
* Update copyright dates with scripts/update-copyrights (Paul Eggert, 2022-01-01, -1/+1)

I used these shell commands:

  ../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
  (cd ../glibc && git commit -am"[this commit message]")

and then ignored the output, which consisted of lines saying "FOO: warning: copyright statement not found" for each of 7061 files FOO.

I then removed trailing white space from math/tgmath.h, support/tst-support-open-dev-null-range.c, and sysdeps/x86_64/multiarch/strlen-vec.S, to work around the following obscure pre-commit check failure diagnostics from Savannah. I don't know why I run into these diagnostics whereas others evidently do not.

  remote: *** 912-#endif
  remote: *** 913:
  remote: *** 914-
  remote: *** error: lines with trailing whitespace found
  ...
  remote: *** error: sysdeps/unix/sysv/linux/statx_cp.c: trailing lines
* Remove upper limit on tunable MALLOC_MMAP_THRESHOLD (Patrick McGehearty, 2021-12-16, -10/+5)

The current limit on MALLOC_MMAP_THRESHOLD is either 1 Mbyte (for 32-bit apps) or 32 Mbytes (for 64-bit apps). This value was set by a patch dated 2006 (15 years ago). Attempts to set the threshold higher are currently ignored.

The default behavior is appropriate for many highly parallel applications where many processes or threads are sharing RAM. In other situations where the number of active processes or threads closely matches the number of cores, a much higher limit may be desired by the application designer. By today's standards on personal computers and small servers, 2 Gbytes of RAM per core is commonly available; on larger systems 4 Gbytes or more of RAM per core is sometimes available.

Instead of raising the limit to match current needs, this patch removes the upper limit on the tunable, leaving the decision up to the user of the tunable to judge the best value for their needs. This patch does not change any of the defaults for malloc tunables, retaining the current behavior of the dynamic malloc mmap threshold.

bugzilla 27801 - Remove upper limit on tunable MALLOC_MMAP_THRESHOLD

malloc/malloc.c: changed do_set_mmap_threshold to remove the test for HEAP_MAX_SIZE.

Reviewed-by: DJ Delorie <dj@redhat.com>
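With the cap gone, requests above the old 32 MiB (64-bit) ceiling take effect instead of being silently ignored; for example via mallopt (the glibc.malloc.mmap_threshold tunable behaves the same way):

```c
#include <malloc.h>   /* mallopt, M_MMAP_THRESHOLD */
#include <stdio.h>

int main (void)
{
  /* 256 MiB: previously clamped/ignored, now accepted.  */
  if (mallopt (M_MMAP_THRESHOLD, 256 * 1024 * 1024) != 1)
    fprintf (stderr, "mallopt failed\n");
  /* Allocations below the threshold are now served from the heap
     rather than from individual mmap calls.  */
  return 0;
}
```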
* malloc: Enable huge page support on main arena (Adhemerval Zanella, 2021-12-15, -2/+10)

This patch adds huge page support to main arena allocation, enabled with the tunable glibc.malloc.hugetlb=2. The patch essentially disables the __glibc_morecore() sbrk() call (similar to what memory tagging does when sbrk() does not support it) and falls back to the default page size if the huge page allocation fails.

Checked on x86_64-linux-gnu.

Reviewed-by: DJ Delorie <dj@redhat.com>
* malloc: Move MORECORE fallback mmap to sysmalloc_mmap_fallback (Adhemerval Zanella, 2021-12-15, -32/+53)

So it can be used by the hugepage code as well.

Reviewed-by: DJ Delorie <dj@redhat.com>
* malloc: Add Huge Page support to arenas (Adhemerval Zanella, 2021-12-15, -1/+1)

It is enabled by default when glibc.malloc.hugetlb is set to 2 or higher. It also uses a non-configurable minimum and maximum value, currently set to 1 and 4 times the selected huge page size, respectively.

The arena allocation with huge pages does not use MAP_NORESERVE. As indicated by kernel internal documentation [1], the flag might trigger a SIGBUS on soft page faults if no pages are left in the pool at the time of the memory access.

On systems without a reserved huge pages pool, this just stresses the mmap(MAP_HUGETLB) allocation failure path. To improve test coverage it is required to create a pool with some allocated pages.

Checked on x86_64-linux-gnu with no reserved pages, with 10 reserved pages (which triggers mmap(MAP_HUGETLB) failures), and with 256 reserved pages (which does not trigger mmap(MAP_HUGETLB) failures).

[1] https://www.kernel.org/doc/html/v4.18/vm/hugetlbfs_reserv.html#resv-map-modifications

Reviewed-by: DJ Delorie <dj@redhat.com>
* malloc: Add Huge Page support for mmap (Adhemerval Zanella, 2021-12-15, -4/+27)

With the morecore hook removed, there is no easy way to provide huge page support in the glibc allocator without resorting to transparent huge pages (THP). And some users and programs do prefer to use huge pages directly instead of THP for multiple reasons: no splitting and re-merging by the VM, no TLB shootdowns for running processes, fast allocation from the reserve pool, no competition with the rest of the processes unlike THP, no swapping at all, etc.

This patch extends the 'glibc.malloc.hugetlb' tunable: the value '2' means to use huge pages directly with the system default size, while a larger value selects a specific huge page size, matched against those supported by the system. Currently only memory allocated by sysmalloc() is handled; the arenas still use the default system page size.

For testing, a new rule tests-malloc-hugetlb2 is added, which runs the added tests with the required GLIBC_TUNABLES setting. On systems without a reserved huge pages pool, this just stresses the mmap(MAP_HUGETLB) allocation failure path. To improve test coverage it is required to create a pool with some allocated pages.

Checked on x86_64-linux-gnu.

Reviewed-by: DJ Delorie <dj@redhat.com>
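The direct path boils down to passing MAP_HUGETLB (plus an encoded page size) to mmap; a hedged sketch for a 2 MiB huge page:

```c
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>   /* MAP_HUGETLB, MAP_HUGE_SHIFT */

/* Sketch: allocate directly from the 2 MiB huge page pool.  The page
   size is encoded as log2 (2 MiB) = 21 in the flag bits.  With no
   reserved pool (vm.nr_hugepages = 0) this fails with ENOMEM, which
   is the fallback path the commit's tests exercise.  */
static void *
alloc_huge_2mb (size_t len)
{
  return mmap (NULL, len, PROT_READ | PROT_WRITE,
               MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB
               | (21 << MAP_HUGE_SHIFT),
               -1, 0);
}
```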
* malloc: Move mmap logic to its own function (Adhemerval Zanella, 2021-12-15, -76/+88)

So it can be used with different page sizes and flags.

Reviewed-by: DJ Delorie <dj@redhat.com>
* malloc: Add THP/madvise support for sbrk (Adhemerval Zanella, 2021-12-15, -5/+29)

To increase the effectiveness of Transparent Huge Pages in madvise mode, the huge page size is used instead of the default page size for the sbrk increment on the main arena.

Checked on x86_64-linux-gnu.

Reviewed-by: DJ Delorie <dj@redhat.com>
* malloc: Add madvise support for Transparent Huge Pages (Adhemerval Zanella, 2021-12-15, -0/+47)

Linux Transparent Huge Pages (THP) currently supports three different states: 'never', 'madvise', and 'always'. 'never' is self-explanatory and 'always' enables THP for all anonymous pages. However, 'madvise' is still the default on some systems, and in that case THP is only used if the memory range is explicitly advertised by the program through a madvise(MADV_HUGEPAGE) call.

To enable this, a new tunable is provided, 'glibc.malloc.hugetlb', where setting it to a value different from 0 enables the madvise call. This patch issues the madvise(MADV_HUGEPAGE) call after a successful mmap() call in sysmalloc() for sizes larger than the default huge page size. The madvise() call is disabled if the system does not support THP or has the mode set to "never". Linux only supports one page size for THP, even if the architecture supports multiple sizes.

For testing, a new rule tests-malloc-hugetlb1 is added, which runs the added tests with the required GLIBC_TUNABLES setting.

Checked on x86_64-linux-gnu.

Reviewed-by: DJ Delorie <dj@redhat.com>
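The pattern the commit adds is a best-effort madvise after a successful large mmap; a minimal sketch:

```c
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>   /* madvise, MADV_HUGEPAGE */

/* Advise the kernel that this anonymous range is a huge-page
   candidate, so THP in "madvise" mode will back it with huge pages.
   If THP is unavailable or set to "never", madvise fails with EINVAL
   and the mapping still works with normal pages.  */
static void *
mmap_thp_candidate (size_t len)
{
  void *p = mmap (NULL, len, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p != MAP_FAILED)
    madvise (p, len, MADV_HUGEPAGE);   /* best-effort */
  return p;
}
```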
* Handle NULL input to malloc_usable_size [BZ #28506] (Siddhesh Poyarekar, 2021-10-29, -16/+9)

Hoist the NULL check for malloc_usable_size into its entry points in malloc-debug and malloc and assume non-NULL in all callees. This fixes BZ #28506.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
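The observable behavior at the public entry point (NULL yields 0, a live allocation reports at least its requested size):

```c
#include <malloc.h>   /* malloc_usable_size */
#include <stdio.h>
#include <stdlib.h>

int main (void)
{
  printf ("%zu\n", malloc_usable_size (NULL));   /* prints 0 */
  void *p = malloc (100);
  printf ("%zu\n", malloc_usable_size (p));      /* >= 100 */
  free (p);
  return 0;
}
```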
* Remove "Contributed by" linesSiddhesh Poyarekar2021-09-031-2/+0
| | | | | | | | | | | | | | | | | | | | | | | | | | We stopped adding "Contributed by" or similar lines in sources in 2012 in favour of git logs and keeping the Contributors section of the glibc manual up to date. Removing these lines makes the license header a bit more consistent across files and also removes the possibility of error in attribution when license blocks or files are copied across since the contributed-by lines don't actually reflect reality in those cases. Move all "Contributed by" and similar lines (Written by, Test by, etc.) into a new file CONTRIBUTED-BY to retain record of these contributions. These contributors are also mentioned in manual/contrib.texi, so we just maintain this additional record as a courtesy to the earlier developers. The following scripts were used to filter a list of files to edit in place and to clean up the CONTRIBUTED-BY file respectively. These were not added to the glibc sources because they're not expected to be of any use in future given that this is a one time task: https://gist.github.com/siddhesh/b5ecac94eabfd72ed2916d6d8157e7dc https://gist.github.com/siddhesh/15ea1f5e435ace9774f485030695ee02 Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* Fix build and tests with --disable-tunables (Siddhesh Poyarekar, 2021-07-23, -25/+26)

Remove unused code and declare __libc_mallopt when !IS_IN (libc) to allow the debug hook to build with --disable-tunables.

Also, run tst-ifunc-isa-2* tests only when tunables are enabled since the result depends on it.

Tested on x86_64.

Reported-by: Matheus Castanho <msc@linux.ibm.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* Move malloc_{g,s}et_state to libc_malloc_debug (Siddhesh Poyarekar, 2021-07-22, -50/+5)

These deprecated functions are only safe to call from __malloc_initialize_hook and as a result, are not useful in the general case. Move the implementations to libc_malloc_debug so that existing binaries that need them will now have to preload the debug DSO to work correctly.

This also allows simplification of the core malloc implementation by dropping all the undumping support code that was added to make malloc_set_state work.

One known breakage is that of ancient emacs binaries that depend on this. They will now crash when running with this libc. With LD_BIND_NOW=1, it will terminate immediately because of not being able to find malloc_set_state, but with lazy binding it will crash in unpredictable ways. It will need a preloaded libc_malloc_debug.so so that its initialization hook is executed to allow its malloc implementation to work properly.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
* glibc.malloc.check: Wean away from malloc hooks (Siddhesh Poyarekar, 2021-07-22, -15/+20)

The malloc-check debugging feature is tightly integrated into glibc malloc, so thanks to an idea from Florian Weimer, much of the malloc implementation has been moved into libc_malloc_debug.so to support malloc-check. Due to this, glibc malloc and malloc-check can no longer work together; they use altogether different (but identical) structures for heap management. This should not make a difference though, since the malloc check hook is not disabled anywhere. malloc_set_state does, but it does so early enough that it shouldn't cause any problems.

The malloc check tunable is now in the debug DSO and has no effect when the DSO is not preloaded.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
* Simplify __malloc_initialized (Siddhesh Poyarekar, 2021-07-22, -12/+12)

Now that mcheck no longer needs to check __malloc_initialized (and no other third party hook can, since the symbol is not exported), make the variable boolean and static so that it is used strictly within malloc.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
* Move malloc hooks into a compat DSO (Siddhesh Poyarekar, 2021-07-22, -73/+12)

Remove all malloc hook uses from core malloc functions and move them into a new library libc_malloc_debug.so. With this, the hooks now no longer have any effect on the core library.

libc_malloc_debug.so is a malloc interposer that needs to be preloaded to get hooks functionality back so that the debugging features that depend on the hooks, i.e. malloc-check, mcheck and mtrace, work again. Without the preloaded DSO these debugging features will be nops. These features will be ported away from hooks in subsequent patches.

Similarly, legacy applications that need hooks functionality need to preload libc_malloc_debug.so. The symbols exported by libc_malloc_debug.so are maintained at exactly the same version as libc.so.

Finally, static binaries will no longer be able to use malloc debugging features since they cannot preload the debugging DSO.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
* Remove __morecore and __default_morecore (Siddhesh Poyarekar, 2021-07-22, -4/+3)

Make the __morecore and __default_morecore symbols compat-only and remove their declarations from the API. Also, include morecore.c directly into malloc.c; this should ideally get merged into malloc in a future cleanup.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
* Remove __after_morecore_hook (Siddhesh Poyarekar, 2021-07-22, -21/+1)

Remove __after_morecore_hook from the API and finalize the symbol so that it can no longer be used in new applications. Old applications using __after_morecore_hook will find that their hook is no longer called.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
* Force building with -fno-common (Florian Weimer, 2021-07-09, -1/+1)

As a result, it is not necessary to specify __attribute__ ((nocommon)) on individual definitions. GCC 10 defaults to -fno-common on all architectures except ARC, but this change is compatible with older GCC versions and ARC, too.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* _int_realloc is static (Siddhesh Poyarekar, 2021-07-08, -2/+2)

_int_realloc is correctly declared at the top to be static, but incorrectly defined without the static keyword. Fix that. The generated binaries have identical code.
* Harden tcache double-free check (Siddhesh Poyarekar, 2021-07-08, -4/+33)

The tcache allocator layer uses the tcache pointer as a key to identify a block that may be freed twice. Since this is in the application data area, an attacker exploiting a use-after-free could potentially get access to the entire tcache structure through this key. A detailed write-up was provided by Awarau here:

  https://awaraucom.wordpress.com/2020/07/19/house-of-io-remastered/

Replace this static pointer use for key checking with one that is generated at malloc initialization. The first attempt is through getrandom with a fallback to random_bits(), which is a simple pseudo-random number generator based on the clock. The fallback ought to be sufficient since the goal of the randomness is only to make the key arbitrary enough that it is very unlikely to collide with user data.

Co-authored-by: Eyal Itkin <eyalit@checkpoint.com>
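A simplified sketch of the hardened check (field names follow malloc.c, but this is a stripped-down illustration, not the real code):

```c
#include <stddef.h>
#include <stdint.h>

/* The per-chunk key no longer points into application-reachable data;
   it is a random value drawn at malloc initialization (getrandom, with
   a clock-based random_bits fallback).  */
static uintptr_t tcache_key;           /* randomized at startup */

typedef struct tcache_entry
{
  struct tcache_entry *next;
  uintptr_t key;   /* equals tcache_key while the chunk sits in tcache */
} tcache_entry;

static void
tcache_put_sketch (tcache_entry *e)
{
  /* Mark the chunk as tcache-resident.  On a second free of the same
     pointer, key == tcache_key flags a likely double free and triggers
     the exhaustive (slow-path) verification.  */
  e->key = tcache_key;
}
```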
* malloc: Initiate tcache shutdown even without allocations [BZ #28028] (JeffyChen, 2021-07-02, -1/+2)

After commit 1e26d35193efbb29239c710a4c46a64708643320 ("malloc: Fix tcache leak after thread destruction [BZ #22111]"), tcache_shutting_down is still not set early enough: when we detach a thread that has no tcache allocated, tcache_shutting_down would still be false.

Reviewed-by: DJ Delorie <dj@redhat.com>