path: root/malloc
Commit log for malloc/, newest first. Each entry gives the commit subject followed by [author, date, files changed, lines removed/added], then the commit message.
* string: Use tls-internal on strerror_l  [Adhemerval Zanella, 2020-07-07, 1 file, -1/+0]

The buffer allocation uses the same strategy as strsignal.

Checked on x86_64-linux-gnu, i686-linux-gnu, powerpc64le-linux-gnu, and s390x-linux-gnu.

Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* string: Remove old TLS usage on strsignal  [Adhemerval Zanella, 2020-07-07, 1 file, -0/+2]

The per-thread state is refactored to use two strategies:

1. The default one uses a TLS structure, which will be placed in the static TLS space (using the __thread keyword).
2. Linux allocates it via struct pthread and accesses it through THREAD_* macros.

The default strategy has the disadvantage of increasing libc.so static TLS consumption and thus decreasing the possible surplus used in some scenarios (which might be mitigated by the BZ#25051 fix). It is used only on Hurd, where accessing the thread storage in the single-thread case is not straightforward (afaiu; Hurd developers could correct me here).

The fallback static allocation used on allocation failure is also removed: defining its size is problematic without synchronizing with translated messages (to avoid partial translation), and the resulting usage is not thread-safe.

Checked on x86_64-linux-gnu, i686-linux-gnu, powerpc64le-linux-gnu, and s390x-linux-gnu.

Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
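[Editor's note] As an illustration of the first (default) strategy, a minimal sketch of a per-thread buffer kept in static TLS via __thread; the structure and field names here are hypothetical, not the glibc declarations:

    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch of the default strategy: one buffer pointer per thread,
       kept in static TLS (hypothetical names).  */
    struct tls_buffers
    {
      char *strsignal_buf;
    };

    static __thread struct tls_buffers tls_buffers;

    const char *
    my_strsignal (int signum)
    {
      if (tls_buffers.strsignal_buf == NULL)
        tls_buffers.strsignal_buf = malloc (64);
      if (tls_buffers.strsignal_buf == NULL)
        return "Unknown signal";   /* No static fallback, per the rationale above.  */
      snprintf (tls_buffers.strsignal_buf, 64, "Unknown signal %d", signum);
      return tls_buffers.strsignal_buf;
    }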
* malloc: ensure set_max_fast never stores zero [BZ #25733]  [DJ Delorie, 2020-04-06, 1 file, -1/+1]

The code for set_max_fast() stores an "impossibly small value" instead of zero, when the parameter is zero. However, for small values of the parameter (ex: 1 or 2) the computation results in a zero being stored anyway.

This patch checks for the parameter being small enough for the computation to result in zero instead, so that a zero is never stored.

Key values which result in zero being stored:

  x86-64: 1..7 (or other 64-bit)
  i686:   1..11
  armhfp: 1..3 (or other 32-bit)

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
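[Editor's note] To see how a small request can round down to zero, consider the kind of rounding set_max_fast performs. This is a simplified model, not the glibc macro itself: with SIZE_SZ = 8 and MALLOC_ALIGN_MASK = 15 (x86-64 values), every parameter in 1..7 masks down to 0, matching the "key values" listed above.

    #include <stdio.h>

    /* Simplified model of the rounding done when storing max_fast
       (illustrative only; the real macro lives in malloc/malloc.c).  */
    #define SIZE_SZ            8UL    /* 64-bit */
    #define MALLOC_ALIGN_MASK  15UL   /* MALLOC_ALIGNMENT - 1 */

    static unsigned long
    rounded_max_fast (unsigned long s)
    {
      return (s + SIZE_SZ) & ~MALLOC_ALIGN_MASK;
    }

    int
    main (void)
    {
      for (unsigned long s = 1; s <= 8; s++)
        printf ("s=%lu -> %lu\n", s, rounded_max_fast (s));
      /* Prints 0 for s = 1..7; only s = 8 reaches a nonzero value.  */
      return 0;
    }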
* Add tests for Safe-Linking  [Eyal Itkin, 2020-04-03, 2 files, -0/+180]

Adding the test "tst-safe-linking" for testing that Safe-Linking works as expected. The test checks these 3 main flows:
  * tcache protection
  * fastbin protection
  * malloc_consolidate() correctness

As there is a random chance of 1/16 that the alignment will remain correct, the test checks each flow up to 10 times, using different random values for the pointer corruption. As a result, the chance for a false failure of a given tested flow is 2**(-40), thus highly unlikely.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* Fix alignment bug in Safe-Linking  [Eyal Itkin, 2020-03-31, 1 file, -11/+11]

Alignment checks should be performed on the user's buffer and NOT on the mchunkptr as was done before. This caused bugs in 32-bit versions, because 2*sizeof(t) != MALLOC_ALIGNMENT.

As the tcache works on users' buffers it uses the aligned_OK() check, and the rest work on mchunkptr and therefore check using misaligned_chunk().

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
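[Editor's note] To make the 32-bit failure mode concrete, a small sketch with illustrative values (not the glibc macros): with 4-byte size fields and 16-byte MALLOC_ALIGNMENT, user buffers are 16-byte aligned but the chunk headers 8 bytes before them are not, so a check applied to the chunk pointer rejects perfectly valid chunks.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative 32-bit layout, not glibc's real definitions.  */
    #define SIZE_SZ            4u
    #define MALLOC_ALIGNMENT   16u
    #define MALLOC_ALIGN_MASK  (MALLOC_ALIGNMENT - 1)

    int
    main (void)
    {
      uintptr_t mem = 0x56780010;            /* Aligned user buffer.  */
      uintptr_t chunk = mem - 2 * SIZE_SZ;   /* Its chunk header.  */

      printf ("user buffer misalignment:  %lu\n",
              (unsigned long) (mem & MALLOC_ALIGN_MASK));     /* 0 */
      printf ("chunk header misalignment: %lu\n",
              (unsigned long) (chunk & MALLOC_ALIGN_MASK));   /* 8 */
      return 0;
    }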
* Typo fixes and CR cleanup in Safe-Linking  [Eyal Itkin, 2020-03-31, 1 file, -15/+15]

Removed unneeded '\' chars from end of lines and fixed some indentation issues that were introduced in the original Safe-Linking patch.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* Add Safe-Linking to fastbins and tcache  [Eyal Itkin, 2020-03-29, 1 file, -13/+58]

Safe-Linking is a security mechanism that protects single-linked lists (such as the fastbin and tcache) from being tampered with by attackers. The mechanism makes use of randomness from ASLR (mmap_base), and when combined with chunk alignment integrity checks, it protects the "next" pointers from being hijacked by an attacker.

While Safe-Unlinking protects double-linked lists (such as the small bins), there wasn't any similar protection for attacks against single-linked lists. This solution protects against 3 common attacks:
  * Partial pointer override: modifies the lower bytes (Little Endian)
  * Full pointer override: hijacks the pointer to an attacker's location
  * Unaligned chunks: pointing the list to an unaligned address

The design assumes an attacker doesn't know where the heap is located, and uses the ASLR randomness to "sign" the single-linked pointers. We mark the pointer as P and the location in which it is stored as L, and the calculation will be:
  * PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
  * *L = PROTECT(P)

This way, the random bits from the address L (which start at the bit in the PAGE_SHIFT position) will be merged with the LSB of the stored protected pointer. This protection layer prevents an attacker from modifying the pointer into a controlled value.

An additional check that the chunks are MALLOC_ALIGNed adds an important layer:
  * Attackers can't point to illegal (unaligned) memory addresses
  * Attackers must guess correctly the alignment bits

On standard 32-bit Linux machines, an attack will directly fail 7 out of 8 times, and on 64-bit machines it will fail 15 out of 16 times.

This proposed patch was benchmarked and its effect on the overall performance of the heap was negligible and couldn't be distinguished from the default variance between tests on the vanilla version. A similar protection was added to Chromium's version of TCMalloc in 2012, and according to their documentation it had an overhead of less than 2%.

Reviewed-by: DJ Delorie <dj@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
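[Editor's note] A minimal self-contained sketch of the PROTECT(P) scheme described above (the in-tree code uses macros; PAGE_SHIFT is assumed to be 12 here, i.e. 4 KiB pages):

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12   /* Assumed 4 KiB pages for this sketch.  */

    /* PROTECT(P) := (L >> PAGE_SHIFT) XOR P, where L is the address the
       pointer is stored at.  Applying the same operation again with the
       stored value reverses the masking.  */
    static inline void *
    protect_ptr (void **slot, void *p)
    {
      return (void *) (((uintptr_t) slot >> PAGE_SHIFT) ^ (uintptr_t) p);
    }

    static inline void *
    reveal_ptr (void **slot)
    {
      return (void *) (((uintptr_t) slot >> PAGE_SHIFT) ^ (uintptr_t) *slot);
    }

    int
    main (void)
    {
      void *next = (void *) 0x7f1234567010;   /* Hypothetical 64-bit "next" pointer.  */
      void *slot_storage;                     /* Hypothetical list slot L.  */
      void **slot = &slot_storage;

      *slot = protect_ptr (slot, next);       /* Store the masked pointer.  */
      printf ("stored:   %p\n", *slot);
      printf ("revealed: %p\n", reveal_ptr (slot));   /* Back to 'next'.  */
      return 0;
    }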
* malloc/tst-mallocfork2: Kill lingering process for unexpected failures  [Adhemerval Zanella, 2020-02-27, 1 file, -11/+28]

If the test fails due to some unexpected failure after the children are created, either in the signal handler by calling abort or in the main loop, the created children might not be killed properly.

This patch fixes it by:

  * Avoiding aborting in the signal handler: set a flag that an error has occurred and add a check in the main loop.
  * Adding an atexit handler to kill child processes.

Checked on x86_64-linux-gnu.
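[Editor's note] A minimal sketch of the two techniques (a flag instead of abort in the signal handler, plus an atexit cleanup); names are hypothetical and the real test logic is more involved:

    #include <signal.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define NCHILDREN 4
    static pid_t children[NCHILDREN];
    static volatile sig_atomic_t handler_failure;   /* Set instead of calling abort.  */

    static void
    handler (int sig)
    {
      /* Do not abort here: just record the problem; the main loop checks it.  */
      handler_failure = 1;
    }

    static void
    kill_children (void)
    {
      /* atexit handler: no child may outlive an unexpected test failure.  */
      for (int i = 0; i < NCHILDREN; i++)
        if (children[i] > 0)
          kill (children[i], SIGKILL);
    }

    int
    main (void)
    {
      atexit (kill_children);
      signal (SIGUSR1, handler);

      for (int i = 0; i < NCHILDREN; i++)
        {
          children[i] = fork ();
          if (children[i] == 0)
            for (;;)
              pause ();            /* Child: run until killed.  */
        }

      /* ... main test loop would go here ... */
      if (handler_failure)
        exit (1);                  /* The atexit handler reaps the children.  */
      return 0;                    /* Normal exit also runs the atexit handler.  */
    }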
* Remove incorrect alloc_size attribute from pvalloc [BZ #25401]  [Florian Weimer, 2020-01-17, 3 files, -3/+50]

pvalloc is guaranteed to round up the allocation size to the page size, so applications can assume that the memory region is larger than the passed-in argument. The alloc_size attribute cannot express that.

The test case is based on a suggestion from Jakub Jelinek.

This fixes commit 9bf8e29ca136094f73f69f725f15c51facc97206 ("malloc: make malloc fail with requests larger than PTRDIFF_MAX (BZ#23741)").

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
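[Editor's note] To see why the attribute is wrong here, consider a fragment like the following (illustrative only, not the commit's test case): with __attribute__ ((alloc_size (1))) on pvalloc, the compiler would treat the object as 1 byte long, so fortification or -Warray-bounds style analysis could reject a perfectly valid write into the page-rounded region.

    #include <malloc.h>
    #include <string.h>

    int
    main (void)
    {
      /* pvalloc rounds the request up to a whole page, so with 4096-byte
         (or larger) pages this allocation is really at least 4096 bytes,
         not 1.  An alloc_size (1) attribute would make
         __builtin_object_size (p, 0) return 1, and the memset below could
         be wrongly diagnosed or fortified away.  */
      char *p = pvalloc (1);
      if (p == NULL)
        return 1;
      memset (p, 0, 4096);   /* Valid: the region spans at least one page.  */
      free (p);
      return 0;
    }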
* elf: Move vDSO setup to rtld (BZ#24967)  [Adhemerval Zanella, 2020-01-03, 1 file, -0/+5]

This patch moves the vDSO setup from libc to loader code, just after the vDSO link_map setup. For the static case the initialization is moved to _dl_non_dynamic_init instead.

Instead of using the mangled pointer, the vDSO data is set as attribute_relro (on _rtld_global_ro for shared or _dl_vdso_* for static). It is read-only even with partial RELRO.

It fixes BZ#24967 now that the vDSO pointer is set up earlier than malloc interposition is called. Also, vDSO calls should not be a problem for static dlopen as indicated by BZ#20802: the vDSO pointer would be zero-initialized and the syscall will be issued instead.

Checked on x86_64-linux-gnu, i686-linux-gnu, aarch64-linux-gnu, arm-linux-gnueabihf, powerpc64le-linux-gnu, powerpc64-linux-gnu, powerpc-linux-gnu, s390x-linux-gnu, sparc64-linux-gnu, and sparcv9-linux-gnu. I also ran some tests on mips.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* Update copyright dates not handled by scripts/update-copyrights.  [Joseph Myers, 2020-01-01, 3 files, -3/+3]

I've updated copyright dates in glibc for 2020. This is the patch for the changes not generated by scripts/update-copyrights and subsequent build / regeneration of generated files.

As well as the usual annual updates, mainly dates in --version output (minus libc.texinfo which previously had to be handled manually but is now successfully updated by update-copyrights), there is a fix to sysdeps/unix/sysv/linux/powerpc/bits/termios-c_lflag.h where a typo in the copyright notice meant it failed to be updated automatically.

Please remember to include 2020 in the dates for any new files added in future (which means updating any existing uncommitted patches you have that add new files to use the new copyright dates in them).
* Update copyright dates with scripts/update-copyrights.  [Joseph Myers, 2020-01-01, 77 files, -77/+77]
* Correct range checking in mallopt/mxfast/tcache [BZ #25194]  [DJ Delorie, 2019-12-05, 1 file, -12/+20]

do_set_tcache_max, do_set_mxfast: Fix two instances of comparing "size_t < 0". Both cases have an upper limit, so the "negative value" case is already handled via overflow semantics.

do_set_tcache_max, do_set_tcache_count: Fix return value on error. Note: currently not used.

mallopt: pass the return value of the helper functions to the user. Behavior should only actually change for mxfast, where we restore the old (pre-tunables) behavior.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
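[Editor's note] The underlying issue: size_t is unsigned, so a "value < 0" test is always false (and compilers warn about it). A negative argument converted to size_t becomes a huge positive value, which the existing upper-bound check already rejects. A trimmed illustration with a hypothetical helper, not the glibc function bodies:

    #include <stddef.h>

    #define TCACHE_MAX_BYTES 1032   /* Illustrative limit only.  */

    static int
    do_set_example (size_t value)
    {
      /* 'value < 0' would always be false for an unsigned type.  A caller
         passing -1 arrives here as SIZE_MAX, which the upper-bound check
         rejects anyway, so only that check is needed.  */
      if (value <= TCACHE_MAX_BYTES)
        {
          /* ... accept and store the new limit ... */
          return 1;   /* Success, reported back through mallopt.  */
        }
      return 0;       /* Out of range.  */
    }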
* Base max_fast on alignment, not width, of bins (Bug 24903)  [DJ Delorie, 2019-10-30, 1 file, -1/+1]

set_max_fast sets the "impossibly small" value based on, eventually, MALLOC_ALIGNMENT. The comparison for the smallest chunk used is, eventually, against MIN_CHUNK_SIZE. Note that i386 is the only platform where these are the same, so a smallest chunk *would* be put in a no-fastbins fastbin.

This change calculates the "impossibly small" value based on MIN_CHUNK_SIZE instead, so that we can know it will always be impossibly small.
* Prefer https to http for gnu.org and fsf.org URLs  [Paul Eggert, 2019-09-07, 77 files, -77/+77]

Also, change sources.redhat.com to sourceware.org. This patch was automatically generated by running the following shell script, which uses GNU sed, and which avoids modifying files imported from upstream:

  sed -ri '
    s,(http|ftp)(://(.*\.)?(gnu|fsf|sourceware)\.org($|[^.]|\.[^a-z])),https\2,g
    s,(http|ftp)(://(.*\.)?)sources\.redhat\.com($|[^.]|\.[^a-z]),https\2sourceware.org\4,g
  ' \
    $(find $(git ls-files) -prune -type f \
        ! -name '*.po' \
        ! -name 'ChangeLog*' \
        ! -path COPYING ! -path COPYING.LIB \
        ! -path manual/fdl-1.3.texi ! -path manual/lgpl-2.1.texi \
        ! -path manual/texinfo.tex ! -path scripts/config.guess \
        ! -path scripts/config.sub ! -path scripts/install-sh \
        ! -path scripts/mkinstalldirs ! -path scripts/move-if-change \
        ! -path INSTALL ! -path locale/programs/charmap-kw.h \
        ! -path po/libc.pot ! -path sysdeps/gnu/errlist.c \
        ! '(' -name configure \
              -execdir test -f configure.ac -o -f configure.in ';' ')' \
        ! '(' -name preconfigure \
              -execdir test -f preconfigure.ac ';' ')' \
        -print)

and then by running 'make dist-prepare' to regenerate files built from the altered files, and then executing the following to cleanup:

  chmod a+x sysdeps/unix/sysv/linux/riscv/configure
  # Omit irrelevant whitespace and comment-only changes,
  # perhaps from a slightly-different Autoconf version.
  git checkout -f \
    sysdeps/csky/configure \
    sysdeps/hppa/configure \
    sysdeps/riscv/configure \
    sysdeps/unix/sysv/linux/csky/configure
  # Omit changes that caused a pre-commit check to fail like this:
  # remote: *** error: sysdeps/powerpc/powerpc64/ppc-mcount.S: trailing lines
  git checkout -f \
    sysdeps/powerpc/powerpc64/ppc-mcount.S \
    sysdeps/unix/sysv/linux/s390/s390-64/syscall.S
  # Omit change that caused a pre-commit check to fail like this:
  # remote: *** error: sysdeps/sparc/sparc64/multiarch/memcpy-ultra3.S: last line does not end in newline
  git checkout -f sysdeps/sparc/sparc64/multiarch/memcpy-ultra3.S
* malloc: Various cleanups for malloc/tst-mxfast  [Florian Weimer, 2019-08-15, 2 files, -9/+8]
* Add glibc.malloc.mxfast tunable  [DJ Delorie, 2019-08-09, 4 files, -7/+69]

  * elf/dl-tunables.list: Add glibc.malloc.mxfast.
  * manual/tunables.texi: Document it.
  * malloc/malloc.c (do_set_mxfast): New.
  (__libc_mallopt): Call it.
  * malloc/arena.c: Add mxfast tunable.
  * malloc/tst-mxfast.c: New.
  * malloc/Makefile: Add it.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* malloc: Fix missing accounting of top chunk in malloc_info [BZ #24026]  [Niklas Hambüchen, 2019-08-08, 1 file, -0/+6]

Fixes `<total type="rest" size="...">` incorrectly showing as 0 most of the time.

The rest value being wrong is significant because to compute the actual amount of memory handed out via malloc, the user must subtract it from `<system type="current" size="...">`. That result being wrong makes investigating memory fragmentation issues like <https://bugzilla.redhat.com/show_bug.cgi?id=843478> close to impossible.
* malloc: Remove unwanted leading whitespace in malloc_info [BZ #24867]  [Florian Weimer, 2019-08-01, 1 file, -1/+1]

It was introduced in commit 6c8dbf00f536d78b1937b5af6f57be47fd376344 ("Reformat malloc to gnu style.").

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* Don't declare __malloc_check_init in <malloc.h> (bug 23352)  [Andreas Schwab, 2019-07-10, 2 files, -3/+3]

The function was never part of the malloc API.
* malloc: Add nptl, htl dependency for the subdirectory [BZ #24757]  [Florian Weimer, 2019-07-02, 1 file, -0/+2]

memusagestat may indirectly link against libpthread. The built libpthread should be used, but that is only possible if it has been built before the malloc programs.
* Fix malloc tests build with GCC 10.  [Joseph Myers, 2019-06-10, 2 files, -1/+12]

GCC mainline has recently added warn_unused_result attributes to some malloc-like built-in functions, where glibc previously had them in its headers only for __USE_FORTIFY_LEVEL > 0. This results in those attributes being newly in effect for building the glibc testsuite, so resulting in new warnings that break the build where tests deliberately call such functions and ignore the result. This patch duly adds calls to DIAG_* macros around those calls to disable the warning.

Tested with build-many-glibcs.py for aarch64-linux-gnu.

  * malloc/tst-calloc.c: Include <libc-diag.h>.
  (null_test): Ignore -Wunused-result around calls to calloc.
  * malloc/tst-mallocfork.c: Include <libc-diag.h>.
  (do_test): Ignore -Wunused-result around call to malloc.
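[Editor's note] A sketch of what such a suppression looks like with glibc's internal <libc-diag.h> macros; treat the exact GCC version argument and the surrounding test code as illustrative rather than the actual patch:

    #include <libc-diag.h>   /* glibc-internal header, usable only in-tree.  */
    #include <stdlib.h>

    static void
    null_test (void)
    {
      /* The test intentionally discards the result to exercise calloc's
         argument handling, so silence -Wunused-result around the call.  */
      DIAG_PUSH_NEEDS_COMMENT;
      DIAG_IGNORE_NEEDS_COMMENT (10, "-Wunused-result");
      calloc (0, 0);
      DIAG_POP_NEEDS_COMMENT;
    }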
* Small tcache improvements  [Wilco Dijkstra, 2019-05-17, 1 file, -8/+6]

Change the tcache->counts[] entries to uint16_t - this removes the limit set by char and allows a larger tcache. Remove a few redundant asserts. bench-malloc-thread with 4 threads is ~15% faster on Cortex-A72.

Reviewed-by: DJ Delorie <dj@redhat.com>

  * malloc/malloc.c (MAX_TCACHE_COUNT): Increase to UINT16_MAX.
  (tcache_put): Remove redundant assert.
  (tcache_get): Remove redundant asserts.
  (__libc_malloc): Check tcache count is not zero.
  * manual/tunables.texi (glibc.malloc.tcache_count): Update maximum.
* Fix tcache count maximum (BZ #24531)  [Wilco Dijkstra, 2019-05-10, 1 file, -2/+7]

The tcache counts[] array is a char, which has a very small range and thus may overflow. When setting the tcache_count tunable, there is no overflow check. However the tunable must not be larger than the maximum value of the tcache counts[] array, otherwise it can overflow when filling the tcache.

[BZ #24531]
  * malloc/malloc.c (MAX_TCACHE_COUNT): New define.
  (do_set_tcache_count): Only update if count is small enough.
  * manual/tunables.texi (glibc.malloc.tcache_count): Document max value.
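[Editor's note] A minimal sketch of the clamping idea (simplified; the real code lives in malloc/malloc.c and stores into the malloc parameter structure):

    #include <limits.h>
    #include <stddef.h>

    /* At the time of this fix the per-bin counter was a char, so the
       tunable must never exceed what that counter can hold.  The exact
       constant here is illustrative.  */
    #define MAX_TCACHE_COUNT CHAR_MAX

    static size_t tcache_count = 7;   /* Illustrative default.  */

    static int
    do_set_tcache_count (size_t value)
    {
      /* Silently ignore values the counts[] array could not represent.  */
      if (value <= MAX_TCACHE_COUNT)
        tcache_count = value;
      return 1;
    }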
* malloc/tst-mallocfork2: Use process-shared barriers  [Florian Weimer, 2019-05-08, 2 files, -52/+86]

This synchronization method has a lower overhead and makes it more likely that the signal arrives during one of the critical functions. Also test for fork deadlocks explicitly.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
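[Editor's note] For reference, a minimal sketch of how a process-shared barrier is set up across fork (generic POSIX usage, not the test's actual code); build with -pthread:

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int
    main (void)
    {
      /* The barrier must live in memory visible to both processes.  */
      pthread_barrier_t *barrier
        = mmap (NULL, sizeof (*barrier), PROT_READ | PROT_WRITE,
                MAP_SHARED | MAP_ANONYMOUS, -1, 0);
      if (barrier == MAP_FAILED)
        return 1;

      pthread_barrierattr_t attr;
      pthread_barrierattr_init (&attr);
      pthread_barrierattr_setpshared (&attr, PTHREAD_PROCESS_SHARED);
      pthread_barrier_init (barrier, &attr, 2);   /* Parent + child.  */
      pthread_barrierattr_destroy (&attr);

      pid_t pid = fork ();
      pthread_barrier_wait (barrier);   /* Both sides rendezvous here.  */
      if (pid == 0)
        _exit (0);
      waitpid (pid, NULL, 0);
      puts ("synchronized");
      return 0;
    }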
* memusagestat: use local glibc when linking [BZ #18465]  [Mike Frysinger, 2019-04-24, 1 file, -2/+2]

memusagestat is the only binary that has its own link line, which causes it to be linked against the existing installed C library. It has been this way since it was originally committed in 1999, but I don't see any reason as to why. Since we want all the programs we build locally to be linked against the new copy of glibc, change the build to be like all other programs.
* Remove do_set_mallopt_check prototype  [H.J. Lu, 2019-04-23, 1 file, -1/+0]

Remove the do_set_mallopt_check prototype since it is unused.

  * malloc/arena.c (do_set_mallopt_check): Removed.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: DJ Delorie <dj@redhat.com>
* malloc: make malloc fail with requests larger than PTRDIFF_MAX (BZ#23741)  [Adhemerval Zanella, 2019-04-18, 9 files, -83/+179]

As discussed previously on libc-alpha [1], this patch follows up on the idea and adds both the __attribute_alloc_size__ on malloc functions (malloc, calloc, realloc, reallocarray, valloc, pvalloc, and memalign) and limits the maximum requested allocation size to PTRDIFF_MAX (taking into consideration internal padding and alignment).

This aligns glibc with the gcc expected size defined by the default -Walloc-size-larger-than value, which warns for allocations larger than PTRDIFF_MAX. It also aligns with gcc's expectations regarding libc and expected size, as described in PR#67999 [2] and previously discussed ISO C11 issues [3] on libc-alpha.

From the RFC thread [4] and previous discussion, it seems that the consensus is to limit such requested sizes only for the malloc functions, not the system allocation ones (mmap, sbrk, etc.).

The implementation changes checked_request2size to check for both overflow and maximum object size up to PTRDIFF_MAX. No additional checks are done on sysmalloc, so it can still issue mmap with values larger than PTRDIFF_T depending on the requested size.

The __attribute_alloc_size__ is for functions that return a pointer only, which means it cannot be applied to posix_memalign (see remarks in GCC PR#87683 [5]). The runtime checks to limit the maximum requested allocation size do apply to posix_memalign.

Checked on x86_64-linux-gnu and i686-linux-gnu.

[1] https://sourceware.org/ml/libc-alpha/2018-11/msg00223.html
[2] https://gcc.gnu.org/bugzilla//show_bug.cgi?id=67999
[3] https://sourceware.org/ml/libc-alpha/2011-12/msg00066.html
[4] https://sourceware.org/ml/libc-alpha/2018-11/msg00224.html
[5] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87683

[BZ #23741]
  * malloc/hooks.c (malloc_check, realloc_check): Use __builtin_add_overflow on overflow check and adapt to checked_request2size change.
  * malloc/malloc.c (__libc_malloc, __libc_realloc, _mid_memalign, __libc_pvalloc, __libc_calloc, _int_memalign): Limit maximum allocation size to PTRDIFF_MAX.
  (REQUEST_OUT_OF_RANGE): Remove macro.
  (checked_request2size): Change to inline function and limit maximum requested size to PTRDIFF_MAX.
  (__libc_malloc, __libc_realloc, _int_malloc, _int_memalign): Limit maximum allocation size to PTRDIFF_MAX.
  (_mid_memalign): Use _int_memalign call for overflow check.
  (__libc_pvalloc): Use __builtin_add_overflow on overflow check.
  (__libc_calloc): Use __builtin_mul_overflow for overflow check and limit maximum requested size to PTRDIFF_MAX.
  * malloc/malloc.h (malloc, calloc, realloc, reallocarray, memalign, valloc, pvalloc): Add __attribute_alloc_size__.
  * stdlib/stdlib.h (malloc, realloc, reallocarray, valloc): Likewise.
  * malloc/tst-malloc-too-large.c (do_test): Add check for allocation larger than PTRDIFF_MAX.
  * malloc/tst-memalign.c (do_test): Disable -Walloc-size-larger-than= around tests of malloc with negative sizes.
  * malloc/tst-posix_memalign.c (do_test): Likewise.
  * malloc/tst-pvalloc.c (do_test): Likewise.
  * malloc/tst-valloc.c (do_test): Likewise.
  * malloc/tst-reallocarray.c (do_test): Replace call to reallocarray with resulting size allocation larger than PTRDIFF_MAX with reallocarray_nowarn.
  (reallocarray_nowarn): New function.
  * NEWS: Mention the malloc function semantic change.
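[Editor's note] A simplified sketch of the kind of request check described, not the actual glibc inline function: round the request up to a usable chunk size with an overflow check, then reject anything above PTRDIFF_MAX.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define SIZE_SZ            sizeof (size_t)
    #define MALLOC_ALIGN_MASK  (2 * SIZE_SZ - 1)
    #define MINSIZE            (4 * SIZE_SZ)

    /* Simplified stand-in for checked_request2size: compute the internal
       chunk size for a user request and fail for anything that overflows
       or exceeds PTRDIFF_MAX.  */
    static bool
    request2size_checked (size_t req, size_t *sz)
    {
      if (req > PTRDIFF_MAX)
        return false;
      size_t rounded;
      if (__builtin_add_overflow (req, SIZE_SZ + MALLOC_ALIGN_MASK, &rounded))
        return false;
      rounded &= ~MALLOC_ALIGN_MASK;
      *sz = rounded < MINSIZE ? MINSIZE : rounded;
      return true;
    }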
* malloc: Set and reset all hooks for tracing (Bug 16573)  [Carlos O'Donell, 2019-04-09, 1 file, -26/+46]

If an error occurs during the tracing operation, particularly during a call to lock_and_info() which calls _dl_addr, we may end up calling back into the malloc subsystem, relocking the loader lock, and deadlocking. For all intents and purposes the call to _dl_addr can call any of the malloc family API functions, so we should disable all tracing before calling such loader functions.

This is similar to the strategy that the new malloc tracer takes when calling the real malloc, namely that all tracing ceases at the boundary to the real function and any faults at that point are the purview of the library (though the new tracer does this on a per-thread basis in an MT-safe fashion).

Since the new tracer and the hook deprecation are not yet complete we must fix these issues where we can.

Tested on x86_64 with no regressions.

Co-authored-by: Kwok Cheung Yeung <kcy@codesourcery.com>
Reviewed-by: DJ Delorie <dj@redhat.com>
* malloc: Check for large bin list corruption when inserting unsorted chunk  [Adam Maris, 2019-03-14, 1 file, -0/+4]

Fixes bug 24216. This patch adds security checks for the bk and bk_nextsize pointers of chunks in the large bin when inserting a chunk from the unsorted bin. It was possible to write the pointer to victim (the newly inserted chunk) to arbitrary memory locations if the bk or bk_nextsize pointers of the next large bin chunk got corrupted.
* Break more lines before not after operators.  [Joseph Myers, 2019-02-25, 1 file, -2/+2]

This patch makes further coding style fixes where code was breaking lines after an operator, contrary to the GNU Coding Standards. As with the previous patch, it is limited to files following a reasonable approximation to GNU style already, and is not exhaustive; more such issues remain to be fixed.

Tested for x86_64, and with build-many-glibcs.py.

  * dirent/dirent.h [!_DIRENT_HAVE_D_NAMLEN && _DIRENT_HAVE_D_RECLEN] (_D_ALLOC_NAMLEN): Break lines before rather than after operators.
  * elf/cache.c (print_cache): Likewise.
  * gshadow/fgetsgent_r.c (__fgetsgent_r): Likewise.
  * htl/pt-getattr.c (__pthread_getattr_np): Likewise.
  * hurd/hurdinit.c (_hurd_setproc): Likewise.
  * hurd/hurdkill.c (_hurd_sig_post): Likewise.
  * hurd/hurdlookup.c (__file_name_lookup_under): Likewise.
  * hurd/hurdsig.c (_hurd_internal_post_signal): Likewise. (reauth_proc): Likewise.
  * hurd/lookup-at.c (__file_name_lookup_at): Likewise. (__file_name_split_at): Likewise. (__directory_name_split_at): Likewise.
  * hurd/lookup-retry.c (__hurd_file_name_lookup_retry): Likewise.
  * hurd/port2fd.c (_hurd_port2fd): Likewise.
  * iconv/gconv_dl.c (do_print): Likewise.
  * inet/netinet/in.h (struct sockaddr_in): Likewise.
  * libio/wstrops.c (_IO_wstr_seekoff): Likewise.
  * locale/setlocale.c (new_composite_name): Likewise.
  * malloc/memusagestat.c (main): Likewise.
  * misc/fstab.c (fstab_convert): Likewise.
  * nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_usercnt): Likewise.
  * nss/nss_compat/compat-grp.c (getgrent_next_nss): Likewise. (getgrent_next_file): Likewise. (internal_getgrnam_r): Likewise. (internal_getgrgid_r): Likewise.
  * nss/nss_compat/compat-initgroups.c (getgrent_next_nss): Likewise. (internal_getgrent_r): Likewise.
  * nss/nss_compat/compat-pwd.c (getpwent_next_nss_netgr): Likewise. (getpwent_next_nss): Likewise. (getpwent_next_file): Likewise. (internal_getpwnam_r): Likewise. (internal_getpwuid_r): Likewise.
  * nss/nss_compat/compat-spwd.c (getspent_next_nss_netgr): Likewise. (getspent_next_nss): Likewise. (internal_getspnam_r): Likewise.
  * pwd/fgetpwent_r.c (__fgetpwent_r): Likewise.
  * shadow/fgetspent_r.c (__fgetspent_r): Likewise.
  * string/strchr.c (STRCHR): Likewise.
  * string/strchrnul.c (STRCHRNUL): Likewise.
  * sysdeps/aarch64/fpu/fpu_control.h (_FPU_FPCR_IEEE): Likewise.
  * sysdeps/aarch64/sfp-machine.h (_FP_CHOOSENAN): Likewise.
  * sysdeps/csky/dl-machine.h (elf_machine_rela): Likewise.
  * sysdeps/generic/memcopy.h (PAGE_COPY_FWD_MAYBE): Likewise.
  * sysdeps/generic/symbol-hacks.h (__stack_chk_fail_local): Likewise.
  * sysdeps/gnu/netinet/ip_icmp.h (ICMP_INFOTYPE): Likewise.
  * sysdeps/gnu/updwtmp.c (TRANSFORM_UTMP_FILE_NAME): Likewise.
  * sysdeps/gnu/utmp_file.c (TRANSFORM_UTMP_FILE_NAME): Likewise.
  * sysdeps/hppa/jmpbuf-unwind.h (_JMPBUF_UNWINDS): Likewise.
  * sysdeps/mach/hurd/bits/stat.h (S_ISPARE): Likewise.
  * sysdeps/mach/hurd/dl-sysdep.c (_dl_sysdep_start): Likewise. (open_file): Likewise.
  * sysdeps/mach/hurd/htl/pt-mutexattr-setprotocol.c (pthread_mutexattr_setprotocol): Likewise.
  * sysdeps/mach/hurd/ioctl.c (__ioctl): Likewise.
  * sysdeps/mach/hurd/mmap.c (__mmap): Likewise.
  * sysdeps/mach/hurd/ptrace.c (ptrace): Likewise.
  * sysdeps/mach/hurd/spawni.c (__spawni): Likewise.
  * sysdeps/microblaze/dl-machine.h (elf_machine_type_class): Likewise. (elf_machine_rela): Likewise.
  * sysdeps/mips/mips32/sfp-machine.h (_FP_CHOOSENAN): Likewise.
  * sysdeps/mips/mips64/sfp-machine.h (_FP_CHOOSENAN): Likewise.
  * sysdeps/mips/sys/asm.h (multiple #if conditionals): Likewise.
  * sysdeps/posix/rename.c (rename): Likewise.
  * sysdeps/powerpc/novmx-sigjmp.c (__novmx__sigjmp_save): Likewise.
  * sysdeps/powerpc/sigjmp.c (__vmx__sigjmp_save): Likewise.
  * sysdeps/s390/fpu/fenv_libc.h (FPC_VALID_MASK): Likewise.
  * sysdeps/s390/utf8-utf16-z9.c (gconv_end): Likewise.
  * sysdeps/unix/grantpt.c (grantpt): Likewise.
  * sysdeps/unix/sysv/linux/a.out.h (N_TXTOFF): Likewise.
  * sysdeps/unix/sysv/linux/updwtmp.c (TRANSFORM_UTMP_FILE_NAME): Likewise.
  * sysdeps/unix/sysv/linux/utmp_file.c (TRANSFORM_UTMP_FILE_NAME): Likewise.
  * sysdeps/x86/cpu-features.c (get_common_indices): Likewise.
  * time/tzfile.c (__tzfile_compute): Likewise.
* Avoid "inline" after return type in function definitions.Joseph Myers2019-02-061-22/+11
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | One group of warnings seen with -Wextra is warnings for static or inline not at the start of a declaration (-Wold-style-declaration). This patch fixes various such cases for inline, ensuring it comes at the start of the declaration (after any static). A common case of the fix is "static inline <type> __always_inline"; the definition of __always_inline starts with __inline, so the natural change is to "static __always_inline <type>". Other cases of the warning may be harder to fix (one pattern is a function definition that gets rewritten to be static by an including file, "#define funcname static wrapped_funcname" or similar), but it seems worth fixing these cases with inline anyway. Tested for x86_64. * elf/dl-load.h (_dl_postprocess_loadcmd): Use __always_inline before return type, without separate inline. * elf/dl-tunables.c (maybe_enable_malloc_check): Likewise. * elf/dl-tunables.h (tunable_is_name): Likewise. * malloc/malloc.c (do_set_trim_threshold): Likewise. (do_set_top_pad): Likewise. (do_set_mmap_threshold): Likewise. (do_set_mmaps_max): Likewise. (do_set_mallopt_check): Likewise. (do_set_perturb_byte): Likewise. (do_set_arena_test): Likewise. (do_set_arena_max): Likewise. (do_set_tcache_max): Likewise. (do_set_tcache_count): Likewise. (do_set_tcache_unsorted_limit): Likewise. * nis/nis_subr.c (count_dots): Likewise. * nptl/allocatestack.c (advise_stack_range): Likewise. * sysdeps/ieee754/dbl-64/s_sin.c (do_cos): Likewise. (do_sin): Likewise. (reduce_sincos): Likewise. (do_sincos): Likewise. * sysdeps/unix/sysv/linux/x86/elision-conf.c (do_set_elision_enable): Likewise. (TUNABLE_CALLBACK_FNDECL): Likewise.
* Fix assertion in malloc.c:tcache_get.  [Joseph Myers, 2019-02-04, 1 file, -1/+1]

One of the warnings that appears with -Wextra is "ordered comparison of pointer with integer zero" in malloc.c:tcache_get, for the assertion:

  assert (tcache->entries[tc_idx] > 0);

Indeed, a "> 0" comparison does not make sense for tcache->entries[tc_idx], which is a pointer. My guess is that tcache->counts[tc_idx] is what's intended here, and this patch changes the assertion accordingly.

Tested for x86_64.

  * malloc/malloc.c (tcache_get): Compare tcache->counts[tc_idx] with 0, not tcache->entries[tc_idx].
* malloc: Revert fastbins to old-style atomics  [Florian Weimer, 2019-01-18, 1 file, -96/+70]

Commit 6923f6db1e688dedcf3a6556da76e0bf24a41872 ("malloc: Use current (C11-style) atomics for fastbin access") caused a substantial performance regression on POWER and AArch64, and the old atomics, while hard to prove correct, seem to work in practice.
* Update copyright dates not handled by scripts/update-copyrights.  [Joseph Myers, 2019-01-01, 3 files, -3/+3]

I've updated copyright dates in glibc for 2019. This is the patch for the changes not generated by scripts/update-copyrights and subsequent build / regeneration of generated files. Please remember to include 2019 in the dates for any new files added in future (which means updating any existing uncommitted patches you have that add new files to use the new copyright dates in them).

  * NEWS: Update copyright dates.
  * catgets/gencat.c (print_version): Likewise.
  * csu/version.c (banner): Likewise.
  * debug/catchsegv.sh: Likewise.
  * debug/pcprofiledump.c (print_version): Likewise.
  * debug/xtrace.sh (do_version): Likewise.
  * elf/ldconfig.c (print_version): Likewise.
  * elf/ldd.bash.in: Likewise.
  * elf/pldd.c (print_version): Likewise.
  * elf/sotruss.sh: Likewise.
  * elf/sprof.c (print_version): Likewise.
  * iconv/iconv_prog.c (print_version): Likewise.
  * iconv/iconvconfig.c (print_version): Likewise.
  * locale/programs/locale.c (print_version): Likewise.
  * locale/programs/localedef.c (print_version): Likewise.
  * login/programs/pt_chown.c (print_version): Likewise.
  * malloc/memusage.sh (do_version): Likewise.
  * malloc/memusagestat.c (print_version): Likewise.
  * malloc/mtrace.pl: Likewise.
  * manual/libc.texinfo: Likewise.
  * nptl/version.c (banner): Likewise.
  * nscd/nscd.c (print_version): Likewise.
  * nss/getent.c (print_version): Likewise.
  * nss/makedb.c (print_version): Likewise.
  * posix/getconf.c (main): Likewise.
  * scripts/test-installation.pl: Likewise.
  * sysdeps/unix/sysv/linux/lddlibc4.c (main): Likewise.
* Update copyright dates with scripts/update-copyrights.  [Joseph Myers, 2019-01-01, 76 files, -76/+76]

  * All files with FSF copyright notices: Update copyright dates using scripts/update-copyrights.
  * locale/programs/charmap-kw.h: Regenerated.
  * locale/programs/locfile-kw.h: Likewise.
* malloc: Always call memcpy in _int_realloc [BZ #24027]  [Florian Weimer, 2018-12-31, 1 file, -42/+1]

This commit removes the custom memcpy implementation from _int_realloc for small chunk sizes. The ncopies variable has the wrong type, and an integer wraparound could cause the existing code to copy too few elements (leaving the new memory region mostly uninitialized). Therefore, removing this code fixes bug 24027.
* Replace check_mul_overflow_size_t with __builtin_mul_overflow  [Adhemerval Zanella, 2018-12-28, 5 files, -30/+5]

Checked on x86_64-linux-gnu and i686-linux-gnu.

  * malloc/alloc_buffer_alloc_array.c (__libc_alloc_buffer_alloc_array): Use __builtin_mul_overflow in place of check_mul_overflow_size_t.
  * malloc/dynarray_emplace_enlarge.c (__libc_dynarray_emplace_enlarge): Likewise.
  * malloc/dynarray_resize.c (__libc_dynarray_resize): Likewise.
  * malloc/reallocarray.c (__libc_reallocarray): Likewise.
  * malloc/malloc-internal.h (check_mul_overflow_size_t): Remove function.
  * support/blob_repeat.c (check_mul_overflow_size_t, minimum_stride_size, support_blob_repeat_allocate): Likewise.
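[Editor's note] For context, a small example of the pattern being adopted: __builtin_mul_overflow (a GCC/Clang built-in) computes the product and reports overflow in one step, replacing the hand-written helper. This is a generic illustration, not glibc's reallocarray:

    #include <errno.h>
    #include <stddef.h>
    #include <stdlib.h>

    /* Allocate nmemb * size bytes, or fail cleanly if the multiplication
       would wrap around.  */
    void *
    alloc_array (size_t nmemb, size_t size)
    {
      size_t bytes;
      if (__builtin_mul_overflow (nmemb, size, &bytes))
        {
          errno = ENOMEM;
          return NULL;
        }
      return malloc (bytes);
    }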
* malloc: Check the alignment of mmapped chunks before unmapping.  [Istvan Kurucsai, 2018-12-21, 1 file, -1/+4]

  * malloc/malloc.c (munmap_chunk): Verify chunk alignment.
* malloc: Add more integrity checks to mremap_chunk.  [Istvan Kurucsai, 2018-12-20, 1 file, -3/+9]

  * malloc/malloc.c (mremap_chunk): Additional checks.
* malloc: Add another test for tcache double free check.  [DJ Delorie, 2018-12-07, 2 files, -1/+57]

This one tests for BZ#23907 where the double free test didn't check the tcache bin bounds before dereferencing the bin.

[BZ #23907]
  * malloc/tst-tcfree3.c: New.
  * malloc/Makefile: Add it.
* malloc: tcache: Validate tc_idx before checking for double-frees [BZ #23907]  [Florian Weimer, 2018-11-26, 1 file, -25/+25]

The previous check could read beyond the end of the tcache entry array. If the e->key == tcache cookie check happened to pass, this would result in crashes.
* malloc: tcache double free check  [DJ Delorie, 2018-11-20, 4 files, -0/+119]

  * malloc/malloc.c (tcache_entry): Add key field.
  (tcache_put): Set it.
  (tcache_get): Likewise.
  (_int_free): Check for double free in tcache.
  * malloc/tst-tcfree1.c: New.
  * malloc/tst-tcfree2.c: New.
  * malloc/Makefile: Run the new tests.
  * manual/probes.texi: Document memory_tcache_double_free probe.
  * dlfcn/dlerror.c (check_free): Prevent double frees.
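[Editor's note] A sketch of the key-cookie idea described by this group of commits (simplified; the field names follow the ChangeLog, but the surrounding code is illustrative): each tcache entry records which per-thread tcache it was freed into, and free() treats a matching key as a likely double free, then confirms it by walking the bin.

    #include <stdbool.h>
    #include <stddef.h>

    struct tcache_perthread;                 /* Per-thread cache (opaque here).  */

    typedef struct tcache_entry
    {
      struct tcache_entry *next;
      struct tcache_perthread *key;          /* Set when the chunk enters the tcache.  */
    } tcache_entry;

    /* On free: a chunk whose key already points at this thread's tcache has
       very likely been freed twice.  The key can collide with application
       data by accident, so the bin is walked to confirm before aborting.  */
    static bool
    looks_like_double_free (tcache_entry *e, struct tcache_perthread *tcache,
                            tcache_entry *bin_head)
    {
      if (e->key != tcache)
        return false;                        /* Fast path: not suspicious.  */
      for (tcache_entry *p = bin_head; p != NULL; p = p->next)
        if (p == e)
          return true;                       /* Really present: double free.  */
      return false;
    }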
* malloc: Use current (C11-style) atomics for fastbin access  [Florian Weimer, 2018-11-13, 1 file, -70/+96]
* malloc: Convert the unlink macro to the unlink_chunk function  [Florian Weimer, 2018-11-12, 2 files, -47/+49]

This commit is in preparation of turning the macro into a proper function. The output arguments of the macro were in fact unused.

Also clean up uses of __builtin_expect.
* malloc: Add ChangeLog for accidentally committed change  [Florian Weimer, 2018-08-20, 1 file, -1/+1]

Commit b90ddd08f6dd688e651df9ee89ca3a69ff88cd0c ("malloc: Additional checks for unsorted bin integrity I.") was committed without a whitespace fix, so it is adjusted here as well.
* malloc: Additional checks for unsorted bin integrity I.  [Istvan Kurucsai, 2018-08-17, 1 file, -4/+15]

On Thu, Jan 11, 2018 at 3:50 PM, Florian Weimer <fweimer@redhat.com> wrote:
> On 11/07/2017 04:27 PM, Istvan Kurucsai wrote:
>>
>> +          next = chunk_at_offset (victim, size);
>
> For new code, we prefer declarations with initializers.

Noted.

>> +          if (__glibc_unlikely (chunksize_nomask (victim) <= 2 * SIZE_SZ)
>> +              || __glibc_unlikely (chunksize_nomask (victim) > av->system_mem))
>> +            malloc_printerr("malloc(): invalid size (unsorted)");
>> +          if (__glibc_unlikely (chunksize_nomask (next) < 2 * SIZE_SZ)
>> +              || __glibc_unlikely (chunksize_nomask (next) > av->system_mem))
>> +            malloc_printerr("malloc(): invalid next size (unsorted)");
>> +          if (__glibc_unlikely ((prev_size (next) & ~(SIZE_BITS)) != size))
>> +            malloc_printerr("malloc(): mismatching next->prev_size (unsorted)");
>
> I think this check is redundant because prev_size (next) and chunksize
> (victim) are loaded from the same memory location.

I'm fairly certain that it compares mchunk_size of victim against mchunk_prev_size of the next chunk, i.e. the size of victim in its header and footer.

>> +          if (__glibc_unlikely (bck->fd != victim)
>> +              || __glibc_unlikely (victim->fd != unsorted_chunks (av)))
>> +            malloc_printerr("malloc(): unsorted double linked list corrupted");
>> +          if (__glibc_unlikely (prev_inuse(next)))
>> +            malloc_printerr("malloc(): invalid next->prev_inuse (unsorted)");
>
> There's a missing space after malloc_printerr.

Noted.

> Why do you keep using chunksize_nomask?  We never investigated why the
> original code uses it.  It may have been an accident.

You are right, I don't think it makes a difference in these checks. So the size local can be reused for the checks against victim. For next, leaving it as such avoids the masking operation.

> Again, for non-main arenas, the checks against av->system_mem could be made
> tighter (against the heap size).  Maybe you could put the condition into a
> separate inline function?

We could also do a chunk boundary check similar to what I proposed in the thread for the first patch in the series to be even more strict. I'll gladly try to implement either but believe that refining these checks would bring less benefits than in the case of the top chunk. Intra-arena or intra-heap overlaps would still be doable here with unsorted chunks and I don't see any way to counter that besides more generic measures like randomizing allocations and your metadata encoding patches.

I've attached a revised version with the above comments incorporated but without the refined checks.

Thanks,
Istvan

From a12d5d40fd7aed5fa10fc444dcb819947b72b315 Mon Sep 17 00:00:00 2001
From: Istvan Kurucsai <pistukem@gmail.com>
Date: Tue, 16 Jan 2018 14:48:16 +0100
Subject: [PATCH v2 1/1] malloc: Additional checks for unsorted bin integrity I.

Ensure the following properties of chunks encountered during binning:
- victim chunk has reasonable size
- next chunk has reasonable size
- next->prev_size == victim->size
- valid double linked list
- PREV_INUSE of next chunk is unset

  * malloc/malloc.c (_int_malloc): Additional binning code checks.
* malloc: Mitigate null-byte overflow attacks  [Moritz Eckert, 2018-08-16, 1 file, -0/+4]

  * malloc/malloc.c (_int_free): Check for corrupt prev_size vs size.
  (malloc_consolidate): Likewise.
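[Editor's note] The relationship being checked, in sketch form (a toy layout, not glibc's malloc_chunk or the exact code in the backward-consolidation paths): the previous chunk's size in its own header must agree with the prev_size footer recorded in the current chunk, and a single-null-byte overflow of a size field typically breaks that agreement.

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy chunk layout for illustration only.  */
    struct chunk
    {
      size_t prev_size;   /* Size of the previous chunk, if it is free.  */
      size_t size;        /* Size of this chunk; low bits used as flags.  */
    };

    /* Consolidation-time check: prev's header size must agree with the
       prev_size footer stored in the chunk that follows it.  */
    static void
    check_backward_consolidation (const struct chunk *prev, const struct chunk *p)
    {
      if ((prev->size & ~(size_t) 7) != p->prev_size)
        {
          fprintf (stderr, "corrupted size vs. prev_size\n");
          abort ();
        }
    }

    int
    main (void)
    {
      struct chunk prev = { .prev_size = 0, .size = 0x110 };
      struct chunk p = { .prev_size = 0x110, .size = 0x90 };
      check_backward_consolidation (&prev, &p);   /* Consistent: passes.  */
      prev.size = 0x100;                          /* Simulated null-byte shrink.  */
      check_backward_consolidation (&prev, &p);   /* Mismatch: aborts.  */
      return 0;
    }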
* malloc: Verify size of top chunk.  [Pochang Chen, 2018-08-16, 1 file, -0/+3]

The House of Force is a well-known technique to exploit heap overflow. In essence, this exploit takes three steps:

1. Overwrite the size of the top chunk with a very large value (e.g. -1).
2. Request x bytes from the top chunk. As the size of the top chunk is corrupted, x can be arbitrarily large and the top chunk will still be offset by x.
3. The next allocation from the top chunk will thus be controllable.

If we verify the size of the top chunk at step 2, we can stop such an attack.
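[Editor's note] In sketch form, the kind of sanity check this describes (simplified and self-contained, not the actual glibc code): the top chunk can never legitimately be larger than the total memory the arena has obtained from the system.

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy arena state; glibc keeps the equivalent data in malloc_state
       (av->top and av->system_mem).  */
    struct arena
    {
      size_t top_size;     /* Size recorded in the top chunk header.  */
      size_t system_mem;   /* Total bytes this arena obtained from the kernel.  */
    };

    static void
    check_top (const struct arena *av)
    {
      /* A top chunk bigger than everything the arena ever mapped can only
         be the result of corruption (e.g. House of Force writing -1).  */
      if (av->top_size > av->system_mem)
        {
          fprintf (stderr, "malloc(): corrupted top size\n");
          abort ();
        }
    }

    int
    main (void)
    {
      struct arena av = { .top_size = 0x20000, .system_mem = 0x21000 };
      check_top (&av);                  /* Sane: passes.  */
      av.top_size = (size_t) -1;        /* Simulated overwrite.  */
      check_top (&av);                  /* Aborts.  */
      return 0;
    }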
* libc: Extend __libc_freeres framework (Bug 23329).  [Carlos O'Donell, 2018-06-29, 2 files, -2/+17]

The __libc_freeres framework does not extend to non-libc.so objects. This causes problems in general for valgrind and mtrace detecting unfreed objects in both libdl.so and libpthread.so. This change is a prerequisite to properly moving the malloc hooks out of malloc, since such a move now requires precise accounting of all allocated data before destructors are run.

This commit adds a proper hook in libc.so.6 for both libdl.so and libpthread.so; this ensures that shm-directory.c, which uses freeit () to free memory, is called properly. We also remove the nptl_freeres hook and fall back to using the weak-ref-and-check idiom for a loaded libpthread.so, thus making this process similar for all DSOs.

Lastly we follow best practice and use explicit free calls for both libdl.so and libpthread.so instead of the generic hook process, which has undefined order.

Tested on x86_64 with no regressions.

Signed-off-by: DJ Delorie <dj@redhat.com>
Signed-off-by: Carlos O'Donell <carlos@redhat.com>
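[Editor's note] The "weak-ref-and-check" idiom mentioned above, in generic form (illustrative; the function shown is an assumption standing in for the other DSO's cleanup entry point): declare the symbol as a weak reference and call it only if the dynamic linker resolved it, i.e. only if that DSO is actually loaded.

    /* Weak reference: resolves to NULL if the other shared object (here,
       hypothetically libpthread's cleanup routine) is not present.  */
    extern void __libpthread_freeres (void) __attribute__ ((weak));

    void
    free_all_resources (void)
    {
      /* ... free libc.so's own caches and buffers here ... */

      if (__libpthread_freeres != NULL)
        __libpthread_freeres ();   /* Skipped when the DSO is not loaded.  */
    }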