* [AArch64] Improve integer memcpy (Wilco Dijkstra, 2020-10-14; 1 file, -96/+101)

  Further optimize integer memcpy. Small cases now include copies up
  to 32 bytes. 64-128 byte copies are split into two cases to improve
  performance of 64-96 byte copies. Comments have been rewritten.

  (cherry picked from commit 700065132744e0dfa6d4d9142d63f6e3a1934726)

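  The real code is hand-written AArch64 assembly; a hypothetical C
  sketch of the small-case technique it describes (two fixed-width,
  possibly overlapping moves covering a whole size class):

    #include <stddef.h>
    #include <string.h>

    /* Sketch only: any length in [16, 32] is covered by two 16-byte
       moves.  Where the two windows overlap they store the same
       bytes, so the order of the stores does not matter.  */
    static void *
    copy_16_to_32 (void *dst, const void *src, size_t n)  /* 16 <= n <= 32 */
    {
      unsigned char *d = dst;
      const unsigned char *s = src;

      memcpy (d, s, 16);                    /* head: one 16-byte move     */
      memcpy (d + n - 16, s + n - 16, 16);  /* tail, possibly overlapping */
      return dst;
    }
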
* aarch64: Increase small and medium cases for __memcpy_generic (Krzysztof Koch, 2020-10-14; 1 file, -35/+47)

  Increase the upper bound on medium cases from 96 to 128 bytes. Now,
  up to 128 bytes are copied unrolled.

  Increase the upper bound on small cases from 16 to 32 bytes so that
  copies of 17-32 bytes are not impacted by the larger medium case.

  Benchmarking: the attached figures show the relative timing
  difference with respect to 'memcpy_generic', which is the existing
  implementation. 'memcpy_med_128' denotes the version of
  memcpy_generic with only the medium case enlarged. The
  'memcpy_med_128_small_32' numbers are for the version of
  memcpy_generic submitted in this patch, which has both the medium
  and small cases enlarged. The figures were generated using the
  script from:
  https://www.sourceware.org/ml/libc-alpha/2019-10/msg00563.html

  Depending on the platform, the performance improvement in the
  bench-memcpy-random.c benchmark ranges from 6% to 20% between the
  original and final versions of memcpy.S.

  Tested against the GLIBC testsuite and randomized tests.

  (cherry picked from commit b9f145df85145506f8e61bac38b792584a38d88f)

* AArch64: Improve backwards memmove performance (Wilco Dijkstra, 2020-10-14; 1 file, -3/+4)

  On some microarchitectures performance of the backwards memmove
  improves if the stores use STR with decreasing addresses. So change
  the memmove loop in memcpy_advsimd.S to use 2x STR rather than STP.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
  (cherry picked from commit bd394d131c10c9ec22c6424197b79410042eed99)

* AArch64: Add optimized Q-register memcpy (Wilco Dijkstra, 2020-10-14; 5 files, -4/+255)

  Add a new memcpy using 128-bit Q registers - this is faster on
  modern cores and reduces codesize. Similar to the generic memcpy,
  small cases include copies up to 32 bytes. 64-128 byte copies are
  split into two cases to improve performance of 64-96 byte copies.
  Large copies align the source rather than the destination.

  bench-memcpy-random is ~9% faster than memcpy_falkor on Neoverse N1,
  so make this memcpy the default on N1 (on Centriq it is 15% faster
  than memcpy_falkor).

  Passes GLIBC regression tests.

  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
  (cherry picked from commit 4a733bf375238a6a595033b5785cea7f27d61307)

* AArch64: Align ENTRY to a cacheline (Wilco Dijkstra, 2020-10-14; 1 file, -1/+1)

  Given almost all uses of ENTRY are for string/memory functions,
  align ENTRY to a cacheline to simplify things.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
  (cherry picked from commit 34f0d01d5e43c7dedd002ab47f6266dfb5b79c22)

* NEWS: Mention BZ 25933 fix (H.J. Lu, 2020-07-04; 1 file, -0/+1)

* Fix avx2 strncmp offset compare condition check [BZ #25933] (Sunil K Pandey, 2020-07-04; 1 file, -0/+15)

  strcmp-avx2.S: in the AVX2 strncmp function, strings are compared in
  chunks of 4 vector sizes (i.e. 32x4 = 128 bytes for AVX2). After the
  first 4-vector-size comparison, the code must check whether it has
  already passed the given offset. This patch implements the AVX2
  offset check condition for strncmp, for the case where both strings
  compare equal for the first 4 vector sizes.

  (cherry picked from commit 75870237ff3bb363447b03f4b0af100227570910)

* Fix use-after-free in glob when expanding ~user (bug 25414) (Andreas Schwab, 2020-03-20; 2 files, -12/+17)

  The value of `end_name' points into the value of `dirname', thus
  don't deallocate the latter before the last use of the former.

  (cherry picked from commit ddc650e9b3dc916eab417ce9f79e67337b05035c)

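  A minimal stand-alone sketch of the lifetime bug (names mirror the
  commit message; this is not the actual glob.c code):

    #include <stdlib.h>
    #include <string.h>

    static void
    buggy (void)
    {
      char *dirname = strdup ("~user/foo");
      char *end_name = strchr (dirname, '/');  /* points into dirname */

      free (dirname);                /* BUG: end_name now dangles...   */
      size_t n = strlen (end_name);  /* ...so this is a use-after-free */
      (void) n;
      /* Fix: move the free after the last use of end_name.  */
    }
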
* Fix array overflow in backtrace on PowerPC (bug 25423) (Andreas Schwab, 2020-03-20; 4 files, -0/+17)

  When unwinding through a signal frame the backtrace function on
  PowerPC didn't check array bounds when storing the frame address.
  Fixes commit d400dcac5e ("PowerPC: fix backtrace to handle signal
  trampolines").

  (cherry picked from commit d93769405996dfc11d216ddbe415946617b5a494)

* riscv: Do not use __has_include__ (Florian Weimer, 2020-01-21; 2 files, -1/+6)

  The user-visible preprocessor construct is called __has_include.

  (cherry picked from commit 28dd3939221ab26c6774097e9596e30d9753f758)

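  For illustration, portable usage guards the documented operator
  itself before applying it (the header name here is just an example):

    /* __has_include is the documented operator; __has_include__ is an
       internal GCC implementation detail and must not be used.  */
    #if defined __has_include
    # if __has_include (<sys/cachectl.h>)   /* example header */
    #  include <sys/cachectl.h>
    # endif
    #endif
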
* misc/test-errno-linux: Handle EINVAL from quotactl (Florian Weimer, 2019-12-05; 1 file, -2/+3)

  In commit 3dd4d40b420846dd35869ccc8f8627feef2cff32 ("xfs: Sanity
  check flags of Q_XQUOTARM call"), Linux 5.4 added checking for the
  flags argument, causing the test to fail due to too restrictive
  test expectations.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
  (cherry picked from commit 1f7525d924b608a3e43b10fcfb3d46b8a6e9e4f9)

* <string.h>: Define __CORRECT_ISO_CPP_STRING_H_PROTO for Clang [BZ #25232] (Kamlesh Kumar, 2019-12-05; 2 files, -1/+3)

  Without the asm redirects, strchr et al. are not const-correct.

  libc++ has a wrapper header that works with and without
  __CORRECT_ISO_CPP_STRING_H_PROTO (using a Clang extension). But when
  Clang is used with libstdc++ or just C headers, the overloaded
  functions with the correct types are not declared.

  This change does not impact current GCC (with libstdc++ or libc++).

  (cherry picked from commit 953ceff17a4a15b10cfdd5edc3c8cae4884c8ec3)

* x86: Assume --enable-cet if GCC defaults to CET [BZ #25225] (Florian Weimer, 2019-12-03; 3 files, -2/+31)

  This links in CET support if GCC defaults to CET. Otherwise,
  __CET__ is defined, yet CET functionality is not compiled and
  linked into the dynamic loader, resulting in a linker failure due
  to undefined references to _dl_cet_check and _dl_open_check.

  (cherry picked from commit 9fb8139079ef0bb1aa33a4ae418cbb113b9b9da7)

* libio: Disable vtable validation for pre-2.1 interposed handles [BZ #25203] (Florian Weimer, 2019-11-28; 2 files, -0/+6)

  Commit c402355dfa7807b8e0adb27c009135a7e2b9f1b0 ("libio: Disable
  vtable validation in case of interposition [BZ #23313]") only
  covered the interposable glibc 2.1 handles, in libio/stdfiles.c.
  The parallel code in libio/oldstdfiles.c needs similar detection
  logic.

  Fixes (again) commit db3476aff19b75c4fdefbe65fcd5f0a90588ba51
  ("libio: Implement vtable verification [BZ #20191]").

  Change-Id: Ief6f9f17e91d1f7263421c56a7dc018f4f595c21
  (cherry picked from commit cb61630ed712d033f54295f776967532d3f4b46a)

* rtld: Check __libc_enable_secure before honoring LD_PREFER_MAP_32BIT_EXEC (CVE-2019-19126) [BZ #25204] (Marcin Kościelnicki, 2019-11-22; 2 files, -1/+10)

  The problem was introduced in glibc 2.23, in commit
  b9eb92ab05204df772eb4929eccd018637c9f3e9 ("Add Prefer_MAP_32BIT_EXEC
  to map executable pages with MAP_32BIT").

  (cherry picked from commit d5dfad4326fc683c813df1e37bbf5cf920591c8e)

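  The hardening pattern is that environment-driven tuning must be
  ignored in set-user-ID/set-group-ID processes. A stand-alone sketch
  of the idea, approximating the internal __libc_enable_secure flag
  with getauxval (AT_SECURE):

    #include <stdlib.h>
    #include <sys/auxv.h>

    /* Sketch: treat the variable as unset when running in secure
       (AT_SECURE) mode, so an attacker cannot influence a privileged
       process through its environment.  */
    static const char *
    getenv_unless_secure (const char *name)
    {
      if (getauxval (AT_SECURE) != 0)
        return NULL;
      return getenv (name);
    }
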
* Fix alignment of TLS variables for tls variant TLS_TCB_AT_TP [BZ #23403] (Stefan Liebler, 2019-11-06; 8 files, -42/+132)

  The alignment of TLS variables is wrong if accessed from within a
  thread on architectures with TLS variant TLS_TCB_AT_TP. For the
  main thread the static TLS data is properly aligned. For other
  threads the alignment depends on the alignment of the thread
  pointer, as the static TLS data is located relative to this pointer.

  This patch adds this alignment for TLS_TCB_AT_TP variants in the
  same way as it is already done for TLS_DTV_AT_TP. The thread pointer
  is also already properly aligned if the user provides their own
  stack for the new thread.

  This patch extends the testcase nptl/tst-tls1.c in order to check
  the alignment of the TLS variables and adds a pthread_create
  invocation with a user-provided stack. The test itself is migrated
  from test-skeleton.c to test-driver.c, and the missing support
  functions xpthread_attr_setstack and xposix_memalign are added.

  ChangeLog:

  [BZ #23403]
  * nptl/allocatestack.c (allocate_stack): Align pointer pd for
  TLS_TCB_AT_TP tls variant.
  * nptl/tst-tls1.c: Migrate to support/test-driver.c.
  Add alignment checks.
  * support/Makefile (libsupport-routines): Add xposix_memalign and
  xpthread_setstack.
  * support/support.h: Add xposix_memalign.
  * support/xthread.h: Add xpthread_attr_setstack.
  * support/xposix_memalign.c: New file.
  * support/xpthread_attr_setstack.c: Likewise.

  (cherry picked from commit bc79db3fd487daea36e7c130f943cfb9826a41b4)

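  The underlying fix is the usual round-up-to-alignment arithmetic,
  applied so that static TLS data ends up correctly aligned relative
  to the thread pointer. A minimal sketch, assuming align is a power
  of two:

    #include <stdint.h>

    /* Round p up to the next multiple of align.  */
    static inline uintptr_t
    align_up (uintptr_t p, uintptr_t align)
    {
      return (p + align - 1) & ~(align - 1);
    }
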
* mips: Force RWX stack for hard-float builds that can run on pre-4.8 kernels (Dragan Mladjenovic, 2019-11-05; 3 files, -5/+89)

  Linux/MIPS kernels prior to 4.8 could potentially crash the user
  process when doing FPU emulation while running on a non-executable
  user stack.

  Currently, gcc doesn't emit .note.GNU-stack for mips, but that will
  change in the future. To ensure that glibc can be used with such
  future gcc, without silently resulting in binaries that might crash
  at runtime, this patch forces an RWX stack for all built objects if
  configured to run against a minimum kernel version less than 4.8.

  * sysdeps/unix/sysv/linux/mips/Makefile
  (test-xfail-check-execstack): Move under mips-has-gnustack != yes.
  (CFLAGS-.o*, ASFLAGS-.o*): New rules. Apply -Wa,-execstack if
  mips-force-execstack == yes.
  * sysdeps/unix/sysv/linux/mips/configure: Regenerated.
  * sysdeps/unix/sysv/linux/mips/configure.ac (mips-force-execstack):
  New var. Set to yes for hard-float builds with minimum_kernel < 4.8.0
  or minimum_kernel not set at all.
  (mips-has-gnustack): New var. Use value of libc_cv_as_noexecstack
  if mips-force-execstack != yes, otherwise set to no.

  (cherry picked from commit 33bc9efd91de1b14354291fc8ebd5bce96379f12)

* elf: Refuse to dlopen PIE objects [BZ #24323] (Florian Weimer, 2019-11-01; 3 files, -5/+22)

  Another executable has already been mapped, so the dynamic linker
  cannot perform relocations correctly for the second executable.

  (cherry picked from commit 2c75b545de6fe3c44138799c68217a94bc669a88)
  (test omitted due to indirect dependency on test-in-container)

* nss_db: fix endent wrt NULL mappings [BZ #24695] [BZ #24696] (DJ Delorie, 2019-10-31; 2 files, -1/+13)

  nss_db allows getpwent et al. to be called without a set*ent, but it
  only works once. After the last get*ent a set*ent is required to
  restart, because the end*ent did not properly reset the module.
  Resetting it to NULL allows for a proper restart.

  If the database doesn't exist, however, end*ent erroneously called
  munmap, which set errno.

  The test case runs "makedb" inside the testroot, so it needs the
  selinux DSOs installed.

  (cherry picked from commit 99135114ba23c3110b7e4e650fabdc5e639746b7)
  (note: tests excluded as test-in-container infrastructure missing)

* Call _dl_open_check after relocation [BZ #24259] (H.J. Lu, 2019-10-31; 18 files, -5/+387)

  This is a workaround for [BZ #20839], which doesn't remove the
  NODELETE object when _dl_open_check throws an exception. Move it
  after relocation in dl_open_worker to avoid leaving the NODELETE
  object mapped without relocation.

  [BZ #24259]
  * elf/dl-open.c (dl_open_worker): Call _dl_open_check after
  relocation.
  * sysdeps/x86/Makefile (tests): Add tst-cet-legacy-5a,
  tst-cet-legacy-5b, tst-cet-legacy-6a and tst-cet-legacy-6b.
  (modules-names): Add tst-cet-legacy-mod-5a, tst-cet-legacy-mod-5b,
  tst-cet-legacy-mod-5c, tst-cet-legacy-mod-6a, tst-cet-legacy-mod-6b
  and tst-cet-legacy-mod-6c.
  (CFLAGS-tst-cet-legacy-5a.c): New.
  (CFLAGS-tst-cet-legacy-5b.c): Likewise.
  (CFLAGS-tst-cet-legacy-mod-5a.c): Likewise.
  (CFLAGS-tst-cet-legacy-mod-5b.c): Likewise.
  (CFLAGS-tst-cet-legacy-mod-5c.c): Likewise.
  (CFLAGS-tst-cet-legacy-6a.c): Likewise.
  (CFLAGS-tst-cet-legacy-6b.c): Likewise.
  (CFLAGS-tst-cet-legacy-mod-6a.c): Likewise.
  (CFLAGS-tst-cet-legacy-mod-6b.c): Likewise.
  (CFLAGS-tst-cet-legacy-mod-6c.c): Likewise.
  ($(objpfx)tst-cet-legacy-5a): Likewise.
  ($(objpfx)tst-cet-legacy-5a.out): Likewise.
  ($(objpfx)tst-cet-legacy-mod-5a.so): Likewise.
  ($(objpfx)tst-cet-legacy-mod-5b.so): Likewise.
  ($(objpfx)tst-cet-legacy-5b): Likewise.
  ($(objpfx)tst-cet-legacy-5b.out): Likewise.
  (tst-cet-legacy-5b-ENV): Likewise.
  ($(objpfx)tst-cet-legacy-6a): Likewise.
  ($(objpfx)tst-cet-legacy-6a.out): Likewise.
  ($(objpfx)tst-cet-legacy-mod-6a.so): Likewise.
  ($(objpfx)tst-cet-legacy-mod-6b.so): Likewise.
  ($(objpfx)tst-cet-legacy-6b): Likewise.
  ($(objpfx)tst-cet-legacy-6b.out): Likewise.
  (tst-cet-legacy-6b-ENV): Likewise.
  * sysdeps/x86/tst-cet-legacy-5.c: New file.
  * sysdeps/x86/tst-cet-legacy-5a.c: Likewise.
  * sysdeps/x86/tst-cet-legacy-5b.c: Likewise.
  * sysdeps/x86/tst-cet-legacy-6.c: Likewise.
  * sysdeps/x86/tst-cet-legacy-6a.c: Likewise.
  * sysdeps/x86/tst-cet-legacy-6b.c: Likewise.
  * sysdeps/x86/tst-cet-legacy-mod-5.c: Likewise.
  * sysdeps/x86/tst-cet-legacy-mod-5a.c: Likewise.
  * sysdeps/x86/tst-cet-legacy-mod-5b.c: Likewise.
  * sysdeps/x86/tst-cet-legacy-mod-5c.c: Likewise.
  * sysdeps/x86/tst-cet-legacy-mod-6.c: Likewise.
  * sysdeps/x86/tst-cet-legacy-mod-6a.c: Likewise.
  * sysdeps/x86/tst-cet-legacy-mod-6b.c: Likewise.
  * sysdeps/x86/tst-cet-legacy-mod-6c.c: Likewise.

  (cherry picked from commit d0093c5cefb7f7a4143f3bb03743633823229cc6)

* Base max_fast on alignment, not width, of bins (Bug 24903) (DJ Delorie, 2019-10-31; 1 file, -1/+1)

  set_max_fast sets the "impossibly small" value based on, eventually,
  MALLOC_ALIGNMENT. The comparison for the smallest chunk used is,
  eventually, against MIN_CHUNK_SIZE. Note that i386 is the only
  platform where these are the same, so a smallest chunk *would* be
  put in a no-fastbins fastbin.

  This change calculates the "impossibly small" value based on
  MIN_CHUNK_SIZE instead, so that we can know it will always be
  impossibly small.

  (cherry picked from commit ff12e0fb91b9072800f031cb21fb2651ee7b6251)

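  Conceptually (a hedged sketch, not the exact malloc.c diff), the
  zero-request threshold should be derived from the smallest chunk
  that can exist:

    /* Sketch: MIN_CHUNK_SIZE, not MALLOC_ALIGNMENT, bounds how small
       a real chunk can be, so derive the "fastbins disabled"
       threshold from it.  max_fast_for is a hypothetical helper.  */
    static size_t
    max_fast_for (size_t request, size_t min_chunk_size)
    {
      if (request == 0)
        return min_chunk_size / 2;  /* impossibly small: below any chunk */
      return request;               /* (real code also rounds/aligns)    */
    }
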
* malloc: Fix missing accounting of top chunk in malloc_info [BZ #24026] (Niklas Hambüchen, 2019-10-30; 2 files, -0/+12)

  Fixes <total type="rest" size="..."> incorrectly showing as 0 most
  of the time.

  The rest value being wrong is significant because to compute the
  actual amount of memory handed out via malloc, the user must
  subtract it from <system type="current" size="...">. That result
  being wrong makes investigating memory fragmentation issues like
  <https://bugzilla.redhat.com/show_bug.cgi?id=843478> close to
  impossible.

  (cherry picked from commit b6d2c4475d5abc05dd009575b90556bdd3c78ad0)

* malloc: Remove unwanted leading whitespace in malloc_info [BZ #24867] (Florian Weimer, 2019-10-30; 2 files, -1/+7)

  It was introduced in commit 6c8dbf00f536d78b1937b5af6f57be47fd376344
  ("Reformat malloc to gnu style.").

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
  (cherry picked from commit b0f6679bcd738ea244a14acd879d974901e56c8e)

* malloc: Various cleanups for malloc/tst-mxfast (Florian Weimer, 2019-10-30; 3 files, -9/+16)

  (cherry picked from commit f9769a239784772453d595bc2f4bed8739810e06)

* Add glibc.malloc.mxfast tunable (DJ Delorie, 2019-10-30; 7 files, -7/+96)

  * elf/dl-tunables.list: Add glibc.malloc.mxfast.
  * manual/tunables.texi: Document it.
  * malloc/malloc.c (do_set_mxfast): New.
  (__libc_mallopt): Call it.
  * malloc/arena.c: Add mxfast tunable.
  * malloc/tst-mxfast.c: New.
  * malloc/Makefile: Add it.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
  (cherry picked from commit c48d92b430c480de06762f80c104922239416826)

* nscd: avoid assertion failure during persistent db check (Andreas Schwab, 2019-10-30; 2 files, -1/+6)

  nscd should not abort when it finds inconsistencies in the
  persistent db.

  (cherry picked from commit 61595e3d36ded374f97961503e843a314b0203c2)

* Small tcache improvements (Wilco Dijkstra, 2019-10-30; 3 files, -9/+15)

  Change the tcache->counts[] entries to uint16_t - this removes the
  limit set by char and allows a larger tcache. Remove a few redundant
  asserts.

  bench-malloc-thread with 4 threads is ~15% faster on Cortex-A72.

  Reviewed-by: DJ Delorie <dj@redhat.com>

  * malloc/malloc.c (MAX_TCACHE_COUNT): Increase to UINT16_MAX.
  (tcache_put): Remove redundant assert.
  (tcache_get): Remove redundant asserts.
  (__libc_malloc): Check tcache count is not zero.
  * manual/tunables.texi (glibc.malloc.tcache_count): Update maximum.

  (cherry picked from commit 1f50f2ad854c84ead522bfc7331b46dbe6057d53)

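  The data-structure change amounts to widening the per-bin counters;
  a sketch of the shape of the per-thread cache (field layout
  simplified from malloc.c):

    #include <stdint.h>

    #define TCACHE_MAX_BINS 64

    typedef struct tcache_entry
    {
      struct tcache_entry *next;   /* singly linked free list */
    } tcache_entry;

    typedef struct
    {
      /* Was 'char counts[...]', which capped usable counts far below
         the new UINT16_MAX limit and could overflow.  */
      uint16_t counts[TCACHE_MAX_BINS];
      tcache_entry *entries[TCACHE_MAX_BINS];
    } tcache_perthread_struct;
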
* Fix assertion in malloc.c:tcache_get. (Joseph Myers, 2019-10-30; 2 files, -1/+6)

  One of the warnings that appears with -Wextra is "ordered comparison
  of pointer with integer zero" in malloc.c:tcache_get, for the
  assertion:

    assert (tcache->entries[tc_idx] > 0);

  Indeed, a "> 0" comparison does not make sense for
  tcache->entries[tc_idx], which is a pointer. My guess is that
  tcache->counts[tc_idx] is what's intended here, and this patch
  changes the assertion accordingly.

  Tested for x86_64.

  * malloc/malloc.c (tcache_get): Compare tcache->counts[tc_idx]
  with 0, not tcache->entries[tc_idx].

  (cherry picked from commit 77dc0d8643aa99c92bf671352b0a8adde705896f)

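  The change, spelled out (context: tcache_get in malloc/malloc.c):

    /* Before: ordered comparison of a pointer against zero.  */
    assert (tcache->entries[tc_idx] > 0);

    /* After: compare the count, which is what the invariant means.  */
    assert (tcache->counts[tc_idx] > 0);
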
* Improve performance of memmem (Wilco Dijkstra, 2019-09-13; 2 files, -42/+89)

  This patch significantly improves performance of memmem using a
  novel modified Horspool algorithm. Needles up to size 256 use a
  bad-character table indexed by hashed pairs of characters to quickly
  skip past mismatches. Long needles use a self-adapting filtering
  step to avoid comparing the whole needle repeatedly.

  By limiting the needle length to 256, the shift table only requires
  8 bits per entry, lowering preprocessing overhead and minimizing
  cache effects. This limit also implies worst-case performance is
  linear.

  Small needles up to size 2 use a dedicated linear search. Very long
  needles use the Two-Way algorithm (to avoid increasing stack size or
  slowing down the common case, inlining is disabled).

  The performance gain is 6.6 times on English text on AArch64 using
  random needles with average size 8.

  Tested against GLIBC testsuite and randomized tests.

  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>

  * string/memmem.c (__memmem): Rewrite to improve performance.

  (cherry picked from commit 680942b0167715e123d934b609060cd382f8e39f)

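  For context, a plain (unhashed) Horspool search over bytes looks
  like the sketch below; the glibc version additionally hashes byte
  pairs into an 8-bit shift table and adds the small-needle and
  Two-Way paths described above:

    #include <stddef.h>
    #include <string.h>

    static void *
    memmem_horspool (const void *hs, size_t hs_len,
                     const void *ne, size_t ne_len)
    {
      const unsigned char *h = hs, *n = ne;
      size_t shift[256];

      if (ne_len == 0)
        return (void *) h;
      if (hs_len < ne_len)
        return NULL;

      /* Bad-character table: distance from each byte's last
         occurrence (final needle byte excluded) to the needle end.  */
      for (size_t i = 0; i < 256; i++)
        shift[i] = ne_len;
      for (size_t i = 0; i + 1 < ne_len; i++)
        shift[n[i]] = ne_len - 1 - i;

      /* Compare the window's last byte first, then the rest; on a
         mismatch, skip by the shift of the byte under the window end.  */
      for (size_t pos = 0; pos <= hs_len - ne_len;
           pos += shift[h[pos + ne_len - 1]])
        if (h[pos + ne_len - 1] == n[ne_len - 1]
            && memcmp (h + pos, n, ne_len - 1) == 0)
          return (void *) (h + pos);
      return NULL;
    }
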
* Improve performance of strstr (Wilco Dijkstra, 2019-09-13; 3 files, -51/+132)

  This patch significantly improves performance of strstr using a
  novel modified Horspool algorithm. Needles up to size 256 use a
  bad-character table indexed by hashed pairs of characters to quickly
  skip past mismatches. Long needles use a self-adapting filtering
  step to avoid comparing the whole needle repeatedly.

  By limiting the needle length to 256, the shift table only requires
  8 bits per entry, lowering preprocessing overhead and minimizing
  cache effects. This limit also implies worst-case performance is
  linear.

  Small needles up to size 3 use a dedicated linear search. Very long
  needles use the Two-Way algorithm.

  The performance gain using the improved bench-strstr on Cortex-A72
  is 5.8 times basic_strstr and 3.7 times twoway_strstr.

  Tested against GLIBC testsuite, randomized tests and the GNULIB
  strstr test
  (https://git.savannah.gnu.org/cgit/gnulib.git/tree/tests/test-strstr.c).

  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>

  * string/str-two-way.h (two_way_short_needle): Add inline to avoid
  warning.
  (two_way_long_needle): Block inlining.
  * string/strstr.c (strstr2): Add new function.
  (strstr3): Likewise.
  (STRSTR): Completely rewrite strstr to improve performance.

  (cherry picked from commit 5e0a7ecb6629461b28adc1a5aabcc0ede122f201)

* Speedup first memmem match (Rajalakshmi Srinivasaraghavan, 2019-09-13; 2 files, -0/+8)

  As done in commit 284f42bc778e487dfd5dff5c01959f93b9e0c4f5, memcmp
  can be used after memchr to avoid the initialization overhead of the
  two-way algorithm for the first match. This has shown a >40%
  improvement for the first match.

  (cherry picked from commit c8dd67e7c958de04c3783cbea7c384431707b5f8)

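  A sketch of the shortcut (names are illustrative): locate the first
  occurrence of the needle's first byte with memchr, then verify the
  candidate with memcmp before paying the Two-Way preprocessing cost:

    #include <string.h>

    /* Assumes ne_len >= 1.  Returns the match if the very first
       candidate succeeds; the caller falls back to Two-Way otherwise.  */
    static void *
    first_match_shortcut (const unsigned char *hs, size_t hs_len,
                          const unsigned char *ne, size_t ne_len)
    {
      const unsigned char *p = memchr (hs, ne[0], hs_len);
      if (p != NULL
          && (size_t) (hs + hs_len - p) >= ne_len
          && memcmp (p, ne, ne_len) == 0)
        return (void *) p;
      return NULL;
    }
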
* Simplify and speedup strstr/strcasestr first match (Wilco Dijkstra, 2019-09-13; 3 files, -45/+40)

  Looking at the benchtests, both strstr and strcasestr spend a lot of
  time in a slow initialization loop handling one character per
  iteration. This can be simplified to use the much faster
  strlen/strnlen/strchr/memcmp. Read ahead a few cachelines to reduce
  the number of strnlen calls, which improves performance by ~3-4%.
  This patch improves the time taken for the full strstr benchtest by
  >40%.

  * string/strcasestr.c (STRCASESTR): Simplify and speedup first
  match.
  * string/strstr.c (AVAILABLE): Likewise.

  (cherry picked from commit 284f42bc778e487dfd5dff5c01959f93b9e0c4f5)

* [AArch64] Add ifunc support for Ares (Wilco Dijkstra, 2019-09-06; 5 files, -2/+15)

  Add Ares to the midr_el0 list and support ifunc dispatch. Since Ares
  supports 2 128-bit loads/stores, use Neon registers for memcpy by
  selecting __memcpy_falkor by default (we should rename this to
  __memcpy_simd or similar).

  * manual/tunables.texi (glibc.cpu.name): Add ares tunable.
  * sysdeps/aarch64/multiarch/memcpy.c (__libc_memcpy): Use
  __memcpy_falkor for ares.
  * sysdeps/unix/sysv/linux/aarch64/cpu-features.h (IS_ARES): Add
  new define.
  * sysdeps/unix/sysv/linux/aarch64/cpu-features.c (cpu_list): Add
  ares cpu.

  (cherry picked from commit 02f440c1ef5d5d79552a524065aa3e2fabe469b9)

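  A self-contained sketch of GNU ifunc dispatch of the kind used here;
  the CPU probe and variant names below are placeholders, not glibc's
  actual selector:

    #include <stddef.h>
    #include <string.h>

    /* Two stand-in variants; in glibc these are per-CPU assembly
       implementations.  */
    static void *
    memcpy_generic (void *d, const void *s, size_t n)
    { return memcpy (d, s, n); }

    static void *
    memcpy_simd (void *d, const void *s, size_t n)
    { return memcpy (d, s, n); }

    static int
    cpu_is_ares (void)
    {
      return 0;   /* placeholder: glibc reads MIDR via cpu-features */
    }

    typedef void *(*memcpy_fn) (void *, const void *, size_t);

    /* Resolver: runs once, at relocation time, and returns the
       variant the dynamic linker should bind.  */
    static memcpy_fn
    resolve_memcpy (void)
    {
      return cpu_is_ares () ? memcpy_simd : memcpy_generic;
    }

    /* Calls to my_memcpy dispatch to the resolved variant.  */
    void *my_memcpy (void *, const void *, size_t)
         __attribute__ ((ifunc ("resolve_memcpy")));
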
* posix: Fix large mmap64 offset for mips64n32 (BZ#24699) (Adhemerval Zanella, 2019-07-12; 5 files, -5/+53)

  The fix for BZ#21270 (commit 158d5fa0e19) added a mask to avoid
  offsets larger than 2^44 being used along with __NR_mmap2. However,
  mips64n32 uses __NR_mmap, like mips64n64, but still defines off_t as
  the old non-LFS type (other ILP32 ABIs, such as x32, define off_t
  equal to off64_t). This leads to using the mask meant only for
  __NR_mmap2 calls with __NR_mmap as well, thus limiting the maximum
  offset that can be used with mmap64.

  This patch fixes it by setting the high mask only for __NR_mmap2
  usage. The posix/tst-mmap-offset.c test already covers it and also
  failed for mips64n32. The patch also changes the test to check for
  an arch-specific header that defines the maximum supported offset.

  Checked on x86_64-linux-gnu and i686-linux-gnu; I also tested
  tst-mmap-offset on QEMU-simulated mips64 with a 3.2.0 kernel for
  both mips-linux-gnu and mips64-n32-linux-gnu.

  [BZ #24699]
  * posix/tst-mmap-offset.c: Mention BZ #24699.
  (do_test_bz21270): Rename to do_test_large_offset and use
  mmap64_maximum_offset to check for maximum expected offset value.
  * sysdeps/generic/mmap_info.h: New file.
  * sysdeps/unix/sysv/linux/mips/mmap_info.h: Likewise.
  * sysdeps/unix/sysv/linux/mmap64.c (MMAP_OFF_HIGH_MASK): Define
  iff __NR_mmap2 is used.

  (cherry picked from commit a008c76b56e4f958cf5a0d6f67d29fade89421b7)

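  The 2^44 limit comes from mmap2 passing its offset in 4096-byte
  units through a 32-bit argument. A conceptual sketch of the check
  that only applies on __NR_mmap2 paths (not the actual sysdeps
  macros):

    #include <errno.h>
    #include <stdint.h>

    /* Sketch: mmap2 takes offset >> 12 in a 32-bit register, so a
       byte offset must be 4 KiB aligned and below 2^44.  Plain
       __NR_mmap takes a byte offset and needs no such mask.  */
    static int
    mmap2_offset_ok (uint64_t offset)
    {
      if ((offset & 0xfff) != 0 || (offset >> 12) > UINT32_MAX)
        {
          errno = EINVAL;
          return 0;
        }
      return 1;
    }
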
* aarch64: handle STO_AARCH64_VARIANT_PCS (Szabolcs Nagy, 2019-07-12; 2 files, -4/+36)

  Backport of commit 82bc69c012838a381c4167c156a06f4598f34227 and
  commit 30ba0375464f34e4bf8129f3d3dc14d0c09add17 without using
  DT_AARCH64_VARIANT_PCS for optimizing the symbol table check. This
  is needed so the internal ABI between ld.so and libc.so is
  unchanged.

  Avoid lazy binding of symbols that may follow a variant PCS with
  different register usage convention from the base PCS.

  Currently the lazy binding entry code does not preserve all the
  registers required for AdvSIMD and SVE vector calls. Saving and
  restoring all registers unconditionally may break existing binaries,
  even if they never use vector calls, because of the larger stack
  requirement for lazy resolution, which can be significant on an SVE
  system.

  The solution is to mark all symbols in the symbol table that may
  follow a variant PCS so the dynamic linker can handle them
  specially. In this patch such symbols are always resolved at load
  time, not lazily.

  So currently LD_AUDIT for variant PCS symbols is not supported; for
  that the _dl_runtime_profile entry needs to be changed, e.g. to
  unconditionally save/restore all registers (but pass down arg and
  retval registers to pltentry/exit callbacks according to the base
  PCS).

  This patch also removes a __builtin_expect from the modified code
  because the branch prediction hint did not seem useful.

  * sysdeps/aarch64/dl-machine.h (elf_machine_lazy_rel): Check
  STO_AARCH64_VARIANT_PCS and bind such symbols at load time.

* aarch64: add STO_AARCH64_VARIANT_PCS and DT_AARCH64_VARIANT_PCS (Szabolcs Nagy, 2019-07-10; 2 files, -0/+12)

  STO_AARCH64_VARIANT_PCS is a non-visibility st_other flag for
  marking symbols that reference functions that may follow a variant
  PCS with different register usage convention from the base PCS.

  DT_AARCH64_VARIANT_PCS is a dynamic tag that marks ELF modules that
  have R_*_JUMP_SLOT relocations for symbols marked with
  STO_AARCH64_VARIANT_PCS (i.e. have variant PCS calls via a PLT).

  * elf/elf.h (STO_AARCH64_VARIANT_PCS): Define.
  (DT_AARCH64_VARIANT_PCS): Define.

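  Since the marker lives in the non-visibility bits of st_other, a
  loader-side test is a simple bit check; a sketch (0x80 matches the
  elf.h definition):

    #include <elf.h>

    #ifndef STO_AARCH64_VARIANT_PCS
    # define STO_AARCH64_VARIANT_PCS 0x80  /* non-visibility st_other bit */
    #endif

    /* Sketch: symbols marked this way must be resolved eagerly,
       never through the lazy PLT path.  */
    static int
    needs_eager_binding (const Elf64_Sym *sym)
    {
      return (sym->st_other & STO_AARCH64_VARIANT_PCS) != 0;
    }
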
* io: Remove copy_file_range emulation [BZ #24744] (Florian Weimer, 2019-07-09; 12 files, -776/+78)

  The kernel is evolving this interface (e.g., removal of the
  restriction on cross-device copies), and keeping up with that is
  difficult. Applications which need the function should run kernels
  which support the system call instead of relying on the imperfect
  glibc emulation.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
  (cherry picked from commit 5a659ccc0ec217ab02a4c273a1f6d346a359560a)

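  With the emulation removed, applications must handle the
  no-kernel-support case themselves; a common calling pattern
  (sketch):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <unistd.h>

    /* Sketch: after this change, copy_file_range fails with ENOSYS on
       kernels without the syscall (and with EXDEV for cross-device
       copies on pre-5.3 kernels), so callers need a read/write
       fallback.  */
    static ssize_t
    copy_with_fallback (int fd_in, int fd_out, size_t len)
    {
      ssize_t n = copy_file_range (fd_in, NULL, fd_out, NULL, len, 0);
      if (n < 0 && (errno == ENOSYS || errno == EXDEV))
        {
          /* ... fall back to a plain read/write loop here ... */
        }
      return n;
    }
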
* libio: do not attempt to free wide buffers of legacy streams [BZ #24228] (Dmitry V. Levin, 2019-06-20; 6 files, -5/+77)

  Commit a601b74d31ca086de38441d316a3dee24c866305 aka glibc-2.23~693
  ("In preparation for fixing BZ#16734, fix failure in
  misc/tst-error1-mem when _G_HAVE_MMAP is turned off.") introduced a
  regression: _IO_unbuffer_all now invokes _IO_wsetb to free wide
  buffers of all files, including legacy standard files which are
  small statically allocated objects that do not have wide buffers
  and the _mode member, causing memory corruption.

  Another memory corruption in _IO_unbuffer_all happens when -1 is
  assigned to the _mode member of legacy standard files that do not
  have it.

  [BZ #24228]
  * libio/genops.c (_IO_unbuffer_all)
  [SHLIB_COMPAT (libc, GLIBC_2_0, GLIBC_2_1)]: Do not attempt to free
  wide buffers and access _IO_FILE_complete members of legacy libio
  streams.
  * libio/tst-bz24228.c: New file.
  * libio/tst-bz24228.map: Likewise.
  * libio/Makefile [build-shared] (tests): Add tst-bz24228.
  [build-shared] (generated): Add tst-bz24228.mtrace and
  tst-bz24228.check.
  [run-built-tests && build-shared] (tests-special): Add
  $(objpfx)tst-bz24228-mem.out.
  (LDFLAGS-tst-bz24228, tst-bz24228-ENV): New variables.
  ($(objpfx)tst-bz24228-mem.out): New rule.

  (cherry picked from commit 21cc130b78a4db9113fb6695e2b951e697662440)

* Fix tcache count maximum (BZ #24531) (Wilco Dijkstra, 2019-05-22; 3 files, -4/+16)

  The tcache counts[] array is a char, which has a very small range
  and thus may overflow. When setting the tcache_count tunable, there
  is no overflow check. However, the tunable must not be larger than
  the maximum value of the tcache counts[] array, otherwise it can
  overflow when filling the tcache.

  [BZ #24531]
  * malloc/malloc.c (MAX_TCACHE_COUNT): New define.
  (do_set_tcache_count): Only update if count is small enough.
  * manual/tunables.texi (glibc.malloc.tcache_count): Document max
  value.

  (cherry picked from commit 5ad533e8e65092be962e414e0417112c65d154fb)

* dlfcn: Guard __dlerror_main_freeres with __libc_once_get (once) [BZ#24476] (Mark Wielaard, 2019-05-16; 3 files, -8/+30)

  dlerror.c (__dlerror_main_freeres) will try to free resources which
  have only been initialized when init () has been called. That
  function is called when resources are needed, using __libc_once
  (once, init), where once is a __libc_once_define (static, once) in
  the dlerror.c file. Trying to free those resources if init () hasn't
  been called will produce errors under valgrind memcheck. So guard
  the freeing of those resources using __libc_once_get (once) and make
  sure we have a valid key. Also add a similar guard to __dlerror ().

  * dlfcn/dlerror.c (__dlerror_main_freeres): Guard using
  __libc_once_get (once) and static_buf == NULL.
  (__dlerror): Check we have a valid key, set result to static_buf
  otherwise.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
  (cherry picked from commit 11b451c8868d8a2b0edc5dfd44fc58d9ee538be0)

* Fix crash in _IO_wfile_sync (bug 20568) (Andreas Schwab, 2019-05-15; 6 files, -3/+57)

  When computing the length of the converted part of the stdio buffer,
  use the number of consumed wide characters, not the (negative)
  distance to the end of the wide buffer.

  (cherry picked from commit 32ff397533715988c19cbf3675dcbd727ec13e18)

* malloc: Check for large bin list corruption when inserting unsorted chunk (Adam Maris, 2019-05-02; 1 file, -0/+4)

  Fixes bug 24216. This patch adds security checks for the bk and
  bk_nextsize pointers of chunks in the large bin when inserting a
  chunk from the unsorted bin. It was possible to write the pointer to
  victim (the newly inserted chunk) to arbitrary memory locations if
  the bk or bk_nextsize pointers of the next large bin chunk got
  corrupted.

  (cherry picked from commit 5b06f538c5aee0389ed034f60d90a8884d6d54de)

* malloc: Check the alignment of mmapped chunks before unmapping. (Istvan Kurucsai, 2019-05-02; 2 files, -1/+8)

  * malloc/malloc.c (munmap_chunk): Verify chunk alignment.

  (cherry picked from commit c0e82f117357a941e4d40fcc08babbd6a3c3a1b5)

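  A simplified sketch of the kind of integrity check this adds (not
  the exact malloc.c code): a legitimate chunk pointer must satisfy
  malloc's alignment invariant, so a misaligned one indicates
  corruption:

    #include <stdint.h>

    /* Sketch: MALLOC_ALIGNMENT is 2 * sizeof (size_t) on typical
       targets (assumption here); a user pointer violating it cannot
       belong to a valid mmapped chunk, so munmap_chunk should abort.  */
    static int
    chunk_alignment_ok (void *mem)
    {
      return ((uintptr_t) mem & (2 * sizeof (size_t) - 1)) == 0;
    }
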
* malloc: Add more integrity checks to mremap_chunk. (Istvan Kurucsai, 2019-05-02; 2 files, -3/+13)

  * malloc/malloc.c (mremap_chunk): Additional checks.

  (cherry picked from commit ebe544bf6e8eec35e754fd49efb027c6f161b6cb)

* malloc: Add ChangeLog for accidentally committed change (Florian Weimer, 2019-05-02; 2 files, -1/+5)

  Commit b90ddd08f6dd688e651df9ee89ca3a69ff88cd0c ("malloc: Additional
  checks for unsorted bin integrity I.") was committed without a
  whitespace fix, so it is adjusted here as well.

  (cherry picked from commit 35cfefd96062145eeb8aee6bd72d07e0909a6b2e)

* elf: Fix pldd (BZ#18035) (Adhemerval Zanella, 2019-04-26; 4 files, -108/+82)

  Since 9182aa67994 (Fix vDSO l_name for GDB's, BZ#387) the initial
  link_map for the executable itself and the loader will have both
  l_name and l_libname->name holding the same value, due to:

    elf/dl-object.c
     95   new->l_name = *realname ? realname : (char *) newname->name + libname_len - 1;

  since newname->name points to new->l_libname->name. This leads pldd
  into an infinite loop at:

    elf/pldd-xx.c
    203  again:
    204    while (1)
    205      {
    206        ssize_t n = pread64 (memfd, tmpbuf.data, tmpbuf.length, name_offset);

    228        /* Try the l_libname element.  */
    229        struct E(libname_list) ln;
    230        if (pread64 (memfd, &ln, sizeof (ln), m.l_libname) == sizeof (ln))
    231          {
    232            name_offset = ln.name;
    233            goto again;
    234          }

  since the value at ln.name (l_libname->name) will be the same as
  previously read. The straightforward fix is just to avoid the check
  and read the next list entry.

  I also checked against binaries issued with old loaders with the fix
  for BZ#387, and pldd could dump the shared objects.

  Checked on x86_64-linux-gnu, i686-linux-gnu, aarch64-linux-gnu, and
  powerpc64le-linux-gnu.

  [BZ #18035]
  * elf/Makefile (tests-container): Add tst-pldd.
  * elf/pldd-xx.c: Use _Static_assert in place of pldd_assert.
  (E(find_maps)): Avoid using alloca, use default read file
  operations instead of explicit LFS names, and fix an infinite loop.
  * elf/pldd.c: Explicitly set _FILE_OFFSET_BITS, clean up headers.
  (get_process_info): Use _Static_assert instead of assert, use
  default directory operations instead of explicit LFS names, and
  free some leaked pointers.
  * elf/tst-pldd.c: New file.

  (cherry picked from commit 1a4c27355e146b6d8cc6487b998462c7fdd1048f)
  (Backported without the test case due to lack of test-in-container
  support.)

* Revert "memusagestat: use local glibc when linking [BZ #18465]"Florian Weimer2019-04-253-10/+2
| | | | | | | | This reverts commit 630d7201ceb12f8dcdbe20abce67e1333c5e15ee. The position of the -Wl,-rpath-link= options on the linker command line is not correct, so the new way of linking memusagestat does not always work.
* memusagestat: use local glibc when linking [BZ #18465] (Mike Frysinger, 2019-04-24; 3 files, -2/+10)

  The memusagestat is the only binary that has its own link line,
  which causes it to be linked against the existing installed C
  library. It has been this way since it was originally committed in
  1999, but I don't see any reason as to why. Since we want all the
  programs we build locally to be linked against the new copy of
  glibc, change the build to be like all other programs.

  (cherry picked from commit f9b645b4b0a10c43753296ce3fa40053fa44606a)

* ja_JP locale: Add entry for the new Japanese era [BZ #22964] (TAMUKI Shoichi, 2019-04-03; 3 files, -1/+13)

  The Japanese era name will be changed on May 1, 2019. The Japanese
  government made a preliminary announcement on April 1, 2019.

  The glibc ja_JP locale must be updated to include the new era name
  for strftime's alternative year format support.

  This is a minimal cherry pick of just the required locale changes.

  (cherry picked from commit 466afec30896585b60c2106df7a722a86247c9f3)

* S390: Mark vx and vxe as important hwcap. (Stefan Liebler, 2019-03-21; 2 files, -1/+7)

  This patch adds vx and vxe as important hwcaps, which allows one to
  provide shared libraries tuned for platforms with non-vx/-vxe, vx,
  or vxe.

  ChangeLog:

  * sysdeps/s390/dl-procinfo.h (HWCAP_IMPORTANT):
  Add HWCAP_S390_VX and HWCAP_S390_VXE.

  (cherry picked from commit 61f5e9470fb397a4c334938ac5a667427d9047df)

  Conflicts:
    ChangeLog