path: root/sysdeps
* nptl: Remove exit-thread.h  (Adhemerval Zanella, 2021-06-04; 3 files, -66/+2)
  No functional change.  The code is used only for Linux, even though it
  is included from generic code.
* dlfcn: Rework static dlopen hooks  (Florian Weimer, 2021-06-03; 1 file, -0/+3)
  Consolidate all hooks structures into a single one.  There are no
  static dlopen ABI concerns because glibc 2.34 already comes with
  substantial ABI-incompatible changes in this area.  (Static dlopen
  requires the exact same dynamic glibc version that was used for
  static linking.)

  The new approach stores a pointer to the hooks structure in
  _rtld_global_ro and initializes it in __rtld_static_init.  This
  avoids a back-and-forth with various callback functions.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
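  A generic sketch of the pattern (the struct and field names below are
  illustrative, not glibc's internal layout):

      /* One consolidated hooks structure instead of several
         independent callback registrations.  */
      struct dlfcn_hooks
      {
        void *(*dlopen) (const char *file, int mode);
        void *(*dlsym) (void *handle, const char *name);
        int (*dlclose) (void *handle);
      };

      /* The dynamic loader keeps a single pointer to the hooks in its
         read-only globals; static dlopen fills it in during startup.  */
      struct rtld_global_ro_sketch
      {
        const struct dlfcn_hooks *_dl_dlfcn_hooks;
      };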
* dlfcn: Cleanups after -ldl is no longer required  (Florian Weimer, 2021-06-03; 3 files, -15/+5)
  This commit removes the ELF constructor and internal variables from
  dlfcn/dlfcn.c.  The file now serves the same purpose as
  nptl/libpthread-compat.c, so it is renamed to dlfcn/libdl-compat.c.
  The use of libdl-shared-only-routines ensures that libdl.a is empty.

  This commit adjusts the test suite not to use $(libdl).  The libdl.so
  symbolic link is no longer installed.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* dlfcn: Move dlopen into libc  (Florian Weimer, 2021-06-03; 63 files, -43/+125)
  The symbol was moved using scripts/move-symbol-to-libc.py.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* dlfcn: Move dlvsym into libc  (Florian Weimer, 2021-06-03; 63 files, -30/+66)
  The symbol was moved using scripts/move-symbol-to-libc.py.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* dlfcn: Move dlinfo into libc  (Florian Weimer, 2021-06-03; 63 files, -30/+83)
  The symbol was moved using scripts/move-symbol-to-libc.py.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* dlfcn: Move dladdr1 into libc  (Florian Weimer, 2021-06-03; 63 files, -30/+66)
  The symbol was moved using scripts/move-symbol-to-libc.py.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* dlfcn: Move dlmopen into libc  (Florian Weimer, 2021-06-03; 63 files, -30/+83)
  The symbol was moved using scripts/move-symbol-to-libc.py.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* dlfcn: Move dlsym into libc  (Florian Weimer, 2021-06-03; 63 files, -30/+66)
  The symbol was moved using scripts/move-symbol-to-libc.py.

  In elf/Makefile, remove the $(libdl) dependency from testobj1.so
  because the now-unused libdl DSO causes elf/tst-unused-deps to fail.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* dlfcn: Move dladdr into libc  (Florian Weimer, 2021-06-03; 63 files, -30/+66)
  The symbol was moved using scripts/move-symbol-to-libc.py.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* dlfcn: Move dlclose into libc  (Florian Weimer, 2021-06-03; 63 files, -30/+66)
  The symbol was moved using scripts/move-symbol-to-libc.py.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* dlfcn: Move dlerror into libc  (Florian Weimer, 2021-06-02; 63 files, -30/+66)
  The symbol was moved using scripts/move-symbol-to-libc.py.

  There is a minor functionality enhancement: dlerror now sets errno if
  it was set as part of the exception.  (This is the result of using %m
  in asprintf, to avoid the strerror PLT call.)  The previous errno
  value upon function return was unpredictable.

  Documenting this as a feature is premature; we need to make sure that
  the error codes are meaningful when they are set by the dynamic
  loader.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
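  A small usage sketch of the observable behavior described above; the
  exact errno value depends on why the lookup failed, so treat it as
  informational only:

      #include <dlfcn.h>
      #include <errno.h>
      #include <stdio.h>
      #include <string.h>

      int main (void)
      {
        if (dlopen ("/nonexistent/library.so", RTLD_NOW) == NULL)
          {
            const char *msg = dlerror ();
            int err = errno;   /* may now reflect the underlying error */
            printf ("dlerror: %s\n", msg);
            if (err != 0)
              printf ("errno:   %s\n", strerror (err));
          }
        return 0;
      }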
* Add libc ABI extension kludge for baseline-violating libdl symbols  (Florian Weimer, 2021-06-02; 4 files, -0/+4)
  Some targets have a GLIBC_2.0 baseline for libdl, while using
  GLIBC_2.2 for libc.  This means that the generated libc.map file does
  not have any version nodes for GLIBC_2.0 or GLIBC_2.1.  However,
  moving symbols from libdl into libc needs such version nodes.
  (Future symbol moves from librt will need this as well.)

  This kludge is only necessary for symbols predating GLIBC_2.2 because
  the affected targets use GLIBC_2.2 as the baseline for libc.  Given
  the small number and fixed set of affected architectures, no generic
  mechanism is implemented, and instead the map file fragment is
  hard-coded in scripts/versions.mk.

  The compat_symbol macro already emits the appropriate version
  strings, so no adjustments are needed there.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
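  For illustration, such a fragment amounts to empty version nodes that
  pre-2.2 compat symbols can attach to; the actual text lives in
  scripts/versions.mk and may differ from this hypothetical shape:

      GLIBC_2.0 {
      };
      GLIBC_2.1 {
      } GLIBC_2.0;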
* Add missing symbols to Version files  (Florian Weimer, 2021-06-02; 13 files, -47/+55)
  Some symbols have explicit versioned_symbol or compat_symbol markers
  in the sources, but no corresponding entry in the Versions files.
  This presently works because the local: * directive is only applied
  to the base version.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Fix use of __pthread_attr_copy in mq_notify (bug 27896)  (Florian Weimer, 2021-06-02; 1 file, -2/+9)
  __pthread_attr_copy can fail and does not initialize the attribute
  structure in that case.

  If __pthread_attr_copy is never called and there is no allocated
  attribute, pthread_attr_destroy should not be called, otherwise there
  is a null pointer dereference in rt/tst-mqueue6.

  Fixes commit 42d359350510506b87101cf77202fefcbfc790cb ("Use
  __pthread_attr_copy in mq_notify (bug 27896)").

  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
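  A simplified sketch of the corrected shape of the logic; the names
  are illustrative, and attr_copy stands in for the internal
  __pthread_attr_copy helper (assumed to return 0 on success):

      #include <pthread.h>
      #include <stdbool.h>

      /* Stand-in for the internal deep-copy helper.  */
      static int attr_copy (pthread_attr_t *dst, const pthread_attr_t *src)
      {
        *dst = *src;   /* a shallow copy is enough for this sketch */
        return 0;
      }

      static int notify_setup (const pthread_attr_t *user_attr)
      {
        pthread_attr_t copy;
        bool copy_valid = false;

        if (user_attr != NULL)
          {
            if (attr_copy (&copy, user_attr) != 0)
              return -1;            /* copy failed: nothing to destroy */
            copy_valid = true;
          }

        /* ... create the notification thread, using &copy if copy_valid ... */

        if (copy_valid)
          pthread_attr_destroy (&copy);  /* destroy only what was initialized */
        return 0;
      }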
* Use __pthread_attr_copy in mq_notify (bug 27896)  (Andreas Schwab, 2021-06-01; 1 file, -5/+10)
  Make a deep copy of the pthread attribute object to remove a
  potential use-after-free issue.
* stdio-common: Remove _IO_vfwscanf  (Florian Weimer, 2021-06-01; 1 file, -1/+0)
  The symbol has never been exported, so no compatibility symbol is
  needed.  Removing this file prevents ld from creating an exported
  symbol in case GLIBC_2_0 expands to a symbol version which does not
  have a local: *; directive in the symbol version map file.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* aarch64: align stack in clone [BZ #27939]  (Szabolcs Nagy, 2021-06-01; 1 file, -0/+2)
  The AArch64 PCS requires a 16-byte aligned stack.  Previously, if the
  caller passed an unaligned stack to clone, the child crashed.

  Fixes bug 27939.
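  The commit adds this alignment inside the glibc clone wrapper; for
  callers, the defensive pattern looks roughly like this sketch (not
  glibc code):

      #define _GNU_SOURCE
      #include <sched.h>
      #include <signal.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/wait.h>

      static int child_fn (void *arg)
      {
        puts ("child running on an aligned stack");
        return 0;
      }

      int main (void)
      {
        enum { STACK_SIZE = 1024 * 1024 };
        char *stack = malloc (STACK_SIZE);
        if (stack == NULL)
          return 1;

        /* clone receives the top of the stack; round it down to a
           16-byte boundary so the child satisfies the PCS even if the
           allocation or size is odd.  */
        uintptr_t top = (uintptr_t) stack + STACK_SIZE;
        void *aligned_top = (void *) (top & ~(uintptr_t) 15);

        pid_t pid = clone (child_fn, aligned_top, SIGCHLD, NULL);
        if (pid == -1)
          return 1;
        waitpid (pid, NULL, 0);
        free (stack);
        return 0;
      }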
* powerpc: Optimized memcmp for power10  (Lucas A. M. Magalhaes, 2021-05-31; 5 files, -1/+218)
  This patch was based on __memcmp_power8 and the recent
  __strlen_power10.

  Improvements over __memcmp_power8:

  1. No alignment code is needed.  On POWER10, lxvp and lxvl do not
     generate alignment interrupts, so they are safe for use on
     caching-inhibited memory.  Notice that the comparison in the main
     loop will wait for both VSRs to be ready.  Therefore aligning one
     of the input addresses does not improve performance.  In order to
     align both registers a vperm is necessary, which adds too much
     overhead.

  2. Uses new POWER10 instructions.  This code uses lxvp to decrease
     contention on load by loading 32 bytes per instruction.  The
     vextractbm instruction is used to keep the tail code that
     calculates the return value small.

  3. Performance improvement.  This version has around 35% better
     performance on average.  I saw no performance regressions for any
     length or alignment.

  Thanks Matheus for helping me out with some details.

  Co-authored-by: Matheus Castanho <msc@linux.ibm.com>
  Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
* x86-64: Align child stack to 16 bytes [BZ #27902]  (H.J. Lu, 2021-05-31; 3 files, -4/+103)
  In the x86-64 clone wrapper, align the child stack to 16 bytes as
  required by the x86-64 psABI.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* nptl: Move Linux createthread to nptl  (Adhemerval Zanella, 2021-05-27; 1 file, -153/+0)
  git mv -f sysdeps/unix/sysv/linux/createthread.c nptl/createthread.c

  No functional change.
* aarch64: Added optimized memset for A64FX  (Naohiro Tamura, 2021-05-27; 4 files, -5/+286)
  This patch optimizes the performance of memset for A64FX [1], which
  implements ARMv8-A SVE and has a 64KB L1 cache per core and an 8MB L2
  cache per NUMA node.

  The performance optimization makes use of the Scalable Vector
  Registers with several techniques such as loop unrolling, memory
  access alignment, cache zero fill and prefetch.

  The SVE assembler code for memset is implemented as Vector Length
  Agnostic code, so theoretically it can run on any SoC which supports
  the ARMv8-A SVE standard.

  We confirmed that all test cases pass by running 'make check' and
  'make xcheck', not only on A64FX but also on ThunderX2.  We also
  confirmed by running 'make bench' that the SVE 512-bit vector
  register performance is roughly 4 times better than Advanced SIMD
  128-bit registers and 8 times better than scalar 64-bit registers.

  [1] https://github.com/fujitsu/A64FX

  Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
  Reviewed-by: Szabolcs Nagy <Szabolcs.Nagy@arm.com>
* aarch64: Added optimized memcpy and memmove for A64FX  (Naohiro Tamura, 2021-05-27; 8 files, -13/+451)
  This patch optimizes the performance of memcpy/memmove for A64FX [1],
  which implements ARMv8-A SVE and has a 64KB L1 cache per core and an
  8MB L2 cache per NUMA node.

  The performance optimization makes use of the Scalable Vector
  Registers with several techniques such as loop unrolling, memory
  access alignment, cache zero fill, and software pipelining.

  The SVE assembler code for memcpy/memmove is implemented as Vector
  Length Agnostic code, so theoretically it can run on any SoC which
  supports the ARMv8-A SVE standard.

  We confirmed that all test cases pass by running 'make check' and
  'make xcheck', not only on A64FX but also on ThunderX2.  We also
  confirmed by running 'make bench' that the SVE 512-bit vector
  register performance is roughly 4 times better than Advanced SIMD
  128-bit registers and 8 times better than scalar 64-bit registers.

  [1] https://github.com/fujitsu/A64FX

  Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
  Reviewed-by: Szabolcs Nagy <Szabolcs.Nagy@arm.com>
* aarch64: Added Vector Length Set test helper script  (Naohiro Tamura, 2021-05-26; 1 file, -0/+82)
  This patch adds a test helper script that changes the SVE vector
  length for a child process.  The script can be used as a test wrapper
  for 'make check'.

  Usage examples:

  ~/build$ make check subdirs=string \
      test-wrapper='~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16'

  ~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16 \
      make test t=string/test-memcpy

  ~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 32 \
      ./debugglibc.sh string/test-memmove

  ~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 64 \
      ./testrun.sh string/test-memset
* aarch64: define BTI_C and BTI_J macros as NOP unless HAVE_AARCH64_BTI  (Naohiro Tamura, 2021-05-26; 1 file, -2/+7)
  This patch defines the BTI_C and BTI_J macros conditionally for
  performance.  If HAVE_AARCH64_BTI is true, BTI_C and BTI_J are
  defined as the HINT instructions for ARMv8.5 BTI (Branch Target
  Identification); if HAVE_AARCH64_BTI is false, both are defined as
  NOP.
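  The conditional definitions take roughly this shape; the exact HINT
  immediates shown here are an assumption (BTI c and BTI j sit in the
  hint space, so older cores execute them as NOPs anyway):

      #if HAVE_AARCH64_BTI
      # define BTI_C  hint 34   /* bti c */
      # define BTI_J  hint 36   /* bti j */
      #else
      # define BTI_C  nop
      # define BTI_J  nop
      #endif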
* config: Added HAVE_AARCH64_SVE_ASM for aarch64  (Naohiro Tamura, 2021-05-26; 2 files, -0/+43)
  This patch checks whether the assembler supports
  '-march=armv8.2-a+sve' to generate SVE code, and defines the
  HAVE_AARCH64_SVE_ASM macro accordingly.
* Linux: Remove remaining references to $(shared-thread-library)  (Florian Weimer, 2021-05-25; 3 files, -9/+0)
  Since the variable expands to nothing under Linux, it is no longer
  necessary to clutter the makefiles with it.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* nptl: Do not install libpthread.so and do not link tests with it  (Florian Weimer, 2021-05-25; 1 file, -2/+6)
  Keep installing libpthread.a, so that -lpthread works.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* powerpc: Fix handling of scv return error codes [BZ #27892]  (Nicholas Piggin, 2021-05-24; 1 file, -2/+3)
  When using scv for templated ASM syscalls, the current code
  interprets any negative return value as an error, but the only valid
  error codes are in the range -4095..-1 according to the ABI.

  This commit also fixes the 'signal.gen.test' strace test, where the
  issue was first identified.

  Reviewed-by: Matheus Castanho <msc@linux.ibm.com>
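  A minimal sketch of the corrected check, assuming the usual Linux
  convention that only values in [-4095, -1] encode an errno value
  (this is not glibc's actual macro):

      #include <stdbool.h>

      static inline bool syscall_ret_is_error (long ret)
      {
        /* Only -4095..-1 carry an errno value; other negative results
           are legitimate syscall return values.  */
        return (unsigned long) ret >= (unsigned long) -4095L;
      }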
* Properly check stack alignment [BZ #27901]  (H.J. Lu, 2021-05-24; 6 files, -165/+61)
  1. Replace

         if ((((uintptr_t) &_d) & (__alignof (double) - 1)) != 0)

     which may be optimized out by the compiler, with

         int
         __attribute__ ((weak, noclone, noinline))
         is_aligned (void *p, int align)
         {
           return (((uintptr_t) p) & (align - 1)) != 0;
         }

  2. Add TEST_STACK_ALIGN_INIT to TEST_STACK_ALIGN.

  3. Add a common TEST_STACK_ALIGN_INIT to check 16-byte stack
     alignment for both i386 and x86-64.

  4. Update powerpc to use TEST_STACK_ALIGN_INIT.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* x86: Improve memmove-vec-unaligned-erms.S  (Noah Goldstein, 2021-05-23; 1 file, -3/+3)
  This patch changes the condition for the copy-4x-VEC path so that if
  the length is exactly equal to 4 * VEC_SIZE it will use the 4x VEC
  case instead of the 8x VEC case.

  Results For Skylake memcpy-avx2-erms
  size, al1, al2, Cur T , New T , Win , New / Cur
  128 , 0  , 0  , 9.137 , 6.873 , New , 75.22
  128 , 7  , 0  , 12.933, 7.732 , New , 59.79
  128 , 0  , 7  , 11.852, 6.76  , New , 57.04
  128 , 7  , 7  , 12.587, 6.808 , New , 54.09

  Results For Icelake memcpy-evex-erms
  size, al1, al2, Cur T , New T , Win , New / Cur
  128 , 0  , 0  , 9.963 , 5.416 , New , 54.36
  128 , 7  , 0  , 16.467, 8.061 , New , 48.95
  128 , 0  , 7  , 14.388, 7.644 , New , 53.13
  128 , 7  , 7  , 14.546, 7.642 , New , 52.54

  Results For Tigerlake memcpy-evex-erms
  size, al1, al2, Cur T , New T , Win , New / Cur
  128 , 0  , 0  , 8.979 , 4.95  , New , 55.13
  128 , 7  , 0  , 14.245, 7.122 , New , 50.0
  128 , 0  , 7  , 12.668, 6.675 , New , 52.69
  128 , 7  , 7  , 13.042, 6.802 , New , 52.15

  Results For Skylake memmove-avx2-erms
  size, al1, al2, Cur T , New T , Win , New / Cur
  128 , 0  , 32 , 6.181 , 5.691 , New , 92.07
  128 , 32 , 0  , 6.165 , 5.752 , New , 93.3
  128 , 0  , 7  , 13.923, 9.37  , New , 67.3
  128 , 7  , 0  , 12.049, 10.182, New , 84.5

  Results For Icelake memmove-evex-erms
  size, al1, al2, Cur T , New T , Win , New / Cur
  128 , 0  , 32 , 5.479 , 4.889 , New , 89.23
  128 , 32 , 0  , 5.127 , 4.911 , New , 95.79
  128 , 0  , 7  , 18.885, 13.547, New , 71.73
  128 , 7  , 0  , 15.565, 14.436, New , 92.75

  Results For Tigerlake memmove-evex-erms
  size, al1, al2, Cur T , New T , Win , New / Cur
  128 , 0  , 32 , 5.275 , 4.815 , New , 91.28
  128 , 32 , 0  , 5.376 , 4.565 , New , 84.91
  128 , 0  , 7  , 19.426, 14.273, New , 73.47
  128 , 7  , 0  , 15.924, 14.951, New , 93.89

  Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
* nptl: Remove remaining code from libpthread  (Florian Weimer, 2021-05-21; 47 files, -114/+26)
  Only the placeholder compatibility symbols are left now.  The
  __errno_location symbol was removed (moved) using
  scripts/move-symbol-to-libc.py.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* nptl: Move pthread_create, thrd_create into libc  (Florian Weimer, 2021-05-21; 64 files, -71/+184)
  The symbols were moved using scripts/move-symbol-to-libc.py.

  The libpthread placeholder symbols need some changes because some
  symbol versions have gone away completely.  But
  __errno_location@@GLIBC_2.0 still exists, so the GLIBC_2.0 version is
  still there.

  The internal __pthread_create symbol now points to the correct
  function, so the sysdeps/nptl/thrd_create.c override is no longer
  necessary.

  There was an issue with how the hidden alias of
  pthread_getattr_default_np was defined, so this commit cleans up that
  aspect and removes the GLIBC_PRIVATE export altogether.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* nptl: Eliminate the __static_tls_size, __static_tls_align_m1 variables  (Florian Weimer, 2021-05-21; 1 file, -0/+28)
  Use the __nptl_tls_static_size_for_stack inline function instead, and
  the GLRO (dl_tls_static_align) value directly.  The computation of
  GLRO (dl_tls_static_align) in _dl_determine_tlsoffset ensures that
  the alignment is at least TLS_TCB_ALIGN, which is at least
  STACK_ALIGN (see allocate_stack).  Therefore, the additional
  rounding-up step is removed.

  Also move the initialization of the default stack size from
  __pthread_initialize_minimal_internal to __pthread_early_init.  This
  introduces an extra system call during single-threaded startup, but
  it simplifies the initialization sequence.  No locking is needed
  around the writes to __default_pthread_attr because the process is
  single-threaded at this point.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* x86: Improve memset-vec-unaligned-erms.S  (Noah Goldstein, 2021-05-20; 1 file, -22/+28)
  No bug.  This commit makes a few small improvements to
  memset-vec-unaligned-erms.S.  The changes are:

  1) only aligning to 64 instead of 128.  Either alignment will perform
     equally well in a loop and 128 just increases the odds of having
     to do an extra iteration, which can be significant overhead for
     small values;
  2) align some targets and the loop;
  3) remove an ALU from the alignment process;
  4) reorder the last 4x VEC so that they are stored after the loop;
  5) move the condition for leq 8x VEC to before the alignment process.

  test-memset and test-wmemset are both passing.

  Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* Hurd: Define ARCH_MIN_GUARD_SIZE in internal <pthread.h>  (Florian Weimer, 2021-05-20; 1 file, -0/+3)
  This macro is always defined on Linux.

  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* s390x: Check HWCAP bits against compiler flags  (Florian Weimer, 2021-05-19; 1 file, -0/+40)
  When compiled with GCC 11.1 and -march=z14 -O3 build flags, running
  ld.so (or any dynamically linked program) prints:

      Fatal glibc error: CPU lacks VXE support (z14 or later required)

  Co-Authored-By: Stefan Liebler <stli@linux.ibm.com>
  Reviewed-by: Stefan Liebler <stli@linux.ibm.com>
* powerpc64le: Check HWCAP bits against compiler build flags  (Florian Weimer, 2021-05-19; 1 file, -0/+52)
  When built with GCC 11.1 and -mcpu=power9, ld.so prints this error
  message when running on POWER8:

      Fatal glibc error: CPU lacks ISA 3.00 support (POWER9 or later
      required)
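  The check itself runs inside the loader, but the same idea can be
  sketched in user space with getauxval; the feature-bit macro is
  guarded because it is only defined on powerpc targets:

      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/auxv.h>

      int main (void)
      {
      #ifdef PPC_FEATURE2_ARCH_3_00
        /* Read the HWCAP2 bits the kernel passed in the auxiliary
           vector and bail out early if ISA 3.00 is missing.  */
        if ((getauxval (AT_HWCAP2) & PPC_FEATURE2_ARCH_3_00) == 0)
          {
            fputs ("CPU lacks ISA 3.00 support (POWER9 or later"
                   " required)\n", stderr);
            exit (1);
          }
      #endif
        puts ("required HWCAP bits present");
        return 0;
      }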
* elf: Add hook for checking HWCAP bits after auxiliary vector parsing  (Florian Weimer, 2021-05-19; 1 file, -0/+28)
  Reviewed-by: Stefan Liebler <stli@linux.ibm.com>
* x86: Optimize memcmp-evex-movbe.S  (Noah Goldstein, 2021-05-18; 1 file, -302/+408)
  No bug.  This commit optimizes memcmp-evex.S.  The optimizations
  include adding a new vec compare path for small sizes, reorganizing
  the entry control flow, removing some unnecessary ALU instructions
  from the main loop, and, most importantly, replacing the heavy use of
  vpcmp + kand logic with vpxor + vptern.

  test-memcmp and test-wmemcmp are both passing.

  Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* x86: Optimize memcmp-avx2-movbe.S  (Noah Goldstein, 2021-05-18; 3 files, -281/+402)
  No bug.  This commit optimizes memcmp-avx2.S.  The optimizations
  include adding a new vec compare path for small sizes, reorganizing
  the entry control flow, and removing some unnecessary ALU
  instructions from the main loop.

  test-memcmp and test-wmemcmp are both passing.

  Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* linux: Fix clock_getres fallback  (Adhemerval Zanella, 2021-05-18; 1 file, -1/+1)
  The tst-timespec_getres test (e5ac7bd679de5) triggers an issue on
  32-bit architectures on Linux kernels older than 5.1, where the
  fallback syscall is used.

  Checked on powerpc-linux-gnu.
* hurd: Add execveat  (Samuel Thibault, 2021-05-18; 3 files, -43/+121)
* Add C2X timespec_getres  (Joseph Myers, 2021-05-17; 36 files, -0/+85)
  ISO C2X adds a timespec_getres function alongside the C11
  timespec_get, with functionality similar to that of POSIX
  clock_getres (including allowing a NULL pointer to be passed to the
  function).  Implement this function for glibc, similarly to the
  implementation of timespec_get.  This includes a basic test like that
  of timespec_get, but no documentation in the manual, given that
  TIME_UTC and timespec_get aren't documented in the manual at all.

  The handling of 64-bit time follows that in timespec_get; people
  maintaining patch series for 64-bit time will need to update them
  accordingly (to export __timespec_getres64, redirect calls in time.h
  and run the test for _TIME_BITS=64).

  Tested for x86_64 and x86, and (previous version; only testcase
  differs) with build-many-glibcs.py.
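  A short usage sketch of the new interface; like timespec_get, it
  returns the base argument on success and 0 otherwise:

      #include <stdio.h>
      #include <time.h>

      int main (void)
      {
        struct timespec res;
        if (timespec_getres (&res, TIME_UTC) == TIME_UTC)
          printf ("TIME_UTC resolution: %ld s %ld ns\n",
                  (long) res.tv_sec, (long) res.tv_nsec);
        return 0;
      }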
* powerpc: Add optimized rawmemchr for POWER10  (Matheus Castanho, 2021-05-17; 6 files, -27/+188)
  Reuse code for optimized strlen to implement a faster version of
  rawmemchr.  This takes advantage of the same benefits provided by the
  strlen implementation, but needs some extra steps.  __strlen_power10
  code should be unchanged after this change.

  rawmemchr returns a pointer to the char found, while strlen returns
  only the length, so we have to take that into account when preparing
  the return value.

  To quickly check 64B, the loop on __strlen_power10 merges the whole
  block into 16B by using unsigned minimum vector operations (vminub)
  and checks if there are any \0 on the resulting vector.  The same
  code is used by rawmemchr if the char c is 0.  However, this approach
  does not work when c != 0.  We first need to subtract each byte by c,
  so that the value we are looking for is converted to a 0, then taking
  the minimum and checking for nulls works again.

  The new code branches after it has compared ~256 bytes and chooses
  which of the two strategies above will be used in the main loop,
  based on the char c.  This extra branch adds some overhead (~5%) for
  length ~256, but is quickly amortized by the faster loop for larger
  sizes.

  Compared to __rawmemchr_power9, this version is ~20% faster for
  length < 256.  Because of the optimized main loop, the improvement
  becomes ~35% for c != 0 and ~50% for c = 0 for strings longer than
  256.

  Reviewed-by: Lucas A. M. Magalhaes <lamm@linux.ibm.com>
  Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
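  A rough scalar analogue of the transformation described above: turn
  every occurrence of c into a zero byte, then reuse a "does this word
  contain a zero byte" check.  (The vector code subtracts c and uses
  vminub/vextractbm; the XOR-based sketch below is illustrative only,
  not the POWER10 implementation.)

      #include <stdint.h>
      #include <string.h>

      /* True if the 64-bit word w contains a zero byte.  */
      static int has_zero_byte (uint64_t w)
      {
        return ((w - 0x0101010101010101ULL) & ~w
                & 0x8080808080808080ULL) != 0;
      }

      /* True if the 8 bytes at p contain the byte c.  */
      static int block_has_byte (const void *p, unsigned char c)
      {
        uint64_t w, mask = 0x0101010101010101ULL * c;
        memcpy (&w, p, sizeof w);
        return has_zero_byte (w ^ mask);   /* matches become zero bytes */
      }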
* nptl: Move pthread_sigqueue into libc  (Florian Weimer, 2021-05-17; 61 files, -29/+83)
  The symbol was moved using scripts/move-symbol-to-libc.py.  The
  GLIBC_2.11 version is now empty, so add a placeholder symbol.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* nptl: Move pthread_setschedprio into libc  (Florian Weimer, 2021-05-17; 61 files, -29/+80)
  The symbol was moved using scripts/move-symbol-to-libc.py.  The
  GLIBC_2.3.4 version is now empty, so add a placeholder symbol.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* nptl: Move pthread_setname_np into libc  (Florian Weimer, 2021-05-17; 61 files, -29/+83)
  The symbol was moved using scripts/move-symbol-to-libc.py.  Add
  __libpthread_version_placeholder@@GLIBC_2.12 for the targets that
  need it.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* nptl: Move pthread_setaffinity_np into libc  (Florian Weimer, 2021-05-17; 61 files, -45/+99)
  The symbol was moved using scripts/move-symbol-to-libc.py.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* nptl: Move pthread_getname_np into libc  (Florian Weimer, 2021-05-17; 61 files, -29/+64)
  The symbol was moved using scripts/move-symbol-to-libc.py.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>