path: root/sysdeps/powerpc
* elf: Drop elf/tls-macros.h in favor of __thread and tls_model attributes [BZ #28152] [BZ #28205]
  Fangrui Song, 2021-08-16 (2 files changed, -6/+4)

  elf/tls-macros.h was added for TLS testing when GCC did not support
  __thread. __thread and tls_model attributes are mature now and have been
  used by many newer tests.

  Also delete tst-tls2.c, which tests .tls_common (unused by modern GCC and
  unsupported by Clang/LLD). .tls_common and .tbss definitions are almost
  identical after linking, so the runtime test doesn't add additional
  coverage. Assembler and linker tests should be on the binutils side.

  When LLD 13.0.0 is allowed in configure.ac
  (https://sourceware.org/pipermail/libc-alpha/2021-August/129866.html),
  the `make check` result is on par with glibc built with GNU ld on aarch64
  and x86_64.

  As a future clean-up, the TLS_GD/TLS_LD/TLS_IE/TLS_LE macros can be
  removed from sysdeps/*/tls-macros.h. We can add optional
  -mtls-dialect={gnu2,trad} tests to ensure coverage.

  Tested on aarch64-linux-gnu, powerpc64le-linux-gnu, and x86_64-linux-gnu.

  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
* powerpc64: Add checks for Altivec and VSX in ifunc selection
  Anton Blanchard, 2021-08-06 (26 files changed, -68/+139)

  We'd like to support processors without Altivec or VSX, so check the
  relevant hwcap bits before selecting them.

  Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
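  Illustrative sketch (C) of an hwcap-gated ifunc selector. This is not
  the actual glibc resolver (glibc passes hwcap through its own INIT_ARCH
  machinery); the function names here are made up, and the hwcap constants
  carry the values defined in <bits/hwcap.h>.

    #include <sys/auxv.h>

    #define PPC_FEATURE_HAS_ALTIVEC 0x10000000
    #define PPC_FEATURE_HAS_VSX     0x00000080

    extern void *memcpy_power7 (void *, const void *, unsigned long);
    extern void *memcpy_fallback (void *, const void *, unsigned long);

    /* Select the Altivec/VSX variant only when the kernel reports both
       features, instead of inferring them from the architecture level.  */
    static __typeof__ (memcpy_fallback) *
    memcpy_resolver (void)
    {
      unsigned long need = PPC_FEATURE_HAS_ALTIVEC | PPC_FEATURE_HAS_VSX;
      if ((getauxval (AT_HWCAP) & need) == need)
        return memcpy_power7;
      return memcpy_fallback;
    }

    void *my_memcpy (void *, const void *, unsigned long)
      __attribute__ ((ifunc ("memcpy_resolver")));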
* powerpc64: Check cacheline size before using optimised memset routines
  Anton Blanchard, 2021-08-06 (2 files changed, -10/+23)

  A number of optimised memset routines assume the cacheline size is 128B,
  so we had better check before using them.

  Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
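  The kernel reports the data cache block size via the auxiliary vector,
  so a guard along these lines keeps the 128B-assuming variants off other
  hardware. A sketch, not the glibc code:

    #include <sys/auxv.h>

    /* AT_DCACHEBSIZE is the data cache block size on powerpc.  */
    static int
    cacheline_is_128b (void)
    {
      return getauxval (AT_DCACHEBSIZE) == 128;
    }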
* powerpc64: Replace some PPC_FEATURE_HAS_VSX with PPC_FEATURE_ARCH_2_06
  Anton Blanchard, 2021-08-06 (20 files changed, -38/+38)

  We use PPC_FEATURE_HAS_VSX to select a number of POWER7 optimised
  functions. These functions don't use any VSX instructions, so
  PPC_FEATURE_ARCH_2_06 seems like a better fit.

  Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
* Force building with -fno-common
  Florian Weimer, 2021-07-09 (1 file changed, -4/+4)

  As a result, it is no longer necessary to specify
  __attribute__ ((nocommon)) on individual definitions. GCC 10 defaults to
  -fno-common on all architectures except ARC, but this change is
  compatible with older GCC versions and ARC, too.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
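  For context, a minimal example of what -fno-common changes: a tentative
  definition repeated across translation units links silently under
  -fcommon but is rejected under -fno-common.

    /* file1.c */
    int counter;   /* tentative definition, no initializer */

    /* file2.c */
    int counter;   /* -fcommon: silently merged by the linker;
                      -fno-common: "multiple definition of `counter'" */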
* powerpc64le: Fix typo in configure
  Anton Blanchard, 2021-07-08 (2 files changed, -2/+2)

  The configure script checks for -mlong-double-128 but mentions
  -mlongdouble when it fails.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* powerpc64: Remove strcspn ifunc from the loader
  Tulio Magno Quites Machado Filho, 2021-07-08 (1 file changed, -0/+18)

  5 years ago, commit 8f1b841e452dbb083112fd036033b7f4af506ba0
  unintentionally added an ifunc to the loader. That modification has not
  caused any harm so far, but it doesn't add any value either, because the
  hwcap information is available later during libc initialization.

  Suggested-by: Anton Blanchard <anton@ozlabs.org>
* Update powerpc-nofpu libm-test-ulps
  Joseph Myers, 2021-07-07 (1 file changed, -41/+45)
* powerpc: optimize strcpy/stpcpy for POWER9/10
  Pedro Franco de Carvalho, 2021-07-01 (1 file changed, -71/+89)

  This patch modifies the current POWER9 implementation of strcpy and
  stpcpy to optimize it for POWER9/10. Since no new POWER10 instructions
  are used, the original POWER9 strcpy is modified instead of creating a
  new implementation for POWER10. This implementation is based on both the
  original POWER9 implementation of strcpy and the preamble of the new
  POWER10 implementation of strlen.

  The changes also affect stpcpy, which uses the same implementation with
  some additional code before returning.

  On POWER9, averaging improvements across the benchmark inputs
  (length/source alignment/destination alignment), for an experiment that
  ran the benchmark five times, bench-strcpy showed an improvement of
  5.23%, and bench-stpcpy showed an improvement of 6.59%. On POWER10,
  bench-strcpy showed 13.16%, and bench-stpcpy showed 13.59%.

  The changes are:

  1. Removed the null string optimization. Although this results in a few
     extra cycles for the null string, in combination with the second
     change it resulted in improvements for other cases.

  2. Adapted the preamble from strlen for POWER10. This is the part of the
     function that handles up to the first 16 bytes of the string.

  3. Increased the number of unrolled iterations in the main loop to 6.

  Reviewed-by: Matheus Castanho <msc@linux.ibm.com>
  Tested-by: Matheus Castanho <msc@linux.ibm.com>
* Add build option to disable usage of scv on powerpc
  Matheus Castanho, 2021-06-10 (1 file changed, -8/+8)

  Commit 68ab82f56690ada86ac1e0c46bad06ba189a10ef added support for the scv
  syscall ABI on powerpc. Since then, systems that have kernel and
  processor support started using scv. However, adding proper support for
  a new syscall ABI requires changes to several other projects (e.g. qemu,
  valgrind, strace, kernel), which are gradually receiving support.

  Meanwhile, having a way to disable scv on glibc at build time can be
  useful for distros that may encounter conflicts with projects that still
  do not support the scv ABI, buying time until proper support is added.

  This commit adds a --disable-scv option that disables scv support and
  uses sc for all syscalls, like before commit
  68ab82f56690ada86ac1e0c46bad06ba189a10ef.

  Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
* Remove stale references to libdl.a
  Florian Weimer, 2021-06-09 (2 files changed, -2/+0)

  Since commit 0c1c3a771eceec46e66ce1183cf988e2303bd373 ("dlfcn: Move
  dlopen into libc") libdl.a is empty, so linking against it is no longer
  necessary.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* powerpc: Optimized memcmp for power10
  Lucas A. M. Magalhaes, 2021-05-31 (5 files changed, -1/+218)

  This patch was based on the __memcmp_power8 and the recent
  __strlen_power10. Improvements from __memcmp_power8:

  1. No alignment code is needed. On POWER10, lxvp and lxvl do not
     generate alignment interrupts, so they are safe for use on
     caching-inhibited memory. Notice that the comparison in the main loop
     will wait for both VSRs to be ready. Therefore aligning one of the
     input addresses does not improve performance. In order to align both
     registers a vperm is necessary, which adds too much overhead.

  2. Uses new POWER10 instructions. This code uses lxvp to decrease
     contention on load by loading 32 bytes per instruction. The
     vextractbm is used to have smaller tail code for calculating the
     return value.

  3. Performance improvement. This version has around 35% better
     performance on average. I saw no performance regressions for any
     length or alignment.

  Thanks Matheus for helping me out with some details.

  Co-authored-by: Matheus Castanho <msc@linux.ibm.com>
  Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
* powerpc: Fix handling of scv return error codes [BZ #27892]
  Nicholas Piggin, 2021-05-24 (1 file changed, -2/+3)

  When using scv for templated ASM syscalls, the current code interprets
  any negative return value as an error, but the only valid error codes
  are in the range -4095..-1 according to the ABI.

  This commit also fixes the 'signal.gen.test' strace test, where the
  issue was first identified.

  Reviewed-by: Matheus Castanho <msc@linux.ibm.com>
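  In C terms, the corrected convention amounts to the following sketch
  (the actual fix is in the syscall exit assembly macros):

    /* For scv, only -4095..-1 encode a (negated) errno value; any other
       result, even a negative one, is a successful return.  */
    static inline int
    scv_is_error (long r)
    {
      return (unsigned long) r >= (unsigned long) -4095L;
    }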
* Properly check stack alignment [BZ #27901]
  H.J. Lu, 2021-05-24 (1 file changed, -20/+7)

  1. Replace

       if ((((uintptr_t) &_d) & (__alignof (double) - 1)) != 0)

     which may be optimized out by the compiler, with

       int
       __attribute__ ((weak, noclone, noinline))
       is_aligned (void *p, int align)
       {
         return (((uintptr_t) p) & (align - 1)) != 0;
       }

  2. Add TEST_STACK_ALIGN_INIT to TEST_STACK_ALIGN.

  3. Add a common TEST_STACK_ALIGN_INIT to check 16-byte stack alignment
     for both i386 and x86-64.

  4. Update powerpc to use TEST_STACK_ALIGN_INIT.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* powerpc64le: Check HWCAP bits against compiler build flags
  Florian Weimer, 2021-05-19 (1 file changed, -0/+52)

  When built with GCC 11.1 and -mcpu=power9, ld.so prints this error
  message when running on POWER8:

    Fatal glibc error: CPU lacks ISA 3.00 support (POWER9 or later required)
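  The shape of the check, as an illustrative sketch (the real code lives
  in the ld.so startup path, and fatal_error here is a hypothetical
  reporter): when glibc is compiled with -mcpu=power9, GCC predefines
  _ARCH_PWR9, so startup can insist on the matching hwcap2 bit.

    #include <sys/auxv.h>

    #define PPC_FEATURE2_ARCH_3_00 0x00800000   /* ISA 3.00 / POWER9 */

    extern void fatal_error (const char *);     /* hypothetical */

    static void
    check_isa_level (void)
    {
    #ifdef _ARCH_PWR9   /* predefined by GCC for -mcpu=power9 */
      if ((getauxval (AT_HWCAP2) & PPC_FEATURE2_ARCH_3_00) == 0)
        fatal_error ("CPU lacks ISA 3.00 support (POWER9 or later required)");
    #endif
    }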
* powerpc: Add optimized rawmemchr for POWER10
  Matheus Castanho, 2021-05-17 (6 files changed, -27/+188)

  Reuse code for optimized strlen to implement a faster version of
  rawmemchr. This takes advantage of the same benefits provided by the
  strlen implementation, but needs some extra steps. __strlen_power10 code
  should be unchanged after this change.

  rawmemchr returns a pointer to the char found, while strlen returns only
  the length, so we have to take that into account when preparing the
  return value.

  To quickly check 64B, the loop in __strlen_power10 merges the whole
  block into 16B by using unsigned minimum vector operations (vminub) and
  checks if there are any \0 in the resulting vector. The same code is
  used by rawmemchr if the char c is 0. However, this approach does not
  work when c != 0. We first need to subtract each byte by c, so that the
  value we are looking for is converted to a 0, then taking the minimum
  and checking for nulls works again.

  The new code branches after it has compared ~256 bytes and chooses which
  of the two strategies above will be used in the main loop, based on the
  char c. This extra branch adds some overhead (~5%) for length ~256, but
  it is quickly amortized by the faster loop for larger sizes.

  Compared to __rawmemchr_power9, this version is ~20% faster for length
  < 256. Because of the optimized main loop, the improvement becomes ~35%
  for c != 0 and ~50% for c = 0 for strings longer than 256.

  Reviewed-by: Lucas A. M. Magalhaes <lamm@linux.ibm.com>
  Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
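  The c != 0 strategy described above, rendered as a C sketch with
  Altivec intrinsics rather than the actual hand-written assembly (the
  function name is illustrative):

    #include <altivec.h>

    static int
    block_has_byte (vector unsigned char v0, vector unsigned char v1,
                    vector unsigned char v2, vector unsigned char v3,
                    unsigned char c)
    {
      vector unsigned char vc = vec_splats ((unsigned char) c);
      vector unsigned char zero = vec_splats ((unsigned char) 0);

      /* A lane equal to c becomes 0 after the subtraction...  */
      v0 = vec_sub (v0, vc);
      v1 = vec_sub (v1, vc);
      v2 = vec_sub (v2, vc);
      v3 = vec_sub (v3, vc);

      /* ...so the merged minimum has a zero lane iff any lane matched,
         exactly as in the c == 0 (strlen) code path.  */
      vector unsigned char m = vec_min (vec_min (v0, v1), vec_min (v2, v3));
      return vec_any_eq (m, zero);
    }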
* powerpc64le: Fix ifunc selection for memset, memmove, bzero and bcopy
  Raoni Fassina Firmino, 2021-05-07 (5 files changed, -20/+22)

  The hwcap2 check for the aforementioned functions should check for both
  PPC_FEATURE2_ARCH_3_1 and PPC_FEATURE2_HAS_ISEL, but it was mistakenly
  checking for either one of them, enabling the ISA 3.1 version of the
  functions on incompatible processors, like POWER8.

  Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
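  The bug is the classic any-vs-all bitmask test. A minimal sketch of the
  corrected check (constants as in glibc's <bits/hwcap.h>):

    #define PPC_FEATURE2_ARCH_3_1 0x00040000
    #define PPC_FEATURE2_HAS_ISEL 0x08000000

    static int
    select_isa_3_1_variant (unsigned long hwcap2)
    {
      unsigned long need = PPC_FEATURE2_ARCH_3_1 | PPC_FEATURE2_HAS_ISEL;
      /* Buggy form: (hwcap2 & need) != 0, which is already true on
         POWER8 because POWER8 has ISEL without ISA 3.1.  */
      return (hwcap2 & need) == need;
    }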
* Remove architecture specific sched_cpucount optimizations
  Adhemerval Zanella, 2021-05-07 (1 file changed, -22/+0)

  And replace the generic algorithm with Brian Kernighan's one. GCC
  optimizes it with popcnt if the architecture supports it, so there is no
  need to add the extra POPCNT define to enable it.

  This is really a micro-optimization that only adds complexity: recent
  ABIs already support it (x86-64-v2 or powerpc64le), and it simplifies
  the code for internal usage, since i686 does not allow an internal iFUNC
  call.

  Checked on x86_64-linux-gnu, aarch64-linux-gnu, and
  powerpc64le-linux-gnu.
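  Brian Kernighan's algorithm clears the lowest set bit on each
  iteration, so the loop runs once per set bit. The generic replacement
  is essentially:

    static int
    countbits (unsigned long w)
    {
      int n = 0;
      while (w != 0)
        {
          w &= w - 1;   /* clear the lowest set bit */
          ++n;
        }
      return n;
    }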
* powerpc64le: Optimize memset for POWER10
  Raoni Fassina Firmino, 2021-04-30 (6 files changed, -1/+314)

  This implementation is based on __memset_power8 and integrates a lot of
  suggestions from Anton Blanchard.

  The biggest difference is that it makes extensive use of stxvl for
  alignment and tail code to avoid branches and small stores. It has three
  main execution paths:

  a) "Short lengths" for lengths up to 64 bytes, avoiding as many branches
     as possible.

  b) "General case" for larger lengths, it has an alignment section using
     stxvl to avoid branches, a 128-byte loop, and then tail code, again
     using stxvl with few branches.

  c) "Zeroing cache blocks" for lengths from 256 bytes upwards when the
     set value is zero. It is mostly the __memset_power8 code, but the
     alignment phase was simplified because, at this point, the address is
     already 16-byte aligned, and it was also changed to use vector
     stores. The tail code was also simplified to reuse the general case
     tail.

  All unaligned stores use stxvl instructions that do not generate
  alignment interrupts on POWER10, making it safe to use on
  caching-inhibited memory.

  On average, this implementation provides something around 30%
  improvement when compared to __memset_power8.

  Reviewed-by: Matheus Castanho <msc@linux.ibm.com>
  Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
* powerpc64le: Optimize memcpy for POWER10
  Tulio Magno Quites Machado Filho, 2021-04-30 (5 files changed, -1/+238)

  This implementation is based on __memcpy_power8_cached and integrates
  suggestions from Anton Blanchard. It benefits from loads and stores with
  length for short lengths and for tail code, simplifying the code.

  All unaligned memory accesses use instructions that do not generate
  alignment interrupts on POWER10, making it safe to use on
  caching-inhibited memory.

  The main loop has also been modified in order to increase instruction
  throughput by reducing the dependency on updates from previous
  iterations.

  On average, this implementation provides around 30% improvement when
  compared to __memcpy_power7 and 10% improvement in comparison to
  __memcpy_power8_cached.
* powerpc64le: Optimized memmove for POWER10
  Lucas A. M. Magalhaes, 2021-04-30 (8 files changed, -7/+388)

  This patch was initially based on the __memmove_power7 with some ideas
  from the strncpy implementation for POWER9. Improvements from
  __memmove_power7:

  1. Use lxvl/stxvl for alignment code. The code for POWER7 uses branches
     when the input is not naturally aligned to the width of a vector. The
     new implementation uses lxvl/stxvl instead, which reduces pressure on
     GPRs. It also allows the removal of branch instructions, implicitly
     removing branch stalls and mispredictions.

  2. The lxv/stxv and lxvl/stxvl pairs are safe to use on
     cache-inhibited memory. On POWER10, vector loads and stores are safe
     to use on CI memory for addresses unaligned to 16B. This code takes
     advantage of this to do unaligned loads. The unaligned loads don't
     have a significant performance impact by themselves. However, doing
     so decreases register pressure on GPRs and interdependence stalls on
     load/store pairs. This also improves readability, as there are now
     fewer code paths for different alignments. Finally, this reduces the
     overall code size.

  3. Improved performance. This version runs on average about 30% better
     than memmove_power7 for lengths larger than 8KB. For input lengths
     shorter than 8KB the improvement is smaller; it has on average about
     17% better performance.

     This version has a degradation of about 50% for input lengths in the
     0 to 31 bytes range when dest is unaligned.

  Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
* powerpc: Add log IFUNC multiarch support for POWER10
  Raphael Moreira Zinsly, 2021-04-26 (7 files changed, -0/+101)

  Checked on ppc64le built without --with-cpu, with --with-cpu=power9, and
  with --disable-multi-arch.

  Reviewed-by: Matheus Castanho <msc@linux.ibm.com>
* nptl: Move pthread_spin_trylock into libc
  Florian Weimer, 2021-04-23 (1 file changed, -1/+9)

  The symbol was moved using scripts/move-symbol-to-libc.py.
* nptl: Move pthread_spin_lock into libc
  Florian Weimer, 2021-04-23 (1 file changed, -1/+7)

  The symbol was moved using scripts/move-symbol-to-libc.py.
* nptl: Move pthread_spin_init, pthread_spin_unlock into libc
  Florian Weimer, 2021-04-23 (1 file changed, -1/+9)

  For some architectures, the two functions are aliased, so these symbols
  need to be moved at the same time. The symbols were moved using
  scripts/move-symbol-to-libc.py.
* powerpc: Add optimized strlen for POWER10
  Matheus Castanho, 2021-04-22 (5 files changed, -1/+230)

  Improvements compared to the POWER9 version:

  1. Take into account the first 16B comparison for aligned strings. The
     previous version compares the first 16B and increments r4 by the
     number of bytes until the address is 16B-aligned, then starts doing
     aligned loads at that address. For aligned strings, this causes the
     first 16B to be compared twice, because the increment is 0. Here we
     calculate the next 16B-aligned address differently, which avoids that
     issue (see the sketch after this entry).

  2. Use simple comparisons for the first ~192 bytes. The main loop is
     good for big strings, but comparing 16B each time is better for
     smaller strings. So after aligning the address to 16 bytes, we check
     a further 176B in 16B chunks. There may be some overlap with the main
     loop for unaligned strings, but we avoid using the more aggressive
     strategy too soon, and also allow the loop to start at a 64B-aligned
     address. This greatly benefits smaller strings and avoids overlapping
     checks if the string is already aligned at a 64B boundary.

  3. Reduce dependencies between load blocks caused by address calculation
     in the loop. Precise time tracing of the code showed that many loads
     in the loop were stalled waiting for updates to r4 from previous code
     blocks. This implementation avoids that as much as possible by using
     two registers (r4 and r5) to hold addresses used by different parts
     of the code.

     Also, the previous code aligned the address to 16B, then to 64B by
     doing a few 48B loops (if needed) until the address was aligned. The
     main loop could not start until that 48B loop had finished and r4 was
     updated with the current address. Here we calculate the address used
     by the loop very early, so it can start sooner. The main loop now
     uses two pointers 128B apart to make pointer updates less frequent,
     and also unrolls one iteration to guarantee there is enough time
     between iterations to update the pointers, reducing stalled cycles.

  4. Use new POWER10 instructions. lxvp is used to load 32B with a single
     instruction, reducing contention in the load queue. vextractbm allows
     simplifying the tail code for the loop, replacing vbpermq and
     avoiding having to generate a permute control vector.

  Reviewed-by: Paul E Murphy <murphyp@linux.ibm.com>
  Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
  Reviewed-by: Lucas A. M. Magalhaes <lamm@linux.ibm.com>
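  A C rendering of the address calculation in item 1 (the real code is
  assembly; these helper names are illustrative):

    #include <stdint.h>

    /* Old approach: advance by the distance to the next 16B boundary.
       For an already-aligned p this is p itself, so the first 16 bytes
       get compared a second time by the aligned loop.  */
    static inline const char *
    align_old (const char *p)
    {
      return p + (16 - ((uintptr_t) p & 15)) % 16;
    }

    /* New approach: always land strictly past the first 16 bytes that
       the unaligned prologue already checked.  */
    static inline const char *
    align_new (const char *p)
    {
      return (const char *) (((uintptr_t) p + 16) & ~(uintptr_t) 15);
    }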
* nptl: Move __pthread_unwind_next into libc
  Florian Weimer, 2021-04-21 (2 files changed, -13/+5)

  It's necessary to stub out __libc_disable_asynccancel and
  __libc_enable_asynccancel via rtld-stubbed-symbols because the new
  direct references to the unwinder result in symbol conflicts when the
  rtld exception handling from libc is linked in during the construction
  of librtld.map.

  unwind-forcedunwind.c is merged into unwind-resume.c. libc now needs the
  functions that were previously only used in libpthread.

  The GLIBC_PRIVATE exports of __libc_longjmp and __libc_siglongjmp are no
  longer needed, so switch them to hidden symbols.

  The symbol __pthread_unwind_next has been moved using
  scripts/move-symbol-to-libc.py.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* powerpc: Update libm test ulps
  Tulio Magno Quites Machado Filho, 2021-04-09 (1 file changed, -10/+10)

  Update after commit 43576de04afc6a0896a3ecc094e1581069a0652a.
* Fix the inaccuracy of j0f/j1f/y0f/y1f [BZ #14469, #14470, #14471, #14472]
  Paul Zimmermann, 2021-04-02 (1 file changed, -31/+31)

  For j0f/j1f/y0f/y1f, the largest error for all binary32 inputs is
  reduced to at most 9 ulps for all rounding modes. The new code is
  enabled only when there is a cancellation at the very end of the
  j0f/j1f/y0f/y1f computation, or for very large inputs, thus it should
  not give any visible slowdown on average. Two different algorithms are
  used:

  * around the first 64 zeros of j0/j1/y0/y1, approximation polynomials of
    degree 3 are used, computed using the Sollya tool
    (https://www.sollya.org/)

  * for large inputs, an asymptotic formula from [1] is used

  [1] Fast and Accurate Bessel Function Computation, John Harrison,
      Proceedings of Arith 19, 2009.

  Inputs yielding the new largest errors are added to auto-libm-test-in,
  and ulps are regenerated for various targets (thanks Adhemerval
  Zanella).

  Tested on x86_64 with --disable-multi-arch and on powerpc64le-linux-gnu.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* powerpc64le: Use ifunc for _Float128 functions also in libc
  Andreas Schwab, 2021-04-01 (3 files changed, -8/+17)

  This fixes the missing definitions of math functions in libc in a
  static link that are no longer built for libm after commit 4898d9712b
  ("Avoid adding duplicated symbols into static libraries").
* powerpc: Add optimized llogb* for POWER9
  Raphael Moreira Zinsly, 2021-03-16 (2 files changed, -0/+43)

  The POWER9 builtins used to improve the ilogb* functions can be used in
  the llogb* functions as well.
* powerpc: Add optimized ilogb* for POWER9
  Raphael Moreira Zinsly, 2021-03-16 (3 files changed, -1/+59)

  The instructions xsxexpdp and xsxexpqp introduced on POWER9 extract the
  exponent from a double-precision and a quad-precision floating-point
  value respectively, so they can be used to improve ilogb, ilogbf and
  ilogbf128.
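  A hypothetical illustration of why this helps (glibc goes through
  compiler builtins, not inline asm, and this sketch covers normal
  doubles only): ilogb reduces to extracting the biased exponent and
  subtracting the bias.

    static inline int
    ilogb_normal (double x)
    {
      unsigned long e;
      /* xsxexpdp copies the biased binary64 exponent into a GPR.  Zero,
         subnormals, infinity and NaN still need the usual special
         cases.  */
      asm ("xsxexpdp %0,%x1" : "=r" (e) : "wa" (x));
      return (int) e - 1023;
    }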
* powerpc: Update libm-test-ulps
  Matheus Castanho, 2021-03-16 (1 file changed, -1/+1)

  Generated with 'make regen-ulps' on POWER8.

  Tested on powerpc, powerpc64, and powerpc64le.
* powerpc: Regenerate ulps
  Florian Weimer, 2021-03-03 (1 file changed, -10/+10)

  This time on a POWER8 machine.
* powerpc: Update libm-test-ulps
  Matheus Castanho, 2021-03-02 (1 file changed, -13/+15)

  Generated with 'make regen-ulps'.

  Tested on powerpc, powerpc64, and powerpc64le.
* Implement <unwind-link.h> for dynamically loading the libgcc_s unwinder
  Florian Weimer, 2021-03-01 (1 file changed, -0/+28)

  This will be used to consolidate the libgcc_s access for backtrace and
  pthread_cancel. Unlike the existing backtrace implementations, it
  provides some hardening based on pointer mangling.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* Reduce the statically linked startup code [BZ #23323]
  Florian Weimer, 2021-02-25 (2 files changed, -4/+4)

  It turns out the startup code in csu/elf-init.c has a perfect pair of
  ROP gadgets (see Marco-Gisbert and Ripoll-Ripoll, "return-to-csu: A New
  Method to Bypass 64-bit Linux ASLR"). These functions are not needed in
  dynamically-linked binaries because DT_INIT/DT_INIT_ARRAY are already
  processed by the dynamic linker. However, the dynamic linker skipped the
  main program for some reason. For maximum backwards compatibility, this
  is not changed, and instead, the main map is consulted from
  __libc_start_main if the init function argument is a NULL pointer.

  For statically linked binaries, the old approach based on linker symbols
  is still used because there is nothing else available.

  A new symbol version __libc_start_main@@GLIBC_2.34 is introduced because
  new binaries running on an old libc would not run their ELF
  constructors, leading to difficult-to-debug issues.
* powerpc64: Workaround sigtramp vdso return call
  Raoni Fassina Firmino, 2021-01-28 (1 file changed, -1/+12)

  A not so recent kernel change[1] changed how the trampoline
  `__kernel_sigtramp_rt64` is used to call signal handlers. This was
  exposed by the test misc/tst-sigcontext-get_pc.

  Before kernel 5.9, the kernel set LR to the trampoline address and
  jumped directly to the signal handler, and at the end of the signal
  handler, as in any other function, it would `blr` to the address set.
  In other words, the trampoline was executed just at the end of the
  signal handler, and the only thing it did was call sigreturn. But since
  kernel 5.9 the kernel sets CTR to the signal handler and calls the
  trampoline code; the trampoline then does `bctrl` to the address in
  CTR, setting the LR to the next instruction in the middle of the
  trampoline. When the signal handler returns, the rest of the trampoline
  code executes the same code as before.

  Here is the full trampoline code as of kernel 5.11.0-rc5 for reference:

    V_FUNCTION_BEGIN(__kernel_sigtramp_rt64)
    .Lsigrt_start:
            bctrl   /* call the handler */
            addi    r1, r1, __SIGNAL_FRAMESIZE
            li      r0,__NR_rt_sigreturn
            sc
    .Lsigrt_end:
    V_FUNCTION_END(__kernel_sigtramp_rt64)

  This new behavior breaks how `backtrace()` used to detect the
  trampoline frame to correctly reconstruct the stack frame when it is
  called from inside signal handling.

  This workaround relies on the fact that the trampoline code is at the
  very least two (maybe 3?) instructions in size (as it is in the 32-bit
  version, just an `li` and an `sc`), so it is safe to accept a return
  address in the range __kernel_sigtramp_rt64 .. + 4.

  [1] subject: powerpc/64/signal: Balance return predictor stack in
      signal trampoline
      commit: 0138ba5783ae0dcc799ad401a1e8ac8333790df9
      url: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=0138ba5783ae0dcc799ad401a1e8ac8333790df9

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
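  The workaround's address test amounts to the following sketch
  (illustrative; sigtramp_rt64_addr stands in for the trampoline address
  that glibc resolves from the vDSO):

    #include <stdint.h>

    extern uintptr_t sigtramp_rt64_addr;   /* assumed: from the vDSO */

    static int
    is_sigtramp_address (void *ra)
    {
      uintptr_t addr = (uintptr_t) ra;
      /* Accept the pre-5.9 return address (the trampoline start) as
         well as the post-5.9 one (right after the initial bctrl).  */
      return addr >= sigtramp_rt64_addr && addr <= sigtramp_rt64_addr + 4;
    }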
* powerpc64: Select POWER9 machine for the scv instruction
  Florian Weimer, 2021-01-22 (1 file changed, -0/+3)

  It is not available with the baseline ISA.

  Fixes commit 68ab82f56690ada86ac1e0c46bad06ba189a10ef ("powerpc: Runtime
  selection between sc and scv for syscalls").

  Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
* Update powerpc-nofpu libm-test-ulps.
  Joseph Myers, 2021-01-18 (1 file changed, -16/+17)
* Update copyright dates with scripts/update-copyrights
  Paul Eggert, 2021-01-02 (596 files changed, -596/+596)

  I used these shell commands:

    ../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
    (cd ../glibc && git commit -am"[this commit message]")

  and then ignored the output, which consisted of lines saying "FOO:
  warning: copyright statement not found" for each of 6694 files FOO.

  I then removed trailing white space from
  benchtests/bench-pthread-locks.c and
  iconvdata/tst-iconv-big5-hkscs-to-2ucs4.c, to work around this
  diagnostic from Savannah:

    remote: *** pre-commit check failed ...
    remote: *** error: lines with trailing whitespace found
    remote: error: hook declined to update refs/heads/master
* powerpc: Runtime selection between sc and scv for syscalls
  Matheus Castanho, 2020-12-30 (2 files changed, -9/+126)

  Linux kernel v5.9 added support for system calls using the scv
  instruction for POWER9 and later. The new codepath provides better
  performance (see below) compared to using sc. For the foreseeable
  future, both sc and scv mechanisms will co-exist, so this patch enables
  glibc to do a runtime check and use scv when it is available.

  Before issuing the system call to the kernel, we check hwcap2 in the TCB
  for PPC_FEATURE2_SCV to see if scv is supported by the kernel. If not,
  we fall back to sc and keep the old behavior.

  The kernel implements a different error return convention for scv, so
  when returning from a system call we need to handle the return value
  differently depending on the instruction we used to enter the kernel.

  For syscalls implemented in ASM, entry and exit are implemented by
  different macros (PSEUDO and PSEUDO_RET, resp.), which may be used in
  sequence (e.g. for templated syscalls) or with other instructions in
  between (e.g. clone). To avoid accessing the TCB a second time on
  PSEUDO_RET to check which instruction we used, the value read from
  hwcap2 is cached in a non-volatile register. This is not needed when
  using the INTERNAL_SYSCALL macro, since entry and exit are bundled into
  the same inline asm directive.

  The dynamic loader may issue syscalls before the TCB has been set up, so
  it always uses sc with no extra checks. For the static case, there is no
  compile-time way to determine if we are inside startup code, so we also
  check the value of the thread pointer before effectively accessing the
  TCB. For such situations in which the availability of scv cannot be
  determined, sc is always used.

  Support for scv in syscalls implemented in their own ASM file (clone and
  vfork) will be added later. For now they simply use sc as before.

  Average performance over 1M calls for each syscall "type":
  - stat: C wrapper calling INTERNAL_SYSCALL
  - getpid: templated ASM syscall
  - syscall: call to gettid using the syscall function

  Standard:
    stat    : 1.573445 us / ~3619 cycles
    getpid  : 0.164986 us / ~379 cycles
    syscall : 0.162743 us / ~374 cycles

  With scv:
    stat    : 1.537049 us / ~3535 cycles  <~ -84 cycles  / -2.32%
    getpid  : 0.109923 us / ~253 cycles   <~ -126 cycles / -33.25%
    syscall : 0.116410 us / ~268 cycles   <~ -106 cycles / -28.34%

  Tested on powerpc, powerpc64, powerpc64le (with and without scv)

  Tested-by: Lucas A. M. Magalhães <lamm@linux.ibm.com>
  Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
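  A simplified C model of the selection (the real implementation reads
  hwcap2 from the TCB and is written as assembly macros; the helper
  names below are hypothetical):

    #include <sys/auxv.h>

    #define PPC_FEATURE2_SCV 0x00100000   /* kernel supports scv */

    extern long syscall_via_scv (long, long, long, long);  /* hypothetical */
    extern long syscall_via_sc (long, long, long, long);   /* hypothetical */

    static long
    do_syscall (long nr, long a0, long a1, long a2)
    {
      if (getauxval (AT_HWCAP2) & PPC_FEATURE2_SCV)
        return syscall_via_scv (nr, a0, a1, a2);
      return syscall_via_sc (nr, a0, a1, a2);
    }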
* powerpc: Regenerate ulps
  Florian Weimer, 2020-12-22 (1 file changed, -12/+13)

  For new inputs added in commit cad5ad81d2f7f58a7ad0d8afa8c1b710, as seen
  on a POWER8 system.
* powerpc64le: Add glibc-hwcaps support
  Florian Weimer, 2020-12-04 (3 files changed, -0/+128)

  The "power10" and "power9" subdirectories are selected in a way that
  matches the -mcpu=power10 and -mcpu=power9 options of GCC.
* powerpc64le: ifunc select *f128 routines in multiarch mode
  Paul E. Murphy, 2020-11-30 (16 files changed, -197/+817)

  Programmatically generate simple wrappers for interesting libm *f128
  objects. Selected functions are transcendental functions or those with
  trivial compiler builtins. This can result in a 2-3x speedup (e.g.
  logf128 and expf128).

  A second set of implementation files are generated which include the
  first implementation encountered along the search path. This usually
  works, except when a wrapper is overridden and makefile search order
  slightly diverges from include order. Likewise, wrapper object files are
  created for each generated file. These hold the ifunc selection routines
  which export ABI.

  Next, several shared headers are intercepted to control renaming: asm
  function redirects are used first, and sometimes macro renames if the
  former is impractical.

  Notably, if the requested machine supports hardware IEEE128 (i.e. POWER9
  and newer) this ifunc machinery is disabled. Likewise, existing ifunc
  support for float128 is consolidated into this (e.g. sqrtf128 and
  fmaf128).

  Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
* powerpc: Make PT_THREAD_POINTER available to assembly code
  Matheus Castanho, 2020-11-24 (1 file changed, -10/+16)

  PT_THREAD_POINTER is currently defined inside a #ifndef __ASSEMBLER__
  block, but its usage should not be limited to C code, as it can be
  useful when accessing the TLS from assembly code as well.

  Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
* nptl: Move stack list variables into _rtld_global
  Florian Weimer, 2020-11-16 (1 file changed, -2/+0)

  Now __thread_gscope_wait (the function behind THREAD_GSCOPE_WAIT,
  formerly __wait_lookup_done) can be implemented directly in ld.so,
  eliminating the unprotected GL (dl_wait_lookup_done) function pointer.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* powerpc: Eliminate UP macro conditionals
  Florian Weimer, 2020-11-13 (3 files changed, -14/+5)

  The macro is never defined.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* powerpc: Add optimized stpncpy for POWER9
  Raphael M Zinsly, 2020-11-12 (6 files changed, -2/+135)

  Add stpncpy support into the POWER9 strncpy.

  Reviewed-by: Matheus Castanho <msc@linux.ibm.com>
  Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
* powerpc: Add optimized strncpy for POWER9
  Raphael M Zinsly, 2020-11-12 (5 files changed, -1/+391)

  Similar to the strcpy P9 optimization, this version uses VSX to improve
  performance.

  Reviewed-by: Matheus Castanho <msc@linux.ibm.com>
  Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>