path: root/sysdeps/aarch64
* aarch64,falkor: Use vector registers for memcpy (Siddhesh Poyarekar, 2018-06-29, 1 file, -72/+65)
Vector registers perform better than scalar register pairs for copying data, so prefer them instead. This results in a time reduction of over 50% (i.e. a 2x speed improvement) for some smaller sizes in memcpy-walk. Larger sizes show improvements of around 1% to 2%. memcpy-random shows a very small improvement, in the range of 1-2%.

* sysdeps/aarch64/multiarch/memcpy_falkor.S (__memcpy_falkor): Use vector registers.
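For context, a minimal hedged sketch of the idea in C with NEON intrinsics (the real change is in memcpy_falkor.S assembly; the function name and the 32-byte chunking below are illustrative only):

#include <arm_neon.h>
#include <stddef.h>

/* Sketch: copy SIZE bytes (assumed here to be a multiple of 32) through
   16-byte vector registers instead of scalar register pairs.  */
static void
copy_with_vectors (unsigned char *dst, const unsigned char *src, size_t size)
{
  for (size_t i = 0; i < size; i += 32)
    {
      uint8x16_t lo = vld1q_u8 (src + i);       /* one q-register load ...  */
      uint8x16_t hi = vld1q_u8 (src + i + 16);  /* ... replaces an ldp x, x */
      vst1q_u8 (dst + i, lo);
      vst1q_u8 (dst + i + 16, hi);
    }
}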
* aarch64,falkor: Use vector registers for memmove (Siddhesh Poyarekar, 2018-06-29, 1 file, -105/+88)
Vector registers perform much better for moves compared to pairs of registers on falkor, so use them instead. This results in a time reduction of up to 50% (i.e. 2x improvement) for a lot of the smaller sizes, i.e. up to 1K in memmove-walk. Improvements for larger sizes are smaller, at about 1%-2%.

* sysdeps/aarch64/multiarch/memmove_falkor.S (__memcpy_falkor): Use vector registers.
* aarch64: add HXT Phecda core memory operation ifuncs (Hongbo Zhang, 2018-06-12, 3 files, -5/+6)
Phecda is HXT Semiconductor's CPU core; this patch adds memory operation ifuncs for it, sharing the same optimized implementation as Qualcomm's Falkor core.

2018-06-07 Minfeng Kang <minfeng.kang@hxt-semitech.com> Hongbo Zhang <hongbo.zhang@linaro.org>

* sysdeps/aarch64/multiarch/memcpy.c (libc_ifunc): reuse __memcpy_falkor for phecda core. * sysdeps/aarch64/multiarch/memmove.c (libc_ifunc): reuse __memmove_falkor for phecda core. * sysdeps/aarch64/multiarch/memset.c (libc_ifunc): reuse __memset_falkor for phecda core. * sysdeps/unix/sysv/linux/aarch64/cpu-features.c: add MIDR entry for phecda core. * sysdeps/unix/sysv/linux/aarch64/cpu-features.h (IS_PHECDA): add macro to identify phecda core.
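A hedged sketch of how such an ifunc selector can route two cores to one routine (the selector shape loosely follows multiarch/memcpy.c; the helper name and the open-coded MIDR check are illustrative, not the real cpu-features.h macros):

#include <string.h>

extern __typeof (memcpy) __memcpy_falkor;
extern __typeof (memcpy) __memcpy_generic;

/* MIDR_EL1 implementer field is bits [31:24]; 0x51 is Qualcomm (Falkor),
   0x68 is HXT (Phecda).  A real selector would also check the part number;
   routing both cores to the same routine is the point of the patch.  */
static int
uses_falkor_routines (unsigned int midr)
{
  unsigned int implementer = (midr >> 24) & 0xff;
  return implementer == 0x51 || implementer == 0x68;
}

static __typeof (memcpy) *
select_memcpy (unsigned int midr)
{
  return uses_falkor_routines (midr) ? __memcpy_falkor : __memcpy_generic;
}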
* Mark _init and _fini as hidden [BZ #23145] (H.J. Lu, 2018-06-08, 1 file, -0/+2)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | _init and _fini are special functions provided by glibc for linker to define DT_INIT and DT_FINI in executable and shared library. They should never be put in dynamic symbol table. This patch marks them as hidden to remove them from dynamic symbol table. Tested with build-many-glibcs.py. [BZ #23145] * elf/Makefile (tests-special): Add $(objpfx)check-initfini.out. ($(all-built-dso:=.dynsym): New target. (common-generated): Add $(all-built-dso:$(common-objpfx)%=%.dynsym). ($(objpfx)check-initfini.out): New target. (generated): Add check-initfini.out. * scripts/check-initfini.awk: New file. * sysdeps/aarch64/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/alpha/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/arm/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/hppa/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/i386/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/ia64/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/m68k/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/microblaze/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/mips/mips32/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/mips/mips64/n32/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/mips/mips64/n64/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/nios2/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/powerpc/powerpc32/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/powerpc/powerpc64/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/s390/s390-32/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/s390/s390-64/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/sh/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/sparc/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/x86_64/crti.S (_init): Mark as hidden. (_fini): Likewise.
* Remove sysdeps/aarch64/soft-fp directory. (Joseph Myers, 2018-05-22, 5 files, -4/+4)
As per <https://sourceware.org/ml/libc-alpha/2014-10/msg00369.html>, there should not be separate sysdeps/<arch>/soft-fp directories when those are used by all configurations that use sysdeps/<arch>, and, more generally, there should not be sysdeps/foo/Implies files pointing to a subdirectory foo/bar. This patch eliminates the sysdeps/aarch64/soft-fp directory accordingly, merging its contents into sysdeps/aarch64. Tested with build-many-glibcs.py that installed stripped shared libraries for aarch64 configurations are unchanged by this patch.

* sysdeps/aarch64/Implies: Remove aarch64/soft-fp. * sysdeps/aarch64/Makefile [$(subdir) = math] (CPPFLAGS): Add -I../soft-fp. Moved from .... * sysdeps/aarch64/soft-fp/Makefile: ... here. Remove file. * sysdeps/aarch64/soft-fp/e_sqrtl.c: Move to .... * sysdeps/aarch64/e_sqrtl.c: ... here. * sysdeps/aarch64/soft-fp/sfp-machine.h: Move to .... * sysdeps/aarch64/sfp-machine.h: ... here.
* Do not include math-barriers.h in math_private.h. (Joseph Myers, 2018-05-11, 4 files, -0/+4)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch continues the math_private.h cleanup by stopping math_private.h from including math-barriers.h and making the users of the barrier macros include the latter header directly. No attempt is made to remove any math_private.h includes that are now unused, except in strtod_l.c where that is done to avoid line number changes in assertions, so that installed stripped shared libraries can be compared before and after the patch. (I think the floating-point environment support in math_private.h should also move out - some architectures already have fenv_private.h as an architecture-internal header included from their math_private.h - and after moving that out might be a better time to identify unused math_private.h includes.) Tested for x86_64 and x86, and tested with build-many-glibcs.py that installed stripped shared libraries are unchanged by the patch. * sysdeps/generic/math_private.h: Do not include <math-barriers.h>. * stdlib/strtod_l.c: Include <math-barriers.h> instead of <math_private.h>. * math/fromfp.h: Include <math-barriers.h>. * math/math-narrow.h: Likewise. * math/s_nextafter.c: Likewise. * math/s_nexttowardf.c: Likewise. * sysdeps/aarch64/fpu/s_llrint.c: Likewise. * sysdeps/aarch64/fpu/s_llrintf.c: Likewise. * sysdeps/aarch64/fpu/s_lrint.c: Likewise. * sysdeps/aarch64/fpu/s_lrintf.c: Likewise. * sysdeps/i386/fpu/s_nextafterl.c: Likewise. * sysdeps/i386/fpu/s_nexttoward.c: Likewise. * sysdeps/i386/fpu/s_nexttowardf.c: Likewise. * sysdeps/ieee754/dbl-64/e_atan2.c: Likewise. * sysdeps/ieee754/dbl-64/e_atanh.c: Likewise. * sysdeps/ieee754/dbl-64/e_exp.c: Likewise. * sysdeps/ieee754/dbl-64/e_exp2.c: Likewise. * sysdeps/ieee754/dbl-64/e_j0.c: Likewise. * sysdeps/ieee754/dbl-64/e_sqrt.c: Likewise. * sysdeps/ieee754/dbl-64/s_expm1.c: Likewise. * sysdeps/ieee754/dbl-64/s_fma.c: Likewise. * sysdeps/ieee754/dbl-64/s_fmaf.c: Likewise. * sysdeps/ieee754/dbl-64/s_log1p.c: Likewise. * sysdeps/ieee754/dbl-64/s_nearbyint.c: Likewise. * sysdeps/ieee754/dbl-64/wordsize-64/s_nearbyint.c: Likewise. * sysdeps/ieee754/flt-32/e_atanhf.c: Likewise. * sysdeps/ieee754/flt-32/e_j0f.c: Likewise. * sysdeps/ieee754/flt-32/s_expm1f.c: Likewise. * sysdeps/ieee754/flt-32/s_log1pf.c: Likewise. * sysdeps/ieee754/flt-32/s_nearbyintf.c: Likewise. * sysdeps/ieee754/flt-32/s_nextafterf.c: Likewise. * sysdeps/ieee754/k_standardl.c: Likewise. * sysdeps/ieee754/ldbl-128/e_asinl.c: Likewise. * sysdeps/ieee754/ldbl-128/e_expl.c: Likewise. * sysdeps/ieee754/ldbl-128/e_powl.c: Likewise. * sysdeps/ieee754/ldbl-128/s_fmal.c: Likewise. * sysdeps/ieee754/ldbl-128/s_nearbyintl.c: Likewise. * sysdeps/ieee754/ldbl-128/s_nextafterl.c: Likewise. * sysdeps/ieee754/ldbl-128/s_nexttoward.c: Likewise. * sysdeps/ieee754/ldbl-128/s_nexttowardf.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/e_asinl.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_fmal.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_nextafterl.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_nexttoward.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_nexttowardf.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_rintl.c: Likewise. * sysdeps/ieee754/ldbl-96/e_atanhl.c: Likewise. * sysdeps/ieee754/ldbl-96/e_j0l.c: Likewise. * sysdeps/ieee754/ldbl-96/s_fma.c: Likewise. * sysdeps/ieee754/ldbl-96/s_fmal.c: Likewise. * sysdeps/ieee754/ldbl-96/s_nexttoward.c: Likewise. * sysdeps/ieee754/ldbl-96/s_nexttowardf.c: Likewise. 
* sysdeps/ieee754/ldbl-opt/s_nexttowardfd.c: Likewise. * sysdeps/m68k/m680x0/fpu/s_nextafterl.c: Likewise.
* aarch64,falkor: Ignore prefetcher tagging for smaller copies (Siddhesh Poyarekar, 2018-05-11, 1 file, -27/+41)
For smaller and medium sized copies, the effect of hardware prefetching is not as dominant as instruction level parallelism. Hence it makes more sense to load data into multiple registers than to try and route them to the same prefetch unit. This is also the case for the loop exit, where we are unable to latch on to the same prefetch unit anyway, so it makes more sense to have data loaded in parallel. The performance results are a bit mixed with memcpy-random, with numbers jumping between -1% and +3%, i.e. the numbers don't seem repeatable. memcpy-walk sees a 70% improvement (i.e. > 2x) for 128 bytes, and that improvement reduces as the impact of the tail copy decreases in comparison to the loop.

* sysdeps/aarch64/multiarch/memcpy_falkor.S (__memcpy_falkor): Use multiple registers to copy data in loop tail.
* aarch64,falkor: Ignore prefetcher hints for memmove tail (Siddhesh Poyarekar, 2018-05-11, 1 file, -18/+28)
The tails of the copy loops are unable to train the falkor hardware prefetcher because they load from a different base compared to the hot loop. In this case avoid serializing the instructions by loading them into different registers. Also peel the last iteration of the loop into the tail (and have it use different registers), since this gives better performance for medium sizes. This results in performance improvements of between 3% and 20% over the current falkor implementation for sizes between 128 bytes and 1K on the memmove-walk benchmark, thus mostly covering the regressions seen against the generic memmove.

* sysdeps/aarch64/multiarch/memmove_falkor.S (__memmove_falkor): Use multiple registers to move data in loop tail.
* Move math_opt_barrier, math_force_eval to separate math-barriers.h. (Joseph Myers, 2018-05-09, 2 files, -5/+27)
This patch continues cleaning up math_private.h by moving the math_opt_barrier and math_force_eval macros to a separate header math-barriers.h. At present, those macros are inside a "#ifndef math_opt_barrier" in math_private.h to allow architectures to override them. With a separate math-barriers.h header, no such #ifndef or #include_next is needed; architectures just have their own alternative version of math-barriers.h when providing their own optimized versions that avoid going through memory unnecessarily. The generic math-barriers.h has a comment added to document these two macros. In this patch, math_private.h is made to #include <math-barriers.h>, so files using these macros do not need updating yet. That is because of uses of math_force_eval in math_check_force_underflow and math_check_force_underflow_nonneg, which are still defined in math_private.h. Once those are moved out to a separate header, that separate header can be made to include <math-barriers.h>, as can the other files directly using these barrier macros, and then the include of <math-barriers.h> from math_private.h can be removed. Tested for x86_64 and x86. Also tested with build-many-glibcs.py that installed stripped shared libraries are unchanged by this patch.

* sysdeps/generic/math-barriers.h: New file. * sysdeps/generic/math_private.h [!math_opt_barrier] (math_opt_barrier): Move to math-barriers.h. [!math_opt_barrier] (math_force_eval): Likewise. * sysdeps/aarch64/fpu/math-barriers.h: New file. * sysdeps/aarch64/fpu/math_private.h (math_opt_barrier): Move to math-barriers.h. (math_force_eval): Likewise. * sysdeps/alpha/fpu/math-barriers.h: New file. * sysdeps/alpha/fpu/math_private.h (math_opt_barrier): Move to math-barriers.h. (math_force_eval): Likewise. * sysdeps/x86/fpu/math-barriers.h: New file. * sysdeps/i386/fpu/fenv_private.h (math_opt_barrier): Move to math-barriers.h. (math_force_eval): Likewise. * sysdeps/m68k/m680x0/fpu/math_private.h: Move to.... * sysdeps/m68k/m680x0/fpu/math-barriers.h: ... here. Adjust multiple-include guard for rename. * sysdeps/powerpc/fpu/math-barriers.h: New file. * sysdeps/powerpc/fpu/math_private.h (math_opt_barrier): Move to math-barriers.h. (math_force_eval): Likewise.
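The macros being moved look roughly like the following in their generic form (paraphrased from memory, so treat the exact spelling as an approximation rather than the verbatim sysdeps/generic/math-barriers.h contents):

/* math_opt_barrier returns its argument through an asm the compiler cannot
   see through, blocking constant folding across it; math_force_eval forces
   an expression to be evaluated (e.g. so an exception is actually raised)
   even if the result is otherwise unused.  */
#define math_opt_barrier(x)                                        \
  ({ __typeof (x) __x = (x); __asm ("" : "+m" (__x)); __x; })
#define math_force_eval(x)                                         \
  ({ __typeof (x) __x = (x); __asm __volatile ("" : : "m" (__x)); })

Architecture versions (such as the new sysdeps/aarch64/fpu/math-barriers.h) can use register constraints instead of "m" so values do not have to go through memory.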
* elf: Unify symbol address run-time calculation [BZ #19818] (Maciej W. Rozycki, 2018-04-04, 1 file, -1/+1)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Wrap symbol address run-time calculation into a macro and use it throughout, replacing inline calculations. There are a couple of variants, most of them different in a functionally insignificant way. Most calculations are right following RESOLVE_MAP, at which point either the map or the symbol returned can be checked for validity as the macro sets either both or neither. In some places both the symbol and the map has to be checked however. My initial implementation therefore always checked both, however that resulted in code larger by as much as 0.3%, as many places know from elsewhere that no check is needed. I have decided the size growth was unacceptable. Having looked closer I realized that it's the map that is the culprit. Therefore I have modified LOOKUP_VALUE_ADDRESS to accept an additional boolean argument telling it to access the map without checking it for validity. This in turn has brought quite nice results, with new code actually being smaller for i686, and MIPS o32, n32 and little-endian n64 targets, unchanged in size for x86-64 and, unusually, marginally larger for big-endian MIPS n64, as follows: i686: text data bss dec hex filename 152255 4052 192 156499 26353 ld-2.27.9000-base.so 152159 4052 192 156403 262f3 ld-2.27.9000-elf-symbol-value.so MIPS/o32/el: text data bss dec hex filename 142906 4396 260 147562 2406a ld-2.27.9000-base.so 142890 4396 260 147546 2405a ld-2.27.9000-elf-symbol-value.so MIPS/n32/el: text data bss dec hex filename 142267 4404 260 146931 23df3 ld-2.27.9000-base.so 142171 4404 260 146835 23d93 ld-2.27.9000-elf-symbol-value.so MIPS/n64/el: text data bss dec hex filename 149835 7376 408 157619 267b3 ld-2.27.9000-base.so 149787 7376 408 157571 26783 ld-2.27.9000-elf-symbol-value.so MIPS/o32/eb: text data bss dec hex filename 142870 4396 260 147526 24046 ld-2.27.9000-base.so 142854 4396 260 147510 24036 ld-2.27.9000-elf-symbol-value.so MIPS/n32/eb: text data bss dec hex filename 142019 4404 260 146683 23cfb ld-2.27.9000-base.so 141923 4404 260 146587 23c9b ld-2.27.9000-elf-symbol-value.so MIPS/n64/eb: text data bss dec hex filename 149763 7376 408 157547 2676b ld-2.27.9000-base.so 149779 7376 408 157563 2677b ld-2.27.9000-elf-symbol-value.so x86-64: text data bss dec hex filename 148462 6452 400 155314 25eb2 ld-2.27.9000-base.so 148462 6452 400 155314 25eb2 ld-2.27.9000-elf-symbol-value.so [BZ #19818] * sysdeps/generic/ldsodefs.h (LOOKUP_VALUE_ADDRESS): Add `set' parameter. (SYMBOL_ADDRESS): New macro. [!ELF_FUNCTION_PTR_IS_SPECIAL] (DL_SYMBOL_ADDRESS): Use SYMBOL_ADDRESS for symbol address calculation. * elf/dl-runtime.c (_dl_fixup): Likewise. (_dl_profile_fixup): Likewise. * elf/dl-symaddr.c (_dl_symbol_address): Likewise. * elf/rtld.c (dl_main): Likewise. * sysdeps/aarch64/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/alpha/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/arm/dl-machine.h (elf_machine_rel): Likewise. (elf_machine_rela): Likewise. * sysdeps/hppa/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/hppa/dl-symaddr.c (_dl_symbol_address): Likewise. * sysdeps/i386/dl-machine.h (elf_machine_rel): Likewise. (elf_machine_rela): Likewise. * sysdeps/ia64/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/m68k/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/microblaze/dl-machine.h (elf_machine_rela): Likewise. 
* sysdeps/mips/dl-machine.h (ELF_MACHINE_BEFORE_RTLD_RELOC): Likewise. (elf_machine_reloc): Likewise. (elf_machine_got_rel): Likewise. * sysdeps/mips/dl-trampoline.c (__dl_runtime_resolve): Likewise. * sysdeps/nios2/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/powerpc/powerpc32/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/powerpc/powerpc64/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/riscv/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/s390/s390-32/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/s390/s390-64/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/sh/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/sparc/sparc32/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/sparc/sparc64/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/tile/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/x86_64/dl-machine.h (elf_machine_rela): Likewise. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
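In spirit, the new SYMBOL_ADDRESS helper centralizes this pattern (a hedged paraphrase, not the exact ldsodefs.h macro; the function and parameter names here are made up):

#include <elf.h>
#include <stddef.h>

/* load_base stands in for LOOKUP_VALUE_ADDRESS (map, set): absolute
   symbols (SHN_ABS) get no base added, and a null symbol yields 0.  */
static inline Elf64_Addr
symbol_address (Elf64_Addr load_base, const Elf64_Sym *ref)
{
  if (ref == NULL)
    return 0;
  if (ref->st_shndx == SHN_ABS)   /* absolute symbols are not relocated */
    return ref->st_value;
  return load_base + ref->st_value;
}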
* [PATCH 1/7] sin/cos slow paths: avoid slow paths for small inputs (Wilco Dijkstra, 2018-04-03, 1 file, -0/+6)
This series of patches removes the slow paths from sin, cos and sincos. Besides greatly simplifying the implementation, the new version is also much faster for inputs up to PI (41% faster) and for large inputs needing range reduction (27% faster). ULP is ~0.55 with no errors found after testing 1.6 billion inputs across most of the range with mpsin and mpcos. The number of incorrectly rounded results (i.e. ULP > 0.5) is at most ~2750 per million inputs between 0.125 and 0.5; the average is ~850 per million between 0 and PI. Tested on AArch64 and x86_64 with no regressions. The first patch removes the slow paths for the cases where the input is small and doesn't require range reduction. Update ULP tables for sin, cos and sincos on AArch64 and x86_64.

* sysdeps/aarch64/libm-test-ulps: Update ULP for sin, cos, sincos. * sysdeps/ieee754/dbl-64/s_sin.c (__sin): Remove slow paths for small inputs. (__cos): Likewise. * sysdeps/x86_64/fpu/libm-test-ulps: Update ULP for sin, cos, sincos.
* Use x86_64 backtrace as generic version. (Joseph Myers, 2018-03-21, 1 file, -1/+0)
No glibc configuration uses the present debug/backtrace.c, whereas several #include the x86_64 version. The x86_64 version is effectively a generic one (using _Unwind_Backtrace from libgcc, which works much more reliably than the built-in functions used by debug/backtrace.c). This patch moves it to debug/backtrace.c and removes all the #includes of the x86_64 version from other architectures, which are no longer required. I do not know whether all the other architecture-specific backtrace implementations that are based on _Unwind_Backtrace are required, or whether, where their differences from the generic version do something useful, suitable hooks could be added to the generic version to reduce the duplication involved. Tested with build-many-glibcs.py that installed stripped shared libraries are unchanged by this patch.

* sysdeps/x86_64/backtrace.c: Move to .... * debug/backtrace.c: ... here. * sysdeps/aarch64/backtrace.c: Remove file. * sysdeps/alpha/backtrace.c: Likewise. * sysdeps/hppa/backtrace.c: Likewise. * sysdeps/ia64/backtrace.c: Likewise. * sysdeps/mips/backtrace.c: Likewise. * sysdeps/nios2/backtrace.c: Likewise. * sysdeps/riscv/backtrace.c: Likewise. * sysdeps/sh/backtrace.c: Likewise. * sysdeps/tile/backtrace.c: Likewise.
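A hedged, simplified sketch of how a backtrace is collected with the libgcc unwinder API that the generic file relies on (the real debug/backtrace.c additionally skips its own frame and, in shared builds, loads the unwinder lazily):

#include <unwind.h>

struct trace_arg
{
  void **array;  /* output buffer for return addresses */
  int size;      /* buffer capacity */
  int count;     /* frames written so far */
};

static _Unwind_Reason_Code
backtrace_helper (struct _Unwind_Context *ctx, void *a)
{
  struct trace_arg *arg = a;
  if (arg->count >= arg->size)
    return _URC_END_OF_STACK;                     /* buffer full: stop */
  arg->array[arg->count++] = (void *) _Unwind_GetIP (ctx);
  return _URC_NO_REASON;                          /* keep unwinding */
}

int
simple_backtrace (void **array, int size)
{
  struct trace_arg arg = { array, size, 0 };
  if (size >= 1)
    _Unwind_Backtrace (backtrace_helper, &arg);
  return arg.count;
}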
* Remove all target specific __ieee754_sqrt(f/l) inlines (Wilco Dijkstra, 2018-03-15, 1 file, -16/+0)
Remove the now unused target specific __ieee754_sqrt(f/l) inlines. Also remove inlines of sqrt which are for really old GCC versions. Removing these is desirable, under the general principle of leaving such inlining to the compiler rather than trying to do it in installed headers, especially when only very old compilers are affected. Note that removing inlines for __ieee754_sqrt disables inlining in the sqrt wrapper functions. Given the sqrt function will typically only be called for negative arguments, it doesn't matter whether the inlining happens or not.

* sysdeps/aarch64/fpu/math_private.h (__ieee754_sqrt): Remove. (__ieee754_sqrtf): Remove. * sysdeps/alpha/fpu/math_private.h (__ieee754_sqrt): Remove. (__ieee754_sqrtf): Remove. * sysdeps/generic/math-type-macros.h (M_SQRT): Use sqrt. * sysdeps/m68k/m680x0/fpu/mathimpl.h (__ieee754_sqrt): Remove. * sysdeps/powerpc/fpu/math_private.h (__ieee754_sqrt): Remove. (__ieee754_sqrtf): Remove. * sysdeps/s390/fpu/bits/mathinline.h: Remove file. * sysdeps/sparc/fpu/bits/mathinline.h (sqrt) Remove. (sqrtf): Remove. (sqrtl): Remove. (__ieee754_sqrt): Remove. (__ieee754_sqrtf): Remove. (__ieee754_sqrtl): Remove. * sysdeps/m68k/m680x0/fpu/mathimpl.h (__ieee754_sqrt): Remove. * sysdeps/x86/fpu/math_private.h (__ieee754_sqrt): Remove. * sysdeps/x86_64/fpu/math_private.h (__ieee754_sqrt): Remove. (__ieee754_sqrtf): Remove. (__ieee754_sqrtl): Remove.
* aarch64/strncmp: Use lsr instead of mov+lsr (Siddhesh Poyarekar, 2018-03-15, 1 file, -4/+2)
A single lsr can do what the mov and lsr did.
* aarch64/strncmp: Unbreak builds with old binutils (Siddhesh Poyarekar, 2018-03-14, 1 file, -2/+4)
Binutils 2.26.* and older do not support moves with shifted registers, so use a separate shift instruction instead.
* aarch64: Improve strncmp for mutually misaligned inputs (Siddhesh Poyarekar, 2018-03-13, 1 file, -15/+80)
Mutually misaligned inputs on aarch64 are currently compared a single byte at a time, which is not very efficient. Enhance the comparison, similar to strcmp, by loading a double-word at a time. The peak performance improvement (i.e. 4k maxlen comparisons) due to this on the strncmp microbenchmark is as follows:

falkor: 3.5x (up to 72% time reduction)
cortex-a73: 3.5x (up to 71% time reduction)
cortex-a53: 3.5x (up to 71% time reduction)

All mutually misaligned inputs from 16 bytes maxlen onwards show upwards of 15% improvement, and there is no measurable effect on the performance of aligned/mutually aligned inputs.

* sysdeps/aarch64/strncmp.S (count): New macro. (strncmp): Store misaligned length in SRC1 in COUNT. (mutual_align): Adjust. (misaligned8): Load dword at a time when it is safe.
* hurd: add gscope support (Samuel Thibault, 2018-03-11, 1 file, -0/+1)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | * elf/dl-support.c [!THREAD_GSCOPE_IN_TCB] (_dl_thread_gscope_count): Define variable. * sysdeps/generic/ldsodefs.h [!THREAD_GSCOPE_IN_TCB] (struct rtld_global): Add _dl_thread_gscope_count member. * sysdeps/mach/hurd/tls.h: Include <atomic.h>. [!defined __ASSEMBLER__] (THREAD_GSCOPE_GLOBAL, THREAD_GSCOPE_SET_FLAG, THREAD_GSCOPE_RESET_FLAG, THREAD_GSCOPE_WAIT): Define macros. * sysdeps/generic/tls.h: Document THREAD_GSCOPE_IN_TCB. * sysdeps/aarch64/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/alpha/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/arm/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/hppa/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/i386/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/ia64/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/m68k/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/microblaze/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/mips/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/nios2/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/powerpc/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/riscv/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/s390/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/sh/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/sparc/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/tile/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/x86_64/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* aarch64: Fix branch target to loop16 (Siddhesh Poyarekar, 2018-03-06, 1 file, -1/+1)
I goofed up when changing the loop8 name to loop16 and missed out the branch instance. Fixed and actually build-tested this time.

* sysdeps/aarch64/memcmp.S (more16): Fix branch target loop16.
* aarch64: Optimized memcmp for medium to large sizes (Siddhesh Poyarekar, 2018-03-06, 1 file, -21/+55)
This improved memcmp provides a fast path for compares up to 16 bytes and then compares 16 bytes at a time, thus optimizing loads from both sources. The glibc memcmp microbenchmark retains performance (with an error of ~1ns) for smaller compare sizes and reduces up to 31% of execution time for compares up to 4K on the APM Mustang. On Qualcomm Falkor this improves to almost 48%, i.e. an almost 2x improvement for sizes of 2K and above.

* sysdeps/aarch64/memcmp.S: Widen comparison to 16 bytes at a time.
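The structure of the new fast path, rendered as a hedged portable-C sketch (the actual change is in memcmp.S; the chunking and names below are illustrative):

#include <stdint.h>
#include <string.h>
#include <stddef.h>

static int
memcmp16_sketch (const void *a, const void *b, size_t n)
{
  const unsigned char *p = a, *q = b;
  while (n >= 16)
    {
      uint64_t p0, p1, q0, q1;
      memcpy (&p0, p, 8);  memcpy (&p1, p + 8, 8);  /* safe unaligned loads */
      memcpy (&q0, q, 8);  memcpy (&q1, q + 8, 8);
      if ((p0 ^ q0) | (p1 ^ q1))
        break;                 /* some byte in this 16-byte chunk differs */
      p += 16;  q += 16;  n -= 16;
    }
  /* Either fewer than 16 bytes remain, or the differing byte is within
     the next 16 bytes; a short byte loop resolves both cases.  */
  size_t tail = n < 16 ? n : 16;
  for (size_t i = 0; i < tail; i++)
    if (p[i] != q[i])
      return p[i] < q[i] ? -1 : 1;
  return 0;
}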
* aarch64/strcmp: fix misaligned loop jump target (Siddhesh Poyarekar, 2018-02-22, 1 file, -1/+1)
I accidentally set the loop jump back label as misaligned8 instead of do_misaligned. The typo is harmless but it's always nice to not have to unnecessarily execute those two instructions.

* sysdeps/aarch64/strcmp.S (do_misaligned): Jump back to do_misaligned, not misaligned8.
* IFUNC for Cavium ThunderX2 (Steve Ellcey, 2018-02-22, 5 files, -10/+51)
* sysdeps/aarch64/multiarch/Makefile (sysdep_routines): Add memcpy_thunderx2. * sysdeps/aarch64/multiarch/ifunc-impl-list.c (MAX_IFUNC): Increment to 4. (__libc_ifunc_impl_list): Add __memcpy_thunderx2. * sysdeps/aarch64/multiarch/memcpy.c (libc_ifunc): Add IS_THUNDERX2 and IS_THUNDERX2PA checks. * sysdeps/aarch64/multiarch/memcpy_thunderx.S (USE_THUNDERX2): Use macro to set name appropriately. (memcpy): Use USE_THUNDERX2 macro to modify prefetches. * sysdeps/aarch64/multiarch/memcpy_thunderx2.S: New file. * sysdeps/unix/sysv/linux/aarch64/cpu-features.h (IS_THUNDERX2PA): New macro. (IS_THUNDERX2): New macro.
* [AArch64] Fix include. (Wilco Dijkstra, 2018-02-15, 1 file, -1/+1)
Fix include to use <>.

* sysdeps/aarch64/fpu/fpu_control.h: Use <> in include.
* Remove slow paths from pow (Wilco Dijkstra, 2018-02-12, 1 file, -0/+2)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Remove the slow paths from pow. Like several other double precision math functions, pow is exactly rounded. This is not required from math functions and causes major overheads as it requires multiple fallbacks using higher precision arithmetic if a result is close to 0.5ULP. Ridiculous slowdowns of up to 100000x have been reported when the highest precision path triggers. All GLIBC math tests pass on AArch64 and x64 (with ULP of pow set to 1). The worst case error is ~0.506ULP. A simple test over a few hundred million values shows pow is 10% faster on average. This fixes BZ #13932. [BZ #13932] * sysdeps/ieee754/dbl-64/uexp.h (err_1): Remove. * benchtests/pow-inputs: Update comment for slow path cases. * manual/probes.texi (slowpow_p10): Delete removed probe. (slowpow_p10): Likewise. * math/Makefile: Remove halfulp.c and slowpow.c. * sysdeps/aarch64/libm-test-ulps: Set ULP of pow to 1. * sysdeps/generic/math_private.h (__exp1): Remove error argument. (__halfulp): Remove. (__slowpow): Remove. * sysdeps/i386/fpu/halfulp.c: Delete file. * sysdeps/i386/fpu/slowpow.c: Likewise. * sysdeps/ia64/fpu/halfulp.c: Likewise. * sysdeps/ia64/fpu/slowpow.c: Likewise. * sysdeps/ieee754/dbl-64/e_exp.c (__exp1): Remove error argument, improve comments and add error analysis. * sysdeps/ieee754/dbl-64/e_pow.c (__ieee754_pow): Add error analysis. (power1): Remove function: (log1): Remove error argument, add error analysis. (my_log2): Remove function. * sysdeps/ieee754/dbl-64/halfulp.c: Delete file. * sysdeps/ieee754/dbl-64/slowpow.c: Likewise. * sysdeps/m68k/m680x0/fpu/halfulp.c: Likewise. * sysdeps/m68k/m680x0/fpu/slowpow.c: Likewise. * sysdeps/powerpc/power4/fpu/Makefile: Remove CPPFLAGS-slowpow.c. * sysdeps/x86_64/fpu/libm-test-ulps: Set ULP of pow to 1. * sysdeps/x86_64/fpu/multiarch/Makefile: Remove slowpow-fma.c, slowpow-fma4.c, halfulp-fma.c, halfulp-fma4.c. * sysdeps/x86_64/fpu/multiarch/e_pow-fma.c (__slowpow): Remove define. * sysdeps/x86_64/fpu/multiarch/e_pow-fma4.c (__slowpow): Likewise. * sysdeps/x86_64/fpu/multiarch/halfulp-fma.c: Delete file. * sysdeps/x86_64/fpu/multiarch/halfulp-fma4.c: Likewise. * sysdeps/x86_64/fpu/multiarch/slowpow-fma.c: Likewise. * sysdeps/x86_64/fpu/multiarch/slowpow-fma4.c: Likewise.
* [AArch64] Fix testsuite error due to fpsr/fscr change (Wilco Dijkstra, 2018-02-10, 1 file, -0/+2)
Add features.h include for __GNUC_PREREQ.

* sysdeps/aarch64/fpu/fpu_control.h: Add features.h to fix build error.
* [AArch64] Use builtins for fpcr/fpsr (Wilco Dijkstra, 2018-02-09, 1 file, -4/+11)
Since GCC has builtins for accessing FPSR/FPCR, use them when possible so that the asm instructions can be removed eventually. Although GCC 5 supports the builtins, it has an optimization bug, so use them from GCC 6 onwards.

* sysdeps/aarch64/fpu/fpu_control.h: Use builtins for accessing FPCR/FPSR.
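A hedged sketch of how fpu_control.h can prefer the builtins while keeping the asm fallback (the version gate follows the "GCC 6 onwards" rule above; the exact macro bodies are illustrative):

#include <features.h>

#if __GNUC_PREREQ (6, 0)
# define _FPU_GETCW(fpcr)  ((fpcr) = __builtin_aarch64_get_fpcr ())
# define _FPU_SETCW(fpcr)  __builtin_aarch64_set_fpcr (fpcr)
#else
# define _FPU_GETCW(fpcr)  __asm__ __volatile__ ("mrs %0, fpcr" : "=r" (fpcr))
# define _FPU_SETCW(fpcr)  __asm__ __volatile__ ("msr fpcr, %0" : : "r" (fpcr))
#endif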
* aarch64: Use the L() macro for labels in memcmp (Siddhesh Poyarekar, 2018-02-02, 1 file, -16/+16)
The L() macro makes the assembly a bit more readable.

* sysdeps/aarch64/memcmp.S: Use L() macro for labels.
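For reference, the macro is roughly this (paraphrased): local assembly labels get the ELF-local ".L" prefix, so they never land in the symbol table or confuse debuggers and profilers.

#define L(name) .L##name

/* Usage in a .S file:
     L(loop16): ...
       b.ne L(loop16)
   instead of bare global-looking labels.  */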
* aarch64: fix static pie enabled libc when main is in a shared library (Szabolcs Nagy, 2018-01-12, 1 file, -2/+11)
In the static pie enabled libc, crt1.o uses the same position independent code as rcrt1.o, and crt1.o is used instead of Scrt1.o when -no-pie executables are linked. When main is not defined in the executable but in a shared library, crt1.o is currently broken: it assumes main is local. (glibc has a test for this but I missed it in my previous testing.) To make both rcrt1.o and crt1.o happy with the same code, a wrapper is introduced around main: with this, crt1.o works with an extern main symbol while rcrt1.o does not depend on GOT relocations. (The change only affects the static pie enabled libc. Further simplification of start.S is possible in the future by using the same approach for Scrt1.o too.)

* aarch64/start.S (_start): Use __wrap_main. (__wrap_main): New local symbol.
* Update copyright dates with scripts/update-copyrights. (Joseph Myers, 2018-01-01, 117 files, -117/+117)
* All files with FSF copyright notices: Update copyright dates using scripts/update-copyrights. * locale/programs/charmap-kw.h: Regenerated. * locale/programs/locfile-kw.h: Likewise.
* aarch64: update libm-test-ulps (Szabolcs Nagy, 2017-12-20, 1 file, -20/+20)
* sysdeps/aarch64/libm-test-ulps: Update.
* aarch64: fix memset with --disable-multi-arch (Adhemerval Zanella, 2017-12-20, 1 file, -0/+4)
* sysdeps/aarch64/memset.S (MEMSET): Define.
* aarch64: fix start code for static pie (Szabolcs Nagy, 2017-12-18, 1 file, -1/+10)
There are three flavors of the crt startup code:
1) crt1.o, used for non-pie,
2) Scrt1.o, used for dynamically linked pie (the dynamic linker relocates),
3) rcrt1.o, used for statically linked pie (self relocation is needed).
In the --enable-static-pie case crt1.o is built with -DPIC and, in case of static linking, it interposes _dl_relocate_static_pie in libc to avoid self relocation. Scrt1.o is built with -DPIC -DSHARED and relies on GOT entries that the static linker cannot relax and thus need relocation before the start code is executed, so rcrt1.o needs a separate implementation. This implementation does not work for .text > 4G position independent executables, which is fine since the toolchain does not support -mcmodel=large with -fPIE. Tests pass with the ld/22269 and ld/22263 binutils bugs fixed.

* sysdeps/aarch64/start.S (_start): Handle PIC && !SHARED case.
* aarch64: Improve strcmp unaligned performance (Siddhesh Poyarekar, 2017-12-13, 1 file, -2/+29)
Replace the simple byte-wise compare in the misaligned case with a dword compare with page boundary checks in place. For simplicity I've chosen a 4K page boundary so that we don't have to query the actual page size on the system. This results in up to 3x improvement in performance in the unaligned case on falkor and about 2.5x improvement on mustang as measured using bench-strcmp.

* sysdeps/aarch64/strcmp.S (misaligned8): Compare dword at a time whenever possible.
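The safety condition behind the dword loads, as a hedged C rendition (the names are mine; the assembly does the equivalent bit test):

#include <stdint.h>

/* An 8-byte load starting at ADDR stays inside one 4 KiB page exactly when
   the low 12 bits of ADDR are <= 4096 - 8, so only loads near the end of a
   page need the byte-wise fallback.  4 KiB is a safe lower bound for the
   page size, so the real page size never has to be queried.  */
static inline int
dword_load_may_cross_page (const char *addr)
{
  return ((uintptr_t) addr & 0xfff) > 0xfff - 7;
}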
* aarch64: Avoid hidden symbols for memcpy/memmove into static binaries (Siddhesh Poyarekar, 2017-12-04, 1 file, -0/+2)
The __GI_* symbol aliases for __memcpy_generic are unnecessary since they're never used. Add them only for libc.so to avoid PLT. Maybe some time in the future we need to evaluate the relative cost of PLT vs gains from multiarch memcpy implementations and take a call on whether to drop this completely.

* sysdeps/aarch64/multiarch/memcpy_generic.S (__GI_memcpy): Define only for libc.so.
* Use libm_alias_float for aarch64. (Joseph Myers, 2017-11-28, 13 files, -13/+26)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Continuing the preparation for additional _FloatN / _FloatNx function aliases, this patch makes aarch64 libm function implementations use libm_alias_float to define function aliases. Tested with build-many-glibcs.py for aarch64-linux-gnu that installed stripped shared libraries are unchanged by the patch. * sysdeps/aarch64/fpu/s_ceilf.c: Include <libm-alias-float.h>. (ceilf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_floorf.c: Include <libm-alias-float.h>. (floorf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_fmaf.c: Include <libm-alias-float.h>. (fmaf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_fmaxf.c: Include <libm-alias-float.h>. (fmaxf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_fminf.c: Include <libm-alias-float.h>. (fminf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_llrintf.c: Include <libm-alias-float.h>. (llrintf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_llroundf.c: Include <libm-alias-float.h>. (llroundf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_lrintf.c: Include <libm-alias-float.h>. (lrintf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_lroundf.c: Include <libm-alias-float.h>. (lroundf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_nearbyintf.c: Include <libm-alias-float.h>. (nearbyintf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_rintf.c: Include <libm-alias-float.h>. (rintf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_roundf.c: Include <libm-alias-float.h>. (roundf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_truncf.c: Include <libm-alias-float.h>. (truncf): Define using libm_alias_float.
* Use libm_alias_double for aarch64. (Joseph Myers, 2017-11-27, 13 files, -13/+26)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Continuing the preparation for additional _FloatN / _FloatNx function aliases, this patch makes aarch64 libm function implementations use libm_alias_double to define function aliases. Tested with build-many-glibcs.py for aarch64-linux-gnu that installed stripped shared libraries are unchanged by the patch. * sysdeps/aarch64/fpu/s_ceil.c: Include <libm-alias-double.h>. (ceil): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_floor.c: Include <libm-alias-double.h>. (floor): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_fma.c: Include <libm-alias-double.h>. (fma): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_fmax.c: Include <libm-alias-double.h>. (fmax): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_fmin.c: Include <libm-alias-double.h>. (fmin): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_llrint.c: Include <libm-alias-double.h>. (llrint): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_llround.c: Include <libm-alias-double.h>. (llround): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_lrint.c: Include <libm-alias-double.h>. (lrint): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_lround.c: Include <libm-alias-double.h>. (lround): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_nearbyint.c: Include <libm-alias-double.h>. (nearbyint): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_rint.c: Include <libm-alias-double.h>. (rint): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_round.c: Include <libm-alias-double.h>. (round): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_trunc.c: Include <libm-alias-double.h>. (trunc): Define using libm_alias_double.
* aarch64: Optimized memset for falkor (Siddhesh Poyarekar, 2017-11-20, 9 files, -22/+193)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The generic memset reads dczid_el0 on every memset. This has a significant impact on falkor for a range of sizes because reading dczid_el0 is slow. The DZP bit in the dczid_el0 register does not change dynamically, so it is safe to read once during program startup. With this patch dczid_el0 is read once during startup and zva_size is cached. This is used to invoke the falkor-specific memset; the generic memset routine remains unchanged. The gains due to this are significant for falkor, with run time reductions as high as 48%. Here's a sample from the falkor tests: Function: memset Variant: walk simple_memset __memset_falkor __memset_generic ===================================================================== length=256, char=0: 139.96 (-698.28%) 9.07 ( 48.26%) 17.53 length=257, char=0: 140.50 (-699.03%) 9.53 ( 45.80%) 17.58 length=258, char=0: 140.96 (-703.95%) 9.58 ( 45.36%) 17.53 length=259, char=0: 141.56 (-705.16%) 9.53 ( 45.79%) 17.58 length=260, char=0: 142.15 (-710.76%) 9.57 ( 45.39%) 17.53 length=261, char=0: 142.50 (-710.39%) 9.53 ( 45.78%) 17.58 length=262, char=0: 142.97 (-715.09%) 9.57 ( 45.42%) 17.54 length=263, char=0: 143.51 (-716.18%) 9.53 ( 45.80%) 17.58 length=264, char=0: 143.93 (-720.55%) 9.58 ( 45.39%) 17.54 length=265, char=0: 144.56 (-722.07%) 9.53 ( 45.80%) 17.59 length=266, char=0: 144.98 (-726.42%) 9.58 ( 45.42%) 17.54 length=267, char=0: 145.53 (-727.53%) 9.53 ( 45.80%) 17.59 length=268, char=0: 146.25 (-731.81%) 9.53 ( 45.79%) 17.58 length=269, char=0: 146.52 (-735.39%) 9.53 ( 45.66%) 17.54 length=270, char=0: 146.97 (-735.81%) 9.53 ( 45.80%) 17.58 length=271, char=0: 147.54 (-741.08%) 9.58 ( 45.38%) 17.54 length=512, char=0: 268.26 (-1307.85%) 12.06 ( 36.71%) 19.05 length=513, char=0: 268.73 (-1273.89%) 13.56 ( 30.68%) 19.56 length=514, char=0: 269.31 (-1276.89%) 13.56 ( 30.68%) 19.56 length=515, char=0: 269.73 (-1279.05%) 13.56 ( 30.68%) 19.56 length=516, char=0: 270.34 (-1282.24%) 13.56 ( 30.67%) 19.56 length=517, char=0: 270.83 (-1284.71%) 13.56 ( 30.66%) 19.56 length=518, char=0: 271.20 (-1286.54%) 13.56 ( 30.67%) 19.56 length=519, char=0: 271.67 (-1288.67%) 13.65 ( 30.24%) 19.56 length=520, char=0: 272.14 (-1291.04%) 13.65 ( 30.22%) 19.56 length=521, char=0: 272.66 (-1293.69%) 13.65 ( 30.23%) 19.56 length=522, char=0: 273.14 (-1296.13%) 13.65 ( 30.20%) 19.56 length=523, char=0: 273.64 (-1298.75%) 13.65 ( 30.23%) 19.56 length=524, char=0: 274.34 (-1302.16%) 13.66 ( 30.20%) 19.57 length=525, char=0: 274.64 (-1297.78%) 13.56 ( 30.99%) 19.65 length=526, char=0: 275.20 (-1300.04%) 13.56 ( 31.01%) 19.66 length=527, char=0: 275.66 (-1302.86%) 13.56 ( 30.99%) 19.65 length=1024, char=0: 524.46 (-2169.75%) 20.12 ( 12.92%) 23.11 length=1025, char=0: 525.14 (-2124.63%) 21.62 ( 8.40%) 23.61 length=1026, char=0: 525.59 (-2125.36%) 21.88 ( 7.37%) 23.62 length=1027, char=0: 525.98 (-2127.14%) 21.62 ( 8.46%) 23.62 length=1028, char=0: 526.68 (-2131.10%) 21.62 ( 8.42%) 23.61 length=1029, char=0: 527.10 (-2131.70%) 21.79 ( 7.73%) 23.62 length=1030, char=0: 527.54 (-2118.51%) 21.62 ( 9.10%) 23.78 length=1031, char=0: 527.98 (-2136.37%) 21.62 ( 8.43%) 23.61 length=1032, char=0: 528.70 (-2139.38%) 21.62 ( 8.43%) 23.61 length=1033, char=0: 529.25 (-2124.37%) 21.62 ( 9.11%) 23.79 length=1034, char=0: 529.48 (-2142.95%) 21.62 ( 8.43%) 23.61 length=1035, char=0: 530.11 (-2145.13%) 21.62 ( 8.44%) 23.61 
length=1036, char=0: 530.76 (-2147.10%) 21.79 ( 7.73%) 23.62 length=1037, char=0: 531.03 (-2149.45%) 21.62 ( 8.42%) 23.61 length=1038, char=0: 531.64 (-2151.87%) 21.62 ( 8.42%) 23.61 length=1039, char=0: 531.99 (-2151.63%) 21.80 ( 7.75%) 23.63 * sysdeps/aarch64/memset-reg.h: New file. * sysdeps/aarch64/memset.S: Use it. (__memset): Rename to MEMSET macro. [ZVA_MACRO]: Use zva_macro. * sysdeps/aarch64/multiarch/Makefile (sysdep_routines): Add memset_generic and memset_falkor. * sysdeps/aarch64/multiarch/ifunc-impl-list.c (__libc_ifunc_impl_list): Add memset ifuncs. * sysdeps/aarch64/multiarch/init-arch.h (INIT_ARCH): New local variable zva_size. * sysdeps/aarch64/multiarch/memset.c: New file. * sysdeps/aarch64/multiarch/memset_generic.S: New file. * sysdeps/aarch64/multiarch/memset_falkor.S: New file. * sysdeps/aarch64/multiarch/rtld-memset.S: New file. * sysdeps/unix/sysv/linux/aarch64/cpu-features.c (DCZID_DZP_MASK): New macro. (DCZID_BS_MASK): Likewise. (init_cpu_features): Read and set zva_size. * sysdeps/unix/sysv/linux/aarch64/cpu-features.h (struct cpu_features): New member zva_size.
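A hedged sketch of the one-time probe described above (the real code lives in cpu-features.c behind DCZID_DZP_MASK/DCZID_BS_MASK; the function and variable names here are made up, and the bit layout is the architectural DCZID_EL0 definition):

#include <stdint.h>

static uint64_t zva_size;   /* cached once, consulted by the memset ifunc */

static void
init_zva_size (void)
{
  uint64_t dczid;
  __asm__ ("mrs %0, dczid_el0" : "=r" (dczid));
  if (dczid & (1 << 4))                  /* DZP set: DC ZVA is prohibited */
    zva_size = 0;
  else
    zva_size = 4ULL << (dczid & 0xf);    /* BS = log2 of size in 4-byte words */
}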
* aarch64: Fix f{max,min}{f} build for GCC 4.9 and 5 (Adhemerval Zanella, 2017-11-17, 1 file, -0/+6)
GCC 4.9 and 5 do not generate a correct f{max,min}nm instruction for __builtin_{fmax,fmin}{f} without -ffinite-math-only. It is clearly a compiler issue, since the instruction can handle NaN and Inf correctly and GCC 6+ does not show this issue. We can backport a fix to GCC 5, raise the minimum required GCC version for aarch64 (since the GCC 4.9 branch is now closed [1]) and/or add a configure check for this issue. However I think -ffinite-math-only should be safe for these specific implementations and it is a simpler solution. Checked on aarch64-linux-gnu with GCC 5.3.1.

* sysdeps/aarch64/fpu/Makefile (CFLAGS-s_fmax.c, CFLAGS-s_fmaxf.c, CFLAGS-s_fmin.c, CFLAGS-s_fminf.c): New rule: add -ffinite-math-only.

[1] https://gcc.gnu.org/ml/gcc/2016-08/msg00010.html

Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
* nptl: Define __PTHREAD_MUTEX_{NUSERS_AFTER_KIND,USE_UNION} (Adhemerval Zanella, 2017-11-07, 1 file, -0/+2)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch adds two new internal defines to set the internal pthread_mutex_t layout required by the supported ABIS: 1. __PTHREAD_MUTEX_NUSERS_AFTER_KIND which control whether to define __nusers fields before or after __kind. The preferred value for is 0 for new ports and it sets __nusers before __kind. 2. __PTHREAD_MUTEX_USE_UNION which control whether internal __spins and __list members will be place inside an union for linuxthreads compatibility. The preferred value is 0 for ports and it sets to not use an union to define both fields. It fixes the wrong offsets value for __kind value on x86_64-linux-gnu-x32. Checked with a make check run-built-tests=no on all afected ABIs. [BZ #22298] * nptl/allocatestack.c (allocate_stack): Check if __PTHREAD_MUTEX_HAVE_PREV is non-zero, instead if __PTHREAD_MUTEX_HAVE_PREV is defined. * nptl/descr.h (pthread): Likewise. * nptl/nptl-init.c (__pthread_initialize_minimal_internal): Likewise. * nptl/pthread_create.c (START_THREAD_DEFN): Likewise. * sysdeps/nptl/fork.c (__libc_fork): Likewise. * sysdeps/nptl/pthread.h (PTHREAD_MUTEX_INITIALIZER): Likewise. * sysdeps/nptl/bits/thread-shared-types.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New defines. (__pthread_internal_list): Check __PTHREAD_MUTEX_USE_UNION instead of __WORDSIZE for internal layout. (__pthread_mutex_s): Check __PTHREAD_MUTEX_NUSERS_AFTER_KIND instead of __WORDSIZE for internal __nusers layout and __PTHREAD_MUTEX_USE_UNION instead of __WORDSIZE whether to use an union for __spins and __list fields. (__PTHREAD_MUTEX_HAVE_PREV): Define also for __PTHREAD_MUTEX_USE_UNION case. * sysdeps/aarch64/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New defines. * sysdeps/alpha/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/arm/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/hppa/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/ia64/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/m68k/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/microblaze/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/mips/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/nios2/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/powerpc/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/s390/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/sh/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/sparc/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/tile/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. 
* sysdeps/x86/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* nptl: Add tests for internal pthread_mutex_t offsets (Adhemerval Zanella, 2017-11-07, 1 file, -0/+5)
This patch adds a new build test to check the offsets of internal fields of user-visible types. Although currently the only field which is statically initialized to a non-zero value is pthread_mutex_t.__data.__kind, the tests also check the offsets of the __kind, __spins, __elision (if supported), and __list internal members. An internal header (pthread-offsets.h) is added to each major ABI with the reference values. Checked on x86_64-linux-gnu and with a build check for all affected ABIs (aarch64-linux-gnu, alpha-linux-gnu, arm-linux-gnueabihf, hppa-linux-gnu, i686-linux-gnu, ia64-linux-gnu, m68k-linux-gnu, microblaze-linux-gnu, mips64-linux-gnu, mips64-n32-linux-gnu, mips-linux-gnu, powerpc64le-linux-gnu, powerpc-linux-gnu, s390-linux-gnu, s390x-linux-gnu, sh4-linux-gnu, sparc64-linux-gnu, sparcv9-linux-gnu, tilegx-linux-gnu, tilegx-linux-gnu-x32, tilepro-linux-gnu, x86_64-linux-gnu, and x86_64-linux-x32).

* nptl/pthreadP.h (ASSERT_PTHREAD_STRING, ASSERT_PTHREAD_INTERNAL_OFFSET): New macro. * nptl/pthread_mutex_init.c (__pthread_mutex_init): Add build time checks for internal pthread_mutex_t offsets. * sysdeps/aarch64/nptl/pthread-offsets.h (__PTHREAD_MUTEX_NUSERS_OFFSET, __PTHREAD_MUTEX_KIND_OFFSET, __PTHREAD_MUTEX_SPINS_OFFSET, __PTHREAD_MUTEX_ELISION_OFFSET, __PTHREAD_MUTEX_LIST_OFFSET): New macro. * sysdeps/alpha/nptl/pthread-offsets.h: Likewise. * sysdeps/arm/nptl/pthread-offsets.h: Likewise. * sysdeps/hppa/nptl/pthread-offsets.h: Likewise. * sysdeps/i386/nptl/pthread-offsets.h: Likewise. * sysdeps/ia64/nptl/pthread-offsets.h: Likewise. * sysdeps/m68k/nptl/pthread-offsets.h: Likewise. * sysdeps/microblaze/nptl/pthread-offsets.h: Likewise. * sysdeps/mips/nptl/pthread-offsets.h: Likewise. * sysdeps/nios2/nptl/pthread-offsets.h: Likewise. * sysdeps/powerpc/nptl/pthread-offsets.h: Likewise. * sysdeps/s390/nptl/pthread-offsets.h: Likewise. * sysdeps/sh/nptl/pthread-offsets.h: Likewise. * sysdeps/sparc/nptl/pthread-offsets.h: Likewise. * sysdeps/tile/nptl/pthread-offsets.h: Likewise. * sysdeps/x86_64/nptl/pthread-offsets.h: Likewise.

Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
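The ChangeLog names an ASSERT_PTHREAD_INTERNAL_OFFSET macro; a hedged guess at the mechanism it implements is a compile-time offset check along these lines (the numeric value below is a placeholder, not a real per-ABI constant):

#include <assert.h>    /* static_assert */
#include <stddef.h>    /* offsetof */
#include <pthread.h>

#define EXAMPLE_PTHREAD_MUTEX_KIND_OFFSET 16   /* placeholder value */

static_assert (offsetof (pthread_mutex_t, __data.__kind)
               == EXAMPLE_PTHREAD_MUTEX_KIND_OFFSET,
               "pthread_mutex_t.__data.__kind moved: ABI layout changed");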
* aarch64: optimize _dl_tlsdesc_dynamic fast path (Szabolcs Nagy, 2017-11-03, 1 file, -54/+51)
Remove some load/store instructions from the dynamic tlsdesc resolver fast path. This gives around 20% faster tls access in dlopened shared libraries (assuming glibc ran out of static tls space).

* sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_dynamic): Optimize.
* aarch64: Remove barriers from TLS descriptor functions (Szabolcs Nagy, 2017-11-03, 4 files, -342/+1)
Remove ldar synchronization and most lazy TLSDESC initialization related code.

* sysdeps/aarch64/dl-machine.h (elf_machine_runtime_setup): Remove DT_TLSDESC_GOT initialization. * sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_return_lazy): Remove. (_dl_tlsdesc_resolve_rela): Likewise. (_dl_tlsdesc_resolve_hold): Likewise. (_dl_tlsdesc_undefweak): Remove ldar. (_dl_tlsdesc_dynamic): Likewise. * sysdeps/aarch64/dl-tlsdesc.h (_dl_tlsdesc_return_lazy): Remove. (_dl_tlsdesc_resolve_rela): Likewise. (_dl_tlsdesc_resolve_hold): Likewise. * sysdeps/aarch64/tlsdesc.c (_dl_tlsdesc_resolve_rela_fixup): Remove. (_dl_tlsdesc_resolve_hold_fixup): Likewise. (_dl_tlsdesc_resolve_rela): Likewise. (_dl_tlsdesc_resolve_hold): Likewise.
* aarch64: Disable lazy symbol binding of TLSDESC (Szabolcs Nagy, 2017-11-03, 1 file, -5/+14)
Always do TLS descriptor initialization at load time during relocation processing to avoid barriers at every TLS access. In non-dlopened shared libraries the overhead of tls access vs static global access is > 3x bigger when lazy initialization is used (_dl_tlsdesc_return_lazy) compared to bind-now (_dl_tlsdesc_return), so the barriers dominate tls access performance. TLSDESC relocs are in DT_JMPREL, which are processed at load time using elf_machine_lazy_rel, which is only supposed to do lightweight initialization using the DT_TLSDESC_PLT trampoline (the trampoline code jumps to the entry point in DT_TLSDESC_GOT which does the lazy tlsdesc initialization at runtime). This patch changes elf_machine_lazy_rel in aarch64 to do the symbol binding and initialization as if DF_BIND_NOW was set, so the non-lazy code path of elf/do-rel.h was replicated. The static linker could be changed to emit TLSDESC relocs in DT_REL*, which are processed non-lazily, but the goal of this patch is to always guarantee bind-now semantics, even if the binary was produced with an old linker, so the barriers can be dropped in tls descriptor functions. After this change the synchronizing ldar instructions can be dropped as well as the lazy initialization machinery, including the DT_TLSDESC_GOT setup. I believe this should be done on all targets, including ones where no barrier is needed for lazy initialization. There is very little gain in optimizing for a large number of symbolic tlsdesc relocations, which is an extremely uncommon case. And currently the tlsdesc entries are only read-only protected with -z now, and some hardenings against writable JUMPSLOT relocs don't work for TLSDESC, so they are a security hazard. (But to fix that the static linker has to be changed.)

* sysdeps/aarch64/dl-machine.h (elf_machine_lazy_rel): Do symbol binding and initialization non-lazily for R_AARCH64_TLSDESC.
* aarch64: Add missing math Makefile for recent commit (Szabolcs Nagy, 2017-10-23, 1 file, -0/+8)
Without -fno-math-errno, the builtins just do a call instead of inlining a single instruction.
* aarch64: Implement math acceleration via builtins (Michael Collison, 2017-10-23, 30 files, -288/+250)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch converts asm statements into builtins for AArch64. As an example for the file sysdeps/aarch64/fpu/s_ceil.c, we convert the function from double __ceil (double x) { double result; asm ("frintp\t%d0, %d1" : "=w" (result) : "w" (x) ); return result; } into double __ceil (double x) { return __builtin_ceil (x); } Tested on aarch64-linux-gnu with gcc-4.9.4 and gcc-6. * sysdeps/aarch64/fpu/e_sqrt.c (ieee754_sqrt): Replace asm statements with __builtin_sqrt. * sysdeps/aarch64/fpu/e_sqrtf.c (ieee754_sqrtf): Replace asm statements with __builtin_sqrtf. * sysdeps/aarch64/fpu/s_ceil.c (__ceil): Replace asm statements with __builtin_ceil. * sysdeps/aarch64/fpu/s_ceilf.c (__ceilf): Replace asm statements with __builtin_ceilf. * sysdeps/aarch64/fpu/s_floor.c (__floor): Replace asm statements with __builtin_floor. * sysdeps/aarch64/fpu/s_floorf.c (__floorf): Replace asm statements with __builtin_floorf. * sysdeps/aarch64/fpu/s_fma.c (__fma): Replace asm statements with __builtin_fma. * sysdeps/aarch64/fpu/s_fmaf.c (__fmaf): Replace asm statements with __builtin_fmaf. * sysdeps/aarch64/fpu/s_fmax.c (__fmax): Replace asm statements with __builtin_fmax. * sysdeps/aarch64/fpu/s_fmaxf.c (__fmaxf): Replace asm statements with __builtin_fmaxf. * sysdeps/aarch64/fpu/s_fmin.c (__fmin): Replace asm statements with __builtin_fmin. * sysdeps/aarch64/fpu/s_fminf.c (__fminf): Replace asm statements with __builtin_fminf. * sysdeps/aarch64/fpu/s_frint.c: Delete file. * sysdeps/aarch64/fpu/s_frintf.c: Delete file. * sysdeps/aarch64/fpu/s_llrint.c (__llrint): Replace asm statements with builtin_rint and conversion to int. * sysdeps/aarch64/fpu/s_llrintf.c (__llrintf): Likewise. * sysdeps/aarch64/fpu/s_llround.c (__llround): Replace asm statements with builtin_llround. * sysdeps/aarch64/fpu/s_llroundf.c (__llroundf): Likewise. * sysdeps/aarch64/fpu/s_lrint.c (__lrint): Replace asm statements with builtin_rint and conversion to long int. * sysdeps/aarch64/fpu/s_lrintf.c (__lrintf): Likewise. * sysdeps/aarch64/fpu/s_lround.c (__lround): Replace asm statements with builtin_lround. * sysdeps/aarch64/fpu/s_lroundf.c (__lroundf): Replace asm statements with builtin_lroundf. * sysdeps/aarch64/fpu/s_nearbyint.c (__nearbyint): Replace asm statements with __builtin_nearbyint. * sysdeps/aarch64/fpu/s_nearbyintf.c (__nearbyintf): Replace asm statements with __builtin_nearbyintf. * sysdeps/aarch64/fpu/s_rint.c (__rint): Replace asm statements with __builtin_rint. * sysdeps/aarch64/fpu/s_rintf.c (__rintf): Replace asm statements with __builtin_rintf. * sysdeps/aarch64/fpu/s_round.c (__round): Replace asm statements with __builtin_round. * sysdeps/aarch64/fpu/s_roundf.c (__roundf): Replace asm statements with __builtin_roundf. * sysdeps/aarch64/fpu/s_trunc.c (__trunc): Replace asm statements with __builtin_trunc. * sysdeps/aarch64/fpu/s_truncf.c (__truncf): Replace asm statements with __builtin_truncf. * sysdeps/aarch64/fpu/Makefile: Build e_sqrt[f].c with -fno-math-errno.
* [AARCH64] Rewrite elf_machine_load_address using _DYNAMIC symbol (Szabolcs Nagy, 2017-10-18, 1 file, -34/+5)
This patch rewrites the aarch64 elf_machine_load_address to use the special _DYNAMIC symbol instead of _dl_start. The static address of the _DYNAMIC symbol is stored in the first GOT entry. Here is the change which makes this solution work (part of binutils 2.24): https://sourceware.org/ml/binutils/2013-06/msg00248.html The i386 and x86_64 targets use the same method. The original implementation relies on the R_AARCH64_ABS32 relocation being resolved at link time and on the static address fitting in 32 bits. However, in LP64 the address is normally 64 bits. Here is the C version, which should be portable in all cases.

* sysdeps/aarch64/dl-machine.h (elf_machine_load_address): Use _DYNAMIC symbol to calculate load address.
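A hedged paraphrase of the approach in C (not the exact dl-machine.h code): GOT[0] holds the link-time address of _DYNAMIC, while &_DYNAMIC observed at run time is the relocated address, so their difference is the load address.

#include <link.h>

extern ElfW(Dyn) _DYNAMIC[] __attribute__ ((visibility ("hidden")));
extern const ElfW(Addr) _GLOBAL_OFFSET_TABLE_[] __attribute__ ((visibility ("hidden")));

static inline ElfW(Addr)
load_address_sketch (void)
{
  /* run-time address minus link-time address (stashed in GOT[0]) */
  return (ElfW(Addr)) &_DYNAMIC - _GLOBAL_OFFSET_TABLE_[0];
}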
* aarch64: Optimized implementation of memmove for Qualcomm Falkor (Siddhesh Poyarekar, 2017-10-05, 4 files, -2/+241)
    This is an optimized memmove implementation for the Qualcomm Falkor
    processor core.  Due to the way the falkor memcpy needs to be written,
    code cannot easily be shared between memmove and memcpy as it is in the
    other aarch64 memcpy implementations, which is why this routine is
    separate.  The underlying principle is the same as in memcpy: use
    registers whose numbers share the same lower 4 bits for fetching the
    same stream, which optimizes hardware prefetcher performance.

    The memcpy copy loop copies 64 bytes at a time using the same register
    pair, since that is the way to train the hardware prefetcher on the
    falkor core.  memmove cannot quite do that because it needs to avoid
    overlaps, so it does the next best thing: a 32 byte loop with a 32 byte
    end (prefetching a loop ahead to account for overlapping locations),
    with register pairs that alias so that they hit the same prefetcher
    stream.  Due to this difference in loop size the two currently have to
    be separate implementations, but efforts are under way to get memmove to
    fall back into memcpy whenever it can, without simply duplicating all of
    the code.

    Performance: the routine fares around 20-25% better than the generic
    memmove for most medium to large sizes (i.e. > 128 bytes) on the new
    walking memmove benchmark (memmove-walk), with an unexplained regression
    between 1K and 2K.  The minor regression is worth looking into, but the
    remaining gains are significant enough that we would like this included
    upstream while we investigate the cause of the regression.

    Here is a snippet of the numbers as generated from the microbenchmark by
    the compare_strings script.  Comparisons are against __memmove_generic:

    Function: memmove
    Variant: walk
                    __memmove_thunderx      __memmove_falkor        __memmove_generic
    ========================================================================================================================
    <snip>
    length=16384: 12508800.00 ( 6.09%) 11486800.00 ( 13.76%) 13319600.00
    length=16400: 13614200.00 ( -0.67%) 11585000.00 ( 14.33%) 13523600.00
    length=16385: 13448400.00 ( 0.10%) 11732700.00 ( 12.84%) 13461200.00
    length=16399: 13594100.00 ( -0.22%) 11859600.00 ( 12.57%) 13564400.00
    length=16386: 13211600.00 ( 1.13%) 11503800.00 ( 13.91%) 13362400.00
    length=16398: 13218600.00 ( 2.12%) 11573200.00 ( 14.30%) 13504700.00
    length=16387: 13510900.00 ( -0.37%) 11744200.00 ( 12.76%) 13461300.00
    length=16397: 13603700.00 ( -0.15%) 11878200.00 ( 12.55%) 13583200.00
    length=16388: 13461700.00 ( -0.13%) 11558000.00 ( 14.03%) 13444100.00
    length=16396: 13517500.00 ( -0.03%) 11561300.00 ( 14.45%) 13513900.00
    length=16389: 13534100.00 ( 0.17%) 11756800.00 ( 13.28%) 13556900.00
    length=16395: 13585600.00 ( 0.11%) 11791800.00 ( 13.30%) 13601200.00
    length=16390: 13480100.00 ( -0.13%) 11685500.00 ( 13.20%) 13462100.00
    length=16394: 13529900.00 ( -0.23%) 11549800.00 ( 14.43%) 13498200.00
    length=16391: 13595400.00 ( -0.26%) 11768200.00 ( 13.22%) 13560600.00
    length=16393: 13567000.00 ( 0.20%) 11779700.00 ( 13.35%) 13594700.00
    length=32768: 71308800.00 ( -6.53%) 50220800.00 ( 24.98%) 66939200.00
    length=32784: 72100800.00 (-11.55%) 50114100.00 ( 22.47%) 64636300.00
    length=32769: 71767000.00 ( -7.10%) 51238400.00 ( 23.54%) 67010000.00
    length=32783: 70113700.00 (-40.95%) 51129000.00 ( -2.78%) 49744400.00
    length=32770: 71367600.00 ( -6.52%) 50244700.00 ( 25.01%) 67000900.00
    length=32782: 64366700.00 ( 4.71%) 50101400.00 ( 25.83%) 67545600.00
    length=32771: 71440100.00 ( -6.51%) 51263900.00 ( 23.57%) 67074900.00
    length=32781: 66993000.00 ( 0.34%) 51108300.00 ( 23.97%) 67220300.00
    length=32772: 71443900.00 (-60.50%) 50062100.00 (-12.47%) 44512600.00
    length=32780: 71759100.00 ( -6.58%) 50263200.00 ( 25.35%) 67328600.00
    length=32773: 71714900.00 (-33.21%) 51076600.00 ( 5.12%) 53835400.00
    length=32779: 71756900.00 ( -6.56%) 51290800.00 ( 23.83%) 67337800.00
    length=32774: 59689300.00 (-34.55%) 50068400.00 (-12.86%) 44363300.00
    length=32778: 71847500.00 (-18.20%) 50084100.00 ( 17.61%) 60786500.00
    length=32775: 71599300.00 ( -6.54%) 51278200.00 ( 23.70%) 67204800.00
    length=32777: 71862900.00 (-60.85%) 51094000.00 (-14.36%) 44677900.00
    length=65536: 282848000.00 ( -6.60%) 199187000.00 ( 24.93%) 265325000.00
    length=65552: 243285000.00 (-41.61%) 198512000.00 (-15.54%) 171805000.00
    length=65537: 255415000.00 (-23.47%) 202499000.00 ( 2.11%) 206858000.00
    length=65551: 280122000.00 (-62.95%) 203349000.00 (-18.29%) 171911000.00
    length=65538: 283676000.00 (-14.46%) 198368000.00 ( 19.96%) 247848000.00
    length=65550: 275566000.00 (-51.76%) 198494000.00 ( -9.31%) 181581000.00
    length=65539: 283699000.00 ( -6.58%) 203453000.00 ( 23.57%) 266195000.00
    length=65549: 286572000.00 ( -6.65%) 202607000.00 ( 24.60%) 268712000.00
    length=65540: 283710000.00 ( -6.59%) 199161000.00 ( 25.17%) 266160000.00
    length=65548: 237573000.00 ( 11.48%) 198462000.00 ( 26.06%) 268395000.00
    length=65541: 284150000.00 ( -6.58%) 203273000.00 ( 23.75%) 266600000.00
    length=65547: 286250000.00 ( -6.70%) 202594000.00 ( 24.48%) 268263000.00
    length=65542: 284167000.00 ( -6.60%) 199122000.00 ( 25.31%) 266584000.00
    length=65546: 285656000.00 ( -6.59%) 198443000.00 ( 25.95%) 268002000.00
    length=65543: 284600000.00 ( -6.58%) 203247000.00 ( 23.89%) 267030000.00
    length=65545: 285665000.00 ( -6.40%) 202575000.00 ( 24.55%) 268472000.00
    <snip>

        * sysdeps/aarch64/multiarch/Makefile (sysdep_routines): Add memmove_falkor.
        * sysdeps/aarch64/multiarch/ifunc-impl-list.c (__libc_ifunc_impl_list): Likewise.
        * sysdeps/aarch64/multiarch/memmove.c: Likewise.
        * sysdeps/aarch64/multiarch/memmove_falkor.S: New file.
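The overlap constraint that keeps memmove separate from memcpy can be illustrated in portable C; this is only a sketch of the problem the entry above describes, not the falkor assembly routine.

      #include <stddef.h>
      #include <stdint.h>

      /* If dst is within n bytes ahead of src, a forward copy would clobber
         bytes that have not been read yet, so the copy must run backwards.
         The unsigned subtraction handles both the dst < src case and the
         non-overlapping case in one test.  */
      void *
      toy_memmove (void *dst, const void *src, size_t n)
      {
        unsigned char *d = dst;
        const unsigned char *s = src;

        if ((uintptr_t) d - (uintptr_t) s >= n)
          for (size_t i = 0; i < n; i++)        /* forward copy is safe */
            d[i] = s[i];
        else
          for (size_t i = n; i > 0; i--)        /* copy backwards */
            d[i - 1] = s[i - 1];
        return dst;
      }

The falkor routine deals with the same constraint in hand-written assembly, copying 32 bytes per iteration with register pairs chosen so that they hit the same prefetcher stream.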
* aarch64: don't use MIN in dl-machine.h  (Szabolcs Nagy, 2017-10-04, 1 file, -1/+2)
    MIN is used, but param.h may not be included, so expand its single use
    inline.

        * sysdeps/aarch64/dl-machine.h (elf_machine_rela): Expand MIN.
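Concretely, the change amounts to writing the macro out as the equivalent conditional expression; the names below are placeholders, not the actual variables in elf_machine_rela.

      #include <stddef.h>

      /* Stand-in for the single MIN use: pick the smaller of two sizes
         without depending on <sys/param.h>.  */
      static size_t
      min_size (size_t a, size_t b)
      {
        /* Was: MIN (a, b).  */
        return a < b ? a : b;
      }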
* AArch64: update libm-test-ulps  (Szabolcs Nagy, 2017-09-28, 1 file, -2/+8)
    Update for new expf and logf.

        * sysdeps/aarch64/libm-test-ulps: Update.
* Optimized generic expf and exp2f with wrappers  (Szabolcs Nagy, 2017-09-25, 1 file, -0/+20)
    Based on new expf and exp2f code from
    https://github.com/ARM-software/optimized-routines/

    With wrapper on aarch64:
      expf reciprocal-throughput: 2.3x faster
      expf latency: 1.7x faster

    Without wrapper on aarch64:
      expf reciprocal-throughput: 3.3x faster
      expf latency: 1.7x faster

    Without wrapper on aarch64:
      exp2f reciprocal-throughput: 2.8x faster
      exp2f latency: 1.3x faster

    libm.so size on aarch64:
      .text size: -152 bytes
      .rodata size: -1740 bytes

    expf/exp2f worst case nearest rounding error: 0.502 ulp
    worst case non-nearest rounding error: 1 ulp

    Error checks are inline and errno setting is in separate tail-called
    functions, but the wrappers are kept in this patch to handle the
    _LIB_VERSION == _SVID_ case.  (So e.g. errno is set twice for expf calls
    and once for __expf_finite calls on targets where the new code is used.)

    Double precision arithmetic is used, which is expected to be faster on
    most targets (including soft-float) than single precision, and it makes
    it easier to get a good precision result.

    Const data is kept in a separate translation unit, which complicates
    maintenance a bit but is expected to give good code for literal loads on
    most targets and allows sharing data across expf, exp2f and powf.  (This
    data is disabled on i386, m68k and ia64, which have their own expf,
    exp2f and powf code.)

    Some details may need target specific tweaks:
    - The best convert-and-round-to-int operation in the argument reduction
      may differ across targets.
    - The code was optimized on an fma target; the optimal polynomial
      evaluation may be different without fma.
    - gcc does not always generate good code for fp bit representation
      access via unions, or it may be inherently slow on some targets.

    The libm-test-ulps will need adjustment because:
    - The argument reduction ideally uses nearest-rounded rint, but that is
      not efficient on most targets, so the polynomial can get evaluated on
      a wider interval in non-nearest rounding modes, making 1 ulp errors
      common in that case.
    - The polynomial is evaluated such that it may have 1 ulp error on
      negative tiny inputs with upward rounding.

        * math/Makefile (type-float-routines): Add math_errf and e_exp2f_data.
        * sysdeps/aarch64/fpu/math_private.h (TOINT_INTRINSICS): Define.
        (roundtoint, converttoint): Likewise.
        * sysdeps/ieee754/flt-32/e_expf.c: New implementation.
        * sysdeps/ieee754/flt-32/e_exp2f.c: New implementation.
        * sysdeps/ieee754/flt-32/e_exp2f_data.c: New file.
        * sysdeps/ieee754/flt-32/math_config.h: New file.
        * sysdeps/ieee754/flt-32/math_errf.c: New file.
        * sysdeps/ieee754/flt-32/t_exp2f.h: Remove.
        * sysdeps/i386/fpu/e_exp2f_data.c: New file.
        * sysdeps/i386/fpu/math_errf.c: New file.
        * sysdeps/ia64/fpu/e_exp2f_data.c: New file.
        * sysdeps/ia64/fpu/math_errf.c: New file.
        * sysdeps/m68k/m680x0/fpu/e_exp2f_data.c: New file.
        * sysdeps/m68k/m680x0/fpu/math_errf.c: New file.
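For aarch64, the TOINT_INTRINSICS hooks named in the ChangeLog could plausibly look like the sketch below, assuming round and lround each map to a single instruction (frinta/fcvtas); the real math_private.h may differ.

      #include <stdint.h>

      #define TOINT_INTRINSICS 1

      /* Round to nearest, ties away from zero: one frinta on AArch64.  */
      static inline double
      roundtoint (double x)
      {
        return __builtin_round (x);
      }

      /* Convert to a signed integer with the same rounding: fcvtas.  */
      static inline int32_t
      converttoint (double x)
      {
        return __builtin_lround (x);
      }

The idea is to give the generic argument reduction a cheap round-and-convert, in line with the note above that the best convert-and-round-to-int operation varies across targets.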
* Enable unwind info in libc-start.c and backtrace.c  (Wilco Dijkstra, 2017-09-19, 1 file, -4/+0)
    Add unwind info to __libc_start_main so that unwinding continues one
    extra level to _start.  Similarly add unwind info to backtrace.  Given
    that many targets require this, do it in a general way.

        * csu/Makefile: Add -funwind-tables to libc-start.c.
        * debug/Makefile: Add -funwind-tables to backtrace.c.
        * sysdeps/aarch64/Makefile: Remove CFLAGS-backtrace.c.
        * sysdeps/arm/Makefile: Likewise.
        * sysdeps/i386/Makefile: Likewise.
        * sysdeps/m68k/Makefile: Likewise.
        * sysdeps/mips/Makefile: Likewise.
        * sysdeps/nios2/Makefile: Likewise.
        * sysdeps/sh/Makefile: Likewise.
        * sysdeps/sparc/Makefile: Likewise.
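A small test program makes the effect visible: with unwind info in __libc_start_main (and in backtrace itself), the printed trace should continue past main to __libc_start_main and _start rather than stopping early. The program below is only for illustration and is not part of the patch.

      #include <execinfo.h>

      int
      main (void)
      {
        void *frames[16];
        int n = backtrace (frames, 16);
        /* Print one line per frame to stdout (fd 1).  */
        backtrace_symbols_fd (frames, n, 1);
        return 0;
      }

Linking with -rdynamic makes the executable's own symbol names show up in the output.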