Commit log for path: root/sysdeps
* Improve generic strcspn performance (Wilco Dijkstra, 2016-04-01; 1 file changed, -18/+1)

Improve strcspn performance using a much faster algorithm. It is kept simple so it works well on most targets. It is generally at least 10 times faster than the existing implementation on bench-strcspn on a few AArch64 implementations, and for some tests 100 times as fast (repeatedly calling strchr on a small string is extremely slow...).

In fact the string/bits/string2.h inlines no longer make sense: GCC already uses strlen if reject is an empty string, strchrnul is 5 times as fast as __strcspn_c1, and __strcspn_c2 and __strcspn_c3 are slower than the strcspn main loop for large strings (though reject lengths 2-4 could be special-cased in the future to gain even more performance).

Tested on x86_64, i686, and aarch64.

    * string/Versions (libc): Add GLIBC_2.24.
    * string/strcspn.c (strcspn): Rewrite function.
    * string/bits/string2.h (strcspn): Use __builtin_strcspn. (__strcspn_c1): Remove inline function. (__strcspn_c2): Likewise. (__strcspn_c3): Likewise.
    * string/string-inlines.c [SHLIB_COMPAT(libc, GLIBC_2_1_1, GLIBC_2_24)] (__strcspn_c1): Add compatibility symbol. [SHLIB_COMPAT(libc, GLIBC_2_1_1, GLIBC_2_24)] (__strcspn_c2): Likewise. [SHLIB_COMPAT(libc, GLIBC_2_1_1, GLIBC_2_24)] (__strcspn_c3): Likewise.
    * sysdeps/i386/string-inlines.c: Include generic string-inlines.c.
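A minimal sketch of the kind of table-driven approach such a rewrite typically uses (an illustration under that assumption, not the actual glibc code): mark every byte of reject in a 256-entry table, then scan until a marked byte is hit.

```c
#include <stddef.h>

/* Table-driven strcspn sketch: the NUL terminator is pre-marked, so
   the scan loop needs no separate end-of-string check.  */
size_t
strcspn_sketch (const char *s, const char *reject)
{
  unsigned char table[256] = { 0 };
  table[0] = 1;                               /* stop at the terminating NUL */
  for (const unsigned char *r = (const unsigned char *) reject; *r; ++r)
    table[*r] = 1;

  const unsigned char *p = (const unsigned char *) s;
  size_t i = 0;
  while (table[p[i]] == 0)                    /* one load per byte, no strchr calls */
    ++i;
  return i;
}
```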
* S390: Use ahi instead of aghi in 32bit _dl_runtime_resolve (Stefan Liebler, 2016-04-01; 1 file changed, -1/+1)

This patch uses ahi instead of aghi in 32-bit _dl_runtime_resolve to adjust the stack pointer. This is not a functional change, but a cosmetic one.

ChangeLog:

    * sysdeps/s390/s390-32/dl-trampoline.h (_dl_runtime_resolve): Use ahi instead of aghi to adjust stack pointer.
* Increase internal precision of ldbl-128ibm decimal printf [BZ #19853] (Paul E. Murphy, 2016-03-31; 1 file changed, -7/+18)

When the signs differ, the precision of the conversion sometimes drops below 106 bits. This strategy is identical to the hexadecimal variant.

I've refactored tst-sprintf3 to enable testing a value with more than 30 significant digits in order to demonstrate this failure and its solution. Additionally, this implicitly fixes a typo in the shift quantities when subtracting from the high mantissa to compute the difference.
* Add x86-64 memset with unaligned store and rep stosb (H.J. Lu, 2016-03-31; 6 files changed, -1/+335)

Implement x86-64 memset with unaligned store and rep stosb. Support 16-byte, 32-byte and 64-byte vector register sizes. A single file provides 2 implementations of memset, one with rep stosb and the other without rep stosb. They share the same code when size is between 2 times the vector register size and REP_STOSB_THRESHOLD, which defaults to 2KB.

Key features:

1. Use overlapping store to avoid branch (sketched below).
2. For sizes <= 4 times the vector register size, fully unroll the loop.
3. For sizes > 4 times the vector register size, store 4 times the vector register size at a time.

[BZ #19881]

    * sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add memset-sse2-unaligned-erms, memset-avx2-unaligned-erms and memset-avx512-unaligned-erms.
    * sysdeps/x86_64/multiarch/ifunc-impl-list.c (__libc_ifunc_impl_list): Test __memset_chk_sse2_unaligned, __memset_chk_sse2_unaligned_erms, __memset_chk_avx2_unaligned, __memset_chk_avx2_unaligned_erms, __memset_chk_avx512_unaligned, __memset_chk_avx512_unaligned_erms, __memset_sse2_unaligned, __memset_sse2_unaligned_erms, __memset_erms, __memset_avx2_unaligned, __memset_avx2_unaligned_erms, __memset_avx512_unaligned_erms and __memset_avx512_unaligned.
    * sysdeps/x86_64/multiarch/memset-avx2-unaligned-erms.S: New file.
    * sysdeps/x86_64/multiarch/memset-avx512-unaligned-erms.S: Likewise.
    * sysdeps/x86_64/multiarch/memset-sse2-unaligned-erms.S: Likewise.
    * sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S: Likewise.
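The overlapping-store trick in key feature 1 can be shown in plain C. This sketch assumes a 16-byte "vector" and a size between 16 and 32 bytes; the real code uses SSE2/AVX2/AVX-512 registers and covers all size classes.

```c
#include <stdint.h>
#include <string.h>

/* Requires 16 <= n <= 32.  One store at the start and one at the end
   cover every byte; the two stores may overlap, so no branch on the
   exact length is needed.  */
static void
memset_16_to_32 (void *dst, int c, size_t n)
{
  uint8_t v[16];
  memset (v, c, sizeof v);                               /* stands in for a filled vector register */
  memcpy (dst, v, sizeof v);                             /* head store */
  memcpy ((uint8_t *) dst + n - sizeof v, v, sizeof v);  /* tail store, may overlap the head */
}
```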
* Add x86-64 memmove with unaligned load/store and rep movsb (H.J. Lu, 2016-03-31; 6 files changed, -1/+594)

Implement x86-64 memmove with unaligned load/store and rep movsb. Support 16-byte, 32-byte and 64-byte vector register sizes. When size <= 8 times the vector register size, there is no check for address overlap between source and destination. Since the overhead for the overlap check is small when size > 8 times the vector register size, memcpy is an alias of memmove.

A single file provides 2 implementations of memmove, one with rep movsb and the other without rep movsb. They share the same code when size is between 2 times the vector register size and REP_MOVSB_THRESHOLD, which is 2KB for the 16-byte vector register size and scaled up for larger vector register sizes.

Key features (the copy-direction logic is sketched after this entry):

1. Use overlapping load and store to avoid branch.
2. For size <= 8 times the vector register size, load all sources into registers and store them together.
3. If there is no address overlap between source and destination, copy from both ends with 4 times the vector register size at a time.
4. If address of destination > address of source, backward copy 8 times the vector register size at a time.
5. Otherwise, forward copy 8 times the vector register size at a time.
6. Use rep movsb only for forward copy. Avoid slow backward rep movsb by falling back to backward copy 8 times the vector register size at a time.
7. Skip when address of destination == address of source.

[BZ #19776]

    * sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add memmove-sse2-unaligned-erms, memmove-avx-unaligned-erms and memmove-avx512-unaligned-erms.
    * sysdeps/x86_64/multiarch/ifunc-impl-list.c (__libc_ifunc_impl_list): Test __memmove_chk_avx512_unaligned_2, __memmove_chk_avx512_unaligned_erms, __memmove_chk_avx_unaligned_2, __memmove_chk_avx_unaligned_erms, __memmove_chk_sse2_unaligned_2, __memmove_chk_sse2_unaligned_erms, __memmove_avx_unaligned_2, __memmove_avx_unaligned_erms, __memmove_avx512_unaligned_2, __memmove_avx512_unaligned_erms, __memmove_erms, __memmove_sse2_unaligned_2, __memmove_sse2_unaligned_erms, __memcpy_chk_avx512_unaligned_2, __memcpy_chk_avx512_unaligned_erms, __memcpy_chk_avx_unaligned_2, __memcpy_chk_avx_unaligned_erms, __memcpy_chk_sse2_unaligned_2, __memcpy_chk_sse2_unaligned_erms, __memcpy_avx_unaligned_2, __memcpy_avx_unaligned_erms, __memcpy_avx512_unaligned_2, __memcpy_avx512_unaligned_erms, __memcpy_sse2_unaligned_2, __memcpy_sse2_unaligned_erms, __memcpy_erms, __mempcpy_chk_avx512_unaligned_2, __mempcpy_chk_avx512_unaligned_erms, __mempcpy_chk_avx_unaligned_2, __mempcpy_chk_avx_unaligned_erms, __mempcpy_chk_sse2_unaligned_2, __mempcpy_chk_sse2_unaligned_erms, __mempcpy_avx512_unaligned_2, __mempcpy_avx512_unaligned_erms, __mempcpy_avx_unaligned_2, __mempcpy_avx_unaligned_erms, __mempcpy_sse2_unaligned_2, __mempcpy_sse2_unaligned_erms and __mempcpy_erms.
    * sysdeps/x86_64/multiarch/memmove-avx-unaligned-erms.S: New file.
    * sysdeps/x86_64/multiarch/memmove-avx512-unaligned-erms.S: Likewise.
    * sysdeps/x86_64/multiarch/memmove-sse2-unaligned-erms.S: Likewise.
    * sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S: Likewise.
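A byte-wise sketch of the copy-direction decision in features 4-7 (illustrative only; the real code moves 4 or 8 vector registers per iteration and switches to rep movsb above the threshold):

```c
#include <stddef.h>

static void *
memmove_sketch (void *dst, const void *src, size_t n)
{
  unsigned char *d = dst;
  const unsigned char *s = src;

  if (d == s)                       /* feature 7: nothing to do */
    return dst;
  if (d < s || d >= s + n)          /* no overlap hazard: copy forward */
    for (size_t i = 0; i < n; i++)
      d[i] = s[i];
  else                              /* dst inside [src, src+n): copy backward */
    for (size_t i = n; i-- > 0; )
      d[i] = s[i];
  return dst;
}
```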
* S390: Extend structs La_s390_regs / La_s390_retval with vector-registers (Stefan Liebler, 2016-03-31; 3 files changed, -65/+124)

Starting with z13, vector registers can also occur as argument registers. Thus the passed input/output register structs for the la_s390_[32|64]_gnu_plt[enter|exit] functions should reflect those new registers. This patch extends the structs La_s390_regs and La_s390_retval and adjusts _dl_runtime_profile() to handle those fields in case of running on a z13 machine.

ChangeLog:

    * sysdeps/s390/bits/link.h (La_s390_vr): New typedef. (La_s390_32_regs): Append vector registers lr_v24-lr_v31. (La_s390_64_regs): Likewise. (La_s390_32_retval): Append vector register lrv_v24. (La_s390_64_retval): Likewise.
    * sysdeps/s390/s390-32/dl-trampoline.h (_dl_runtime_profile): Handle extended structs La_s390_32_regs and La_s390_32_retval.
    * sysdeps/s390/s390-64/dl-trampoline.h (_dl_runtime_profile): Handle extended structs La_s390_64_regs and La_s390_64_retval.
* S390: Save and restore fprs/vrs while resolving symbols (Stefan Liebler, 2016-03-31; 6 files changed, -248/+496)

On s390, no fprs/vrs were saved while resolving a symbol via _dl_runtime_resolve/_dl_runtime_profile.

According to the ABI, the fpr arguments are defined as call-clobbered. In leaf functions, gcc 4.9 and newer can use fprs for saving/restoring gprs instead of saving them to the stack. If gcc does this in one of the resolver functions, then the floating point arguments of a library function are invalid for the first library-function call. Thus, this patch saves/restores the fprs around the resolving code.

The same could occur for vector registers. Furthermore an ifunc resolver could also clobber the vector/floating point argument registers. Thus this patch provides the further variants _dl_runtime_resolve_vx/_dl_runtime_profile_vx, which are used if the kernel claims that we run on a machine with vector registers.

Furthermore, if _dl_runtime_profile calls _dl_call_pltexit, the pointers to the inregs/outregs structs were set up incorrectly. Now they point to the correct location in the stack frame. Before branching back to the caller, the return values are now restored instead of containing the return values of the _dl_call_pltexit() call. On s390-32, an endless loop occurred if _dl_call_pltexit() should be called. Now, this code path branches to this function instead of just after the preceding basr instruction.

ChangeLog:

    * sysdeps/s390/s390-32/dl-trampoline.S: Include dl-trampoline.h twice to create a non-vector/vector version for _dl_runtime_resolve and _dl_runtime_profile. Move implementation to ...
    * sysdeps/s390/s390-32/dl-trampoline.h: ... here. (_dl_runtime_resolve): Save and restore fprs/vrs. (_dl_runtime_profile): Save and restore vrs and fix some issues if _dl_call_pltexit is called.
    * sysdeps/s390/s390-32/dl-machine.h (elf_machine_runtime_setup): Choose the correct resolver function if running on a machine with vx.
    * sysdeps/s390/s390-64/dl-trampoline.S: Include dl-trampoline.h twice to create a non-vector/vector version for _dl_runtime_resolve and _dl_runtime_profile. Move implementation to ...
    * sysdeps/s390/s390-64/dl-trampoline.h: ... here. (_dl_runtime_resolve): Save and restore fprs/vrs. (_dl_runtime_profile): Save and restore vrs and fix some issues if _dl_call_pltexit is called.
    * sysdeps/s390/s390-64/dl-machine.h (elf_machine_runtime_setup): Choose the correct resolver function if running on a machine with vx.
* [microblaze] Remove __ASSUME_FUTIMESAT (Joseph Myers, 2016-03-29; 2 files changed, -33/+0)

MicroBlaze has a special version of futimesat.c because it gained the futimesat syscall later than other non-asm-generic architectures. Now the minimum kernel is recent enough that this syscall can always be assumed to be present for MicroBlaze, so this patch removes the special version and the __ASSUME_FUTIMESAT macro, resulting in the sysdeps/unix/sysv/linux/futimesat.c version being used.

Untested.

    * sysdeps/unix/sysv/linux/microblaze/kernel-features.h (__ASSUME_FUTIMESAT): Remove macro.
    * sysdeps/unix/sysv/linux/microblaze/futimesat.c: Remove file.
* Initial Enhanced REP MOVSB/STOSB (ERMS) support (H.J. Lu, 2016-03-28; 1 file changed, -0/+4)

Newer Intel processors support Enhanced REP MOVSB/STOSB (ERMS), which has a feature bit in CPUID. This patch adds the Enhanced REP MOVSB/STOSB (ERMS) bit to x86 cpu-features.

    * sysdeps/x86/cpu-features.h (bit_cpu_ERMS): New. (index_cpu_ERMS): Likewise. (reg_ERMS): Likewise.
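The ERMS feature bit lives in CPUID leaf 7 (subleaf 0), EBX bit 9. A small stand-alone check using GCC's <cpuid.h> (independent of the glibc-internal bit_cpu_ERMS/index_cpu_ERMS macros):

```c
#include <cpuid.h>
#include <stdio.h>

int
main (void)
{
  unsigned int eax, ebx, ecx, edx;

  /* ERMS: CPUID.(EAX=7, ECX=0):EBX bit 9.  */
  if (__get_cpuid_count (7, 0, &eax, &ebx, &ecx, &edx)
      && (ebx & (1u << 9)))
    puts ("ERMS (Enhanced REP MOVSB/STOSB) supported");
  else
    puts ("ERMS not supported");
  return 0;
}
```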
* Synchronize <sys/personality.h> with kernel headers (Aurelien Jarno, 2016-03-28; 1 file changed, -0/+3)

<sys/personality.h> is out of sync with kernel headers, missing the UNAME26, FDPIC_FUNCPTRS and PER_LINUX_FDPIC entries. Fix that.

ChangeLog:

    * sysdeps/unix/sysv/linux/sys/personality.h (UNAME26, FDPIC_FUNCPTRS, PER_LINUX_FDPIC): Add.
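For illustration, the newly added UNAME26 constant can be exercised through the personality call itself; personality(0xffffffff) is the usual idiom to query the current persona without changing it.

```c
#include <stdio.h>
#include <sys/personality.h>

int
main (void)
{
  /* 0xffffffff queries the current persona without changing it.  */
  int persona = personality (0xffffffff);
  if (persona == -1)
    return 1;
  printf ("current persona: %#x\n", persona);

  /* UNAME26 makes uname(2) report a fake 2.6.x version for legacy
     programs that parse the kernel version string.  */
  if (personality (persona | UNAME26) == -1)
    return 1;
  return 0;
}
```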
* Make __memcpy_avx512_no_vzeroupper an alias (H.J. Lu, 2016-03-28; 3 files changed, -430/+404)

Since x86-64 memcpy-avx512-no-vzeroupper.S implements memmove, make __memcpy_avx512_no_vzeroupper an alias of __memmove_avx512_no_vzeroupper to reduce code size of libc.so.

    * sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Remove memcpy-avx512-no-vzeroupper.
    * sysdeps/x86_64/multiarch/memcpy-avx512-no-vzeroupper.S: Renamed to ...
    * sysdeps/x86_64/multiarch/memmove-avx512-no-vzeroupper.S: This. (MEMCPY): Don't define. (MEMCPY_CHK): Likewise. (MEMPCPY): Likewise. (MEMPCPY_CHK): Likewise. (MEMPCPY_CHK): Renamed to ... (__mempcpy_chk_avx512_no_vzeroupper): This. (MEMPCPY): Renamed to ... (__mempcpy_avx512_no_vzeroupper): This. (MEMCPY_CHK): Renamed to ... (__memmove_chk_avx512_no_vzeroupper): This. (MEMCPY): Renamed to ... (__memmove_avx512_no_vzeroupper): This. (__memcpy_avx512_no_vzeroupper): New alias. (__memcpy_chk_avx512_no_vzeroupper): Likewise.
* Implement x86-64 multiarch mempcpy in memcpy (H.J. Lu, 2016-03-28; 9 files changed, -57/+69)

Implement x86-64 multiarch mempcpy in memcpy to share most of the code. It reduces the code size of libc.so.

[BZ #18858]

    * sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Remove mempcpy-ssse3, mempcpy-ssse3-back, mempcpy-avx-unaligned and mempcpy-avx512-no-vzeroupper.
    * sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S (MEMPCPY_CHK): New. (MEMPCPY): Likewise.
    * sysdeps/x86_64/multiarch/memcpy-avx512-no-vzeroupper.S (MEMPCPY_CHK): New. (MEMPCPY): Likewise.
    * sysdeps/x86_64/multiarch/memcpy-ssse3-back.S (MEMPCPY_CHK): New. (MEMPCPY): Likewise.
    * sysdeps/x86_64/multiarch/memcpy-ssse3.S (MEMPCPY_CHK): New. (MEMPCPY): Likewise.
    * sysdeps/x86_64/multiarch/mempcpy-avx-unaligned.S: Removed.
    * sysdeps/x86_64/multiarch/mempcpy-avx512-no-vzeroupper.S: Likewise.
    * sysdeps/x86_64/multiarch/mempcpy-ssse3-back.S: Likewise.
    * sysdeps/x86_64/multiarch/mempcpy-ssse3.S: Likewise.
* [x86] Add a feature bit: Fast_Unaligned_Copy (H.J. Lu, 2016-03-28; 3 files changed, -2/+17)

On AMD processors, memcpy optimized with unaligned SSE load is slower than memcpy optimized with aligned SSSE3, while other string functions are faster with unaligned SSE load. A feature bit, Fast_Unaligned_Copy, is added to select memcpy optimized with unaligned SSE load.

[BZ #19583]

    * sysdeps/x86/cpu-features.c (init_cpu_features): Set Fast_Unaligned_Copy with Fast_Unaligned_Load for Intel processors. Set Fast_Copy_Backward for AMD Excavator processors.
    * sysdeps/x86/cpu-features.h (bit_arch_Fast_Unaligned_Copy): New. (index_arch_Fast_Unaligned_Copy): Likewise.
    * sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Check Fast_Unaligned_Copy instead of Fast_Unaligned_Load.
* tst-audit10: Fix compilation on compilers without bit_AVX512F [BZ #19860] (Florian Weimer, 2016-03-25; 1 file changed, -1/+4)

[BZ #19860]

    * sysdeps/x86_64/tst-audit10.c (avx512_enabled): Always return zero if the compiler does not provide the AVX512F bit.
* Fix x86_64 / x86 powl inaccuracy for integer exponents (bug 19848) (Joseph Myers, 2016-03-24; 4 files changed, -21/+21)

Bug 19848 reports cases where powl on x86 / x86_64 has error accumulation, for small integer exponents, larger than permitted by glibc's accuracy goals, at least in some rounding modes. This patch further restricts the exponent range for which the small-integer-exponent logic is used to limit the possible error accumulation.

Tested for x86_64 and x86 and ulps updated accordingly.

[BZ #19848]

    * sysdeps/i386/fpu/e_powl.S (p3): Rename to p2 and change value from 8 to 4. (__ieee754_powl): Compare integer exponent against 4 not 8.
    * sysdeps/x86_64/fpu/e_powl.S (p3): Rename to p2 and change value from 8 to 4. (__ieee754_powl): Compare integer exponent against 4 not 8.
    * math/auto-libm-test-in: Add more tests of pow.
    * math/auto-libm-test-out: Regenerated.
    * sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Update.
    * sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
* Assume __NR_utimensat is always defined (Aurelien Jarno, 2016-03-23; 3 files changed, -22/+0)

With the 2.6.32 minimum kernel on x86 and 3.2 on other architectures, __NR_utimensat is always defined.

ChangeLog:

    * sysdeps/unix/sysv/linux/futimens.c (futimens) [__NR_utimensat]: Make code unconditional. [!__NR_utimensat]: Remove conditional code.
    * sysdeps/unix/sysv/linux/lutimes.c (lutimes) [__NR_utimensat]: Make code unconditional. [!__NR_utimensat]: Remove conditional code.
    * sysdeps/unix/sysv/linux/utimensat.c (utimensat) [__NR_utimensat]: Make code unconditional. [!__NR_utimensat]: Remove conditional code.
* Assume __NR_openat is always defined (Aurelien Jarno, 2016-03-23; 1 file changed, -4/+0)

With the 2.6.32 minimum kernel on x86 and 3.2 on other architectures, __NR_openat is always defined.

ChangeLog:

    * sysdeps/unix/sysv/linux/dl-openat64.c (openat64) [__NR_openat]: Make code unconditional.
* x86, pthread_cond_*wait: Do not depend on %eax not being clobbered (Nick Alcock, 2016-03-23; 2 files changed, -0/+2)

The x86-specific versions of both pthread_cond_wait and pthread_cond_timedwait have (in their fall-back-to-futex-wait slow paths) calls to __pthread_mutex_cond_lock_adjust followed by __pthread_mutex_unlock_usercnt, which load the parameters before the first call but then assume that the first parameter, in %eax, will survive unaffected. This happens to have been true before now, but %eax is a call-clobbered register, and this assumption is not safe: it could change at any time, at GCC's whim, and indeed the stack-protector canary checking code clobbers %eax while checking that the canary is uncorrupted.

So reload %eax before calling __pthread_mutex_unlock_usercnt. (Do this unconditionally, even when stack-protection is not in use, because it's the right thing to do, it's a slow path, and anything else is dicing with death.)

    * sysdeps/unix/sysv/linux/i386/pthread_cond_timedwait.S: Reload call-clobbered %eax on retry path.
    * sysdeps/unix/sysv/linux/i386/pthread_cond_wait.S: Likewise.
* Don't set %rcx twice before "rep movsb" (H.J. Lu, 2016-03-22; 1 file changed, -1/+0)

    * sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S (MEMCPY): Don't set %rcx twice before "rep movsb".
* Set index_arch_AVX_Fast_Unaligned_Load only for Intel processors (H.J. Lu, 2016-03-22; 2 files changed, -74/+88)

Since only Intel processors with AVX2 have fast unaligned load, we should set index_arch_AVX_Fast_Unaligned_Load only for Intel processors. Move AVX, AVX2, AVX512, FMA and FMA4 detection into get_common_indeces and call get_common_indeces for other processors.

Add CPU_FEATURES_CPU_P and CPU_FEATURES_ARCH_P to avoid loading GLRO(dl_x86_cpu_features) in cpu-features.c.

[BZ #19583]

    * sysdeps/x86/cpu-features.c (get_common_indeces): Remove inline. Check family before setting family, model and extended_model. Set AVX, AVX2, AVX512, FMA and FMA4 usable bits here. (init_cpu_features): Replace HAS_CPU_FEATURE and HAS_ARCH_FEATURE with CPU_FEATURES_CPU_P and CPU_FEATURES_ARCH_P. Set index_arch_AVX_Fast_Unaligned_Load for Intel processors with usable AVX2. Call get_common_indeces for other processors with family == NULL.
    * sysdeps/x86/cpu-features.h (CPU_FEATURES_CPU_P): New macro. (CPU_FEATURES_ARCH_P): Likewise. (HAS_CPU_FEATURE): Use CPU_FEATURES_CPU_P. (HAS_ARCH_FEATURE): Use CPU_FEATURES_ARCH_P.
* Remove __ASSUME_GETDENTS64_SYSCALL (Joseph Myers, 2016-03-22; 2 files changed, -114/+81)

This patch removes the __ASSUME_GETDENTS64_SYSCALL macro, as its definition is constant given the new kernel version requirements (and was constant anyway before those requirements except for MIPS n32).

Note that the "#ifdef __NR_getdents64" conditional *is* still needed, because MIPS n64 only has the getdents syscall (being a 64-bit ABI, that syscall is 64-bit; the difference between the two on 64-bit architectures is where d_type goes). If MIPS n64 were to gain the getdents64 syscall and we wanted to use it conditionally on the kernel version at runtime we'd have to revert this patch, but I think that's unlikely (and in any case, we could follow the simpler approach of undefining __NR_getdents64 if the syscall can't be assumed, just like we do for accept4 / recvmmsg / sendmmsg syscalls on architectures where socketcall support came first).

Most of the getdents.c changes are reindentation.

Tested for x86_64 and x86 that installed stripped shared libraries are unchanged by the patch.

    * sysdeps/unix/sysv/linux/kernel-features.h (__ASSUME_GETDENTS64_SYSCALL): Remove macro.
    * sysdeps/unix/sysv/linux/getdents.c [!__ASSUME_GETDENTS64_SYSCALL]: Remove conditional code. [!have_no_getdents64_defined]: Likewise. (__GETDENTS): Remove __have_no_getdents64 conditional.
* Remove __ASSUME_SIGNALFD4 (Joseph Myers, 2016-03-21; 2 files changed, -26/+1)

Current Linux kernel version requirements mean the signalfd4 syscall can always be assumed to be available. This patch removes __ASSUME_SIGNALFD4 and associated conditionals.

Tested for x86_64 and x86 that installed stripped shared libraries are unchanged by the patch.

    * sysdeps/unix/sysv/linux/kernel-features.h (__ASSUME_SIGNALFD4): Remove macro.
    * sysdeps/unix/sysv/linux/signalfd.c: Do not include <kernel-features.h>. (signalfd) [__NR_signalfd4]: Make code unconditional. (signalfd) [!__ASSUME_SIGNALFD4]: Remove conditional code.
* posix: Fix posix_spawn implicit check style (Adhemerval Zanella, 2016-03-21; 1 file changed, -1/+1)

This patch fixes the implicit check style added in 2a69f853c to follow the general convention.

Checked on x86_64.

    * sysdeps/unix/sysv/linux/spawni.c (__spawnix): Fix implicit check style.
* Use JUMPTARGET in x86-64 pthread (H.J. Lu, 2016-03-21; 3 files changed, -7/+3)

When the PLT may be used, JUMPTARGET should be used instead of calling the function directly.

    * sysdeps/unix/sysv/linux/x86_64/cancellation.S (__pthread_enable_asynccancel): Use JUMPTARGET to call __pthread_unwind.
    * sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S (__condvar_cleanup2): Use JUMPTARGET to call _Unwind_Resume.
    * sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S (__condvar_cleanup1): Likewise.
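For context, JUMPTARGET on x86-64 expands along these lines (a sketch of the idea; the authoritative definition is in sysdeps/x86_64/sysdep.h):

```c
/* In PIC/shared builds the call must go through the PLT, since the
   callee may be interposed or live in another DSO; a direct call
   would need a text relocation.  */
#ifdef PIC
# define JUMPTARGET(name) name##@PLT
#else
# define JUMPTARGET(name) name
#endif

/* Assembly usage:  call JUMPTARGET(__pthread_unwind)  */
```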
* posix: Fix posix_spawn invalid memory access (Adhemerval Zanella, 2016-03-20; 1 file changed, -1/+1)

The current Linux posix_spawn implementation does not test whether the pid argument is valid before trying to update it in the success case. This patch fixes it.

Tested on x86_64 and i686.

    * sysdeps/unix/sysv/linux/spawni.c (__spawnix): Fix invalid memory access when posix_spawn succeeds and the pid argument is null.
    * posix/tst-spawn.c (do_test): Add posix_spawn null pid argument for success case.
* hurd: Add c++-types expected result (Samuel Thibault, 2016-03-20; 1 file changed, -0/+67)

    * sysdeps/mach/hurd/i386/c++-types.data: New file.
* hurd: Allow inlining IO locks (Samuel Thibault, 2016-03-20; 1 file changed, -0/+3)

    * sysdeps/mach/hurd/libc-lock.h (_IO_lock_inexpensive): Define to 1.
* hurd: Do not hide rtld symbols which need to be preempted (Samuel Thibault, 2016-03-20; 2 files changed, -0/+43)

    * sysdeps/generic/dl-fcntl.h: New file, adds attribute_hidden to __open and __fcntl.
    * sysdeps/mach/hurd/dl-fcntl.h: New file, adds attribute_hidden to __fcntl only.
    * include/fcntl.h [IS_IN (rtld)]: Include <dl-fcntl.h> instead of adding attribute_hidden to __open and __fcntl.
* hurd: Break errnos.d / libc-modules.h dependency loop (Samuel Thibault, 2016-03-20; 1 file changed, -2/+4)

Generating errnos.d does not actually need libc-modules.h.

    * sysdeps/mach/hurd/Makefile ($(common-objpfx)errnos.d): Strip "-include $(common-objpfx)libc-modules.h" from CPPFLAGS, and do not depend on libc-modules.h.
* Remove __ASSUME_EVENTFD2, move eventfd to syscalls.list (Joseph Myers, 2016-03-17; 3 files changed, -52/+1)

Given current Linux kernel version requirements, we can assume the presence of the eventfd2 syscall. This means that __ASSUME_EVENTFD2 can be removed, and a syscalls.list entry suffices for eventfd instead of needing a .c file. This patch implements those changes.

Tested for x86_64 and x86 (not that that means much, given the lack of testsuite coverage for eventfd).

    * sysdeps/unix/sysv/linux/kernel-features.h (__ASSUME_EVENTFD2): Remove macro.
    * sysdeps/unix/sysv/linux/eventfd.c: Remove file.
    * sysdeps/unix/sysv/linux/syscalls.list (eventfd): New syscall entry.
* Remove __ASSUME_FALLOCATE (Joseph Myers, 2016-03-17; 2 files changed, -36/+12)

Given current Linux kernel version requirements, we can always assume the fallocate syscall to be available. This patch removes __ASSUME_FALLOCATE and a test for whether __NR_fallocate is defined.

Tested for x86_64 and x86 that installed stripped shared libraries are unchanged by the patch.

    * sysdeps/unix/sysv/linux/kernel-features.h (__ASSUME_FALLOCATE): Remove macro.
    * sysdeps/unix/sysv/linux/wordsize-64/posix_fallocate.c: Do not include <kernel-features.h>. [!__ASSUME_FALLOCATE]: Remove conditional code. (posix_fallocate) [__NR_fallocate]: Make code unconditional.
* Use JUMPTARGET in x86-64 mathvec (H.J. Lu, 2016-03-16; 38 files changed, -130/+130)

When the PLT may be used, JUMPTARGET should be used instead of calling the function directly.

    * sysdeps/x86_64/fpu/multiarch/svml_d_cos2_core_sse4.S (_ZGVbN2v_cos_sse4): Use JUMPTARGET to call cos.
    * sysdeps/x86_64/fpu/multiarch/svml_d_cos4_core_avx2.S (_ZGVdN4v_cos_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_cos8_core_avx512.S (_ZGVdN4v_cos): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_exp2_core_sse4.S (_ZGVbN2v_exp_sse4): Use JUMPTARGET to call exp.
    * sysdeps/x86_64/fpu/multiarch/svml_d_exp4_core_avx2.S (_ZGVdN4v_exp_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core_avx512.S (_ZGVdN4v_exp): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_log2_core_sse4.S (_ZGVbN2v_log_sse4): Use JUMPTARGET to call log.
    * sysdeps/x86_64/fpu/multiarch/svml_d_log4_core_avx2.S (_ZGVdN4v_log_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_log8_core_avx512.S (_ZGVdN4v_log): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_pow2_core_sse4.S (_ZGVbN2vv_pow_sse4): Use JUMPTARGET to call pow.
    * sysdeps/x86_64/fpu/multiarch/svml_d_pow4_core_avx2.S (_ZGVdN4vv_pow_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core_avx512.S (_ZGVdN4vv_pow): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sin2_core_sse4.S (_ZGVbN2v_sin_sse4): Use JUMPTARGET to call sin.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sin4_core_avx2.S (_ZGVdN4v_sin_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sin8_core_avx512.S (_ZGVdN4v_sin): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core_sse4.S (_ZGVbN2vvv_sincos_sse4): Use JUMPTARGET to call sin and cos.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core_avx2.S (_ZGVdN4vvv_sincos_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core_avx512.S (_ZGVdN4vvv_sincos): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_cosf16_core_avx512.S (_ZGVdN8v_cosf): Use JUMPTARGET to call cosf.
    * sysdeps/x86_64/fpu/multiarch/svml_s_cosf4_core_sse4.S (_ZGVbN4v_cosf_sse4): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_cosf8_core_avx2.S (_ZGVdN8v_cosf_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core_avx512.S (_ZGVdN8v_expf): Use JUMPTARGET to call expf.
    * sysdeps/x86_64/fpu/multiarch/svml_s_expf4_core_sse4.S (_ZGVbN4v_expf_sse4): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_expf8_core_avx2.S (_ZGVdN8v_expf_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core_avx512.S (_ZGVdN8v_logf): Use JUMPTARGET to call logf.
    * sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core_sse4.S (_ZGVbN4v_logf_sse4): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core_avx2.S (_ZGVdN8v_logf_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core_avx512.S (_ZGVdN8vv_powf): Use JUMPTARGET to call powf.
    * sysdeps/x86_64/fpu/multiarch/svml_s_powf4_core_sse4.S (_ZGVbN4vv_powf_sse4): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_powf8_core_avx2.S (_ZGVdN8vv_powf_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core_avx512.S (_ZGVdN8vv_powf): Use JUMPTARGET to call sinf and cosf.
    * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core_sse4.S (_ZGVbN4vvv_sincosf_sse4): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core_avx2.S (_ZGVdN8vvv_sincosf_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_sinf16_core_avx512.S (_ZGVdN8v_sinf): Use JUMPTARGET to call sinf.
    * sysdeps/x86_64/fpu/multiarch/svml_s_sinf4_core_sse4.S (_ZGVbN4v_sinf_sse4): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_sinf8_core_avx2.S (_ZGVdN8v_sinf_avx2): Likewise.
    * sysdeps/x86_64/fpu/svml_d_wrapper_impl.h (WRAPPER_IMPL_SSE2): Use JUMPTARGET to call callee. (WRAPPER_IMPL_SSE2_ff): Likewise. (WRAPPER_IMPL_SSE2_fFF): Likewise. (WRAPPER_IMPL_AVX): Likewise. (WRAPPER_IMPL_AVX_ff): Likewise. (WRAPPER_IMPL_AVX_fFF): Likewise. (WRAPPER_IMPL_AVX512): Likewise. (WRAPPER_IMPL_AVX512_ff): Likewise.
    * sysdeps/x86_64/fpu/svml_s_wrapper_impl.h (WRAPPER_IMPL_SSE2): Likewise. (WRAPPER_IMPL_SSE2_ff): Likewise. (WRAPPER_IMPL_SSE2_fFF): Likewise. (WRAPPER_IMPL_AVX): Likewise. (WRAPPER_IMPL_AVX_ff): Likewise. (WRAPPER_IMPL_AVX_fFF): Likewise. (WRAPPER_IMPL_AVX512): Likewise. (WRAPPER_IMPL_AVX512_ff): Likewise. (WRAPPER_IMPL_AVX512_fFF): Likewise.
* Fix hurd build (Samuel Thibault, 2016-03-16; 1 file changed, -1/+1)

    * sysdeps/mach/hurd/openat.c (__openat): Add missing ellipsis.
    * resolv/gai_sigqueue.c (__gai_sigqueue): Add missing internal_function qualifier.
    * rt/aio_sigqueue.c (__aio_sigqueue): Add missing attribute_hidden internal_function qualifiers.
* Fix building glibc master with NDEBUG and --with-cpu (Carlos O'Donell, 2016-03-15; 2 files changed, -1/+2)

When building on i686, x86_64, and arm with NDEBUG or --with-cpu, there are various variables and functions which are unused based on these settings. This patch marks all such variables with __attribute__((unused)) to avoid the compiler warnings when building with the aforementioned options.
* Remove __ASSUME_PPOLL (Joseph Myers, 2016-03-15; 2 files changed, -29/+1)

With current kernel version requirements, the ppoll Linux syscall can be assumed to be present on all architectures; this patch removes the __ASSUME_PPOLL macro and conditionals on it and on whether __NR_ppoll is defined. (Note that the same can't yet be done for pselect, because MicroBlaze only wired that up in the syscall table in 3.15.)

Tested for x86_64 and x86 that installed stripped shared libraries are unchanged by the patch.

    * sysdeps/unix/sysv/linux/kernel-features.h (__ASSUME_PPOLL): Remove macro.
    * sysdeps/unix/sysv/linux/ppoll.c: Do not include <kernel-features.h>. [__NR_ppoll]: Make code unconditional. [!__ASSUME_PPOLL]: Remove conditional code.
* Adjust kernel-features.h defaults for socket syscalls (Joseph Myers, 2016-03-15; 17 files changed, -174/+37)

This patch adjusts the defaults for kernel-features.h macros relating to availability of accept4, recvmmsg and sendmmsg. It is not intended to affect which macros end up getting defined in any configuration.

At present, all architectures with syscalls for those functions need to define __ASSUME_*_SYSCALL macros; in particular, any new architecture needs its own kernel-features.h file for that purpose, though it may not otherwise need such a header. Those macros are then used together with __ASSUME_SOCKETCALL to define macros for whether the functions in question are available.

This patch changes the defaults so that the syscalls are assumed to be available by default with recent-enough kernels, and it is the responsibility of architecture headers to undefine the macros if they are unavailable in supported kernels at least as recent as the version where the architecture-independent functionality was introduced. The __ASSUME_<function> macros are defaulted similarly instead of being defined based on other macros (defining based on other macros would no longer work because the #undefs appear after the generic header is included), so where the syscall being unavailable means the function is unavailable this means the architecture header has to undefine the __ASSUME_<function> macro; this only affects __ASSUME_ACCEPT4 for ia64, as other cases where the syscalls were added late enough to be relevant with current kernel version requirements are all on socketcall architectures. (A sketch of the resulting default-and-undef pattern follows this entry.)

As a consequence, the AArch64 and Nios II kernel-features.h header files are removed, and others simplified. When the minimum kernel version becomes 4.3 or later on all architectures, the syscalls in question can just be assumed unconditionally, permitting further simplification.

Tested for x86_64, x86 and powerpc (that installed shared libraries are unchanged by the patch, and testsuite for x86_64 and x86).

    * sysdeps/unix/sysv/linux/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Define unconditionally. (__ASSUME_ACCEPT4): Likewise. [__LINUX_KERNEL_VERSION >= 0x020621] (__ASSUME_RECVMMSG_SYSCALL): Define. [__LINUX_KERNEL_VERSION >= 0x020621] (__ASSUME_RECVMMSG): Likewise. [__LINUX_KERNEL_VERSION >= 0x030000] (__ASSUME_SENDMMSG_SYSCALL): Likewise. [__LINUX_KERNEL_VERSION >= 0x030000] (__ASSUME_SENDMMSG): Likewise.
    * sysdeps/unix/sysv/linux/aarch64/kernel-features.h: Remove file.
    * sysdeps/unix/sysv/linux/nios2/kernel-features.h: Likewise.
    * sysdeps/unix/sysv/linux/alpha/kernel-features.h (__ASSUME_RECVMMSG_SYSCALL): Do not define. (__ASSUME_ACCEPT4_SYSCALL): Likewise. (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/arm/kernel-features.h (__ASSUME_RECVMMSG_SYSCALL): Likewise. (__ASSUME_ACCEPT4_SYSCALL): Likewise. (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/hppa/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Likewise. (__ASSUME_RECVMMSG_SYSCALL): Likewise. (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/i386/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020621] (__ASSUME_RECVMMSG_SYSCALL): Likewise. [__LINUX_KERNEL_VERSION >= 0x030000] (__ASSUME_SENDMMSG_SYSCALL): Likewise. (__ASSUME_ACCEPT4_SYSCALL): Undefine if [__LINUX_KERNEL_VERSION < 0x040300] instead of defining if [__LINUX_KERNEL_VERSION >= 0x040300].
    * sysdeps/unix/sysv/linux/ia64/kernel-features.h (__ASSUME_RECVMMSG_SYSCALL): Do not define. (__ASSUME_SENDMMSG_SYSCALL): Likewise. (__ASSUME_ACCEPT4_SYSCALL): Undefine if [__LINUX_KERNEL_VERSION < 0x030300] instead of defining if [__LINUX_KERNEL_VERSION >= 0x030300]. [__LINUX_KERNEL_VERSION < 0x030300] (__ASSUME_ACCEPT4): Undefine.
    * sysdeps/unix/sysv/linux/m68k/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Undefine if [__LINUX_KERNEL_VERSION < 0x040300] instead of defining if [__LINUX_KERNEL_VERSION >= 0x040300]. (__ASSUME_RECVMMSG_SYSCALL): Likewise. (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/microblaze/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Do not define. (__ASSUME_RECVMMSG_SYSCALL): Likewise. (__ASSUME_SENDMMSG_SYSCALL): Undefine if [__LINUX_KERNEL_VERSION < 0x030300] instead of defining if [__LINUX_KERNEL_VERSION >= 0x030300].
    * sysdeps/unix/sysv/linux/mips/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Do not define. (__ASSUME_RECVMMSG_SYSCALL): Likewise. (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/powerpc/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Likewise. (__ASSUME_RECVMMSG_SYSCALL): Likewise. (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/s390/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Undefine if [__LINUX_KERNEL_VERSION < 0x040300] instead of defining if [__LINUX_KERNEL_VERSION >= 0x040300]. (__ASSUME_RECVMMSG_SYSCALL): Likewise. (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/sh/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Do not define. (__ASSUME_RECVMMSG_SYSCALL): Likewise. (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/sparc/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Likewise. (__ASSUME_RECVMMSG_SYSCALL): Likewise. (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/tile/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Likewise. (__ASSUME_RECVMMSG_SYSCALL): Likewise. (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/x86_64/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Likewise. [__LINUX_KERNEL_VERSION >= 0x020621] (__ASSUME_RECVMMSG_SYSCALL): Likewise. [__LINUX_KERNEL_VERSION >= 0x030000] (__ASSUME_SENDMMSG_SYSCALL): Likewise.
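A sketch of the default-and-undef pattern described above (the syscall macro and version cutoff are illustrative; the real headers carry one such block per syscall):

```c
/* Generic sysdeps/unix/sysv/linux/kernel-features.h:
   assume the syscall by default.  */
#define __ASSUME_ACCEPT4_SYSCALL 1

/* Architecture kernel-features.h, which includes the generic header
   first and then opts out for kernels that predate the wire-up on
   this port.  */
#include_next <kernel-features.h>
#if __LINUX_KERNEL_VERSION < 0x040300   /* illustrative cutoff: 4.3 */
# undef __ASSUME_ACCEPT4_SYSCALL
#endif
```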
* Update glibc headers for Linux 4.5 (Joseph Myers, 2016-03-14; 3 files changed, -0/+4)

This patch updates the glibc headers with the defines MADV_FREE, IPV6_HDRINCL and EPOLLEXCLUSIVE that are added in Linux 4.5.

Tested for x86_64 and x86 (testsuite, and that installed stripped shared libraries are unchanged by the patch).

    * bits/mman-linux.h [__USE_MISC] (MADV_FREE): New macro.
    * sysdeps/unix/sysv/linux/hppa/bits/mman.h [__USE_MISC] (MADV_FREE): Likewise.
    * sysdeps/unix/sysv/linux/bits/in.h (IPV6_HDRINCL): Likewise.
    * sysdeps/unix/sysv/linux/sys/epoll.h (enum EPOLL_EVENTS): Add EPOLLEXCLUSIVE.
* Fix flag test in waitid compatibility layer (Samuel Thibault, 2016-03-13; 1 file changed, -1/+1)

    * sysdeps/posix/waitid.c (OUR_WAITID): Test against WSTOPPED instead of WUNTRACED.
* powerpc: Rearrange cfi_offset calls (Rajalakshmi Srinivasaraghavan, 2016-03-11; 6 files changed, -34/+34)

This patch rearranges cfi_offset() calls after the last store so as to avoid extra DW_CFA_advance opcodes in unwind information.
* Add _arch_/_cpu_ to index_*/bit_* in x86 cpu-features.h (H.J. Lu, 2016-03-10; 3 files changed, -151/+159)

index_* and bit_* macros are used to access the cpuid and feature arrays of struct cpu_features. It is very easy to use bits and indices of the cpuid array on the feature array, especially in assembly code. For example, sysdeps/i386/i686/multiarch/bcopy.S has

HAS_CPU_FEATURE (Fast_Rep_String)

which should be

HAS_ARCH_FEATURE (Fast_Rep_String)

We change index_* and bit_* to index_cpu_*/index_arch_* and bit_cpu_*/bit_arch_* so that we can catch such errors at build time.

[BZ #19762]

    * sysdeps/unix/sysv/linux/x86_64/64/dl-librecon.h (EXTRA_LD_ENVVARS): Add _arch_ to index_*/bit_*.
    * sysdeps/x86/cpu-features.c (init_cpu_features): Likewise.
    * sysdeps/x86/cpu-features.h (bit_*): Renamed to ... (bit_arch_*): This for feature array. (bit_*): Renamed to ... (bit_cpu_*): This for cpu array. (index_*): Renamed to ... (index_arch_*): This for feature array. (index_*): Renamed to ... (index_cpu_*): This for cpu array. [__ASSEMBLER__] (HAS_FEATURE): Add and use field. [__ASSEMBLER__] (HAS_CPU_FEATURE): Pass cpu to HAS_FEATURE. [__ASSEMBLER__] (HAS_ARCH_FEATURE): Pass arch to HAS_FEATURE. [!__ASSEMBLER__] (HAS_CPU_FEATURE): Replace index_##name and bit_##name with index_cpu_##name and bit_cpu_##name. [!__ASSEMBLER__] (HAS_ARCH_FEATURE): Replace index_##name and bit_##name with index_arch_##name and bit_arch_##name.
* mips: terminate the FDE before the return trampoline in makecontext (Aurelien Jarno, 2016-03-09; 1 file changed, -0/+7)

In makecontext the FDE needs to be terminated before the return trampoline, otherwise backtrace called within a context created by makecontext yields an infinite backtrace. This bug has been present for a long time; stdlib/tst-makecontext did not fail until recent commit e535ce25.

Tested on mips-linux-gnu and mips64el-linux-gnuabi64, no regression. This fixes stdlib/tst-makecontext on MIPS.

ChangeLog:

[BZ #19792]

    * sysdeps/unix/sysv/linux/mips/makecontext.S (__makecontext): Terminate FDE before return label.
* Fix ldbl-128ibm nearbyintl in non-default rounding modes (bug 19790) (Joseph Myers, 2016-03-09; 3 files changed, -109/+18)

The ldbl-128ibm implementation of nearbyintl uses logic that only works in round-to-nearest mode. This contrasts with rintl, which works in all rounding modes.

Now, arguably nearbyintl could simply be aliased to rintl, given that spurious "inexact" is generally allowed for ldbl-128ibm, even for the underlying arithmetic operations. But given that the only point of nearbyintl is to avoid "inexact", this patch follows the more conservative approach of adding conditionals to the rintl implementation to make it suitable for use to implement nearbyintl, then builds it for nearbyintl with USE_AS_NEARBYINTL defined.

The test test-nearbyint-except-2 shows up issues when traps on "inexact" are enabled, which turn out to be problems with the powerpc fenv_private.h implementation (two functions that should disable exception traps potentially failing to do so in some cases); this patch duly fixes that as well (I don't see any other existing cases where this would be user-visible; there isn't much use of *_NOEX, *hold* etc. in libm that requires exceptions to be discarded and not trapped on).

Tested for powerpc.

[BZ #19790]

    * sysdeps/ieee754/ldbl-128ibm/s_rintl.c [USE_AS_NEARBYINTL] (rintl): Define as macro. [USE_AS_NEARBYINTL] (__rintl): Likewise. (__rintl) [USE_AS_NEARBYINTL]: Use SET_RESTORE_ROUND_NOEX instead of fesetround. Ensure results are evaluated before end of scope.
    * sysdeps/ieee754/ldbl-128ibm/s_nearbyintl.c: Define USE_AS_NEARBYINTL and include s_rintl.c.
    * sysdeps/powerpc/fpu/fenv_private.h (libc_feholdsetround_ppc): Disable exception traps in new environment. (libc_feholdsetround_ppc_ctx): Likewise.
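The contract being preserved here is the classic nearbyint-from-rint pattern: round exactly like rint, but keep "inexact" from being raised or trapped. A generic C sketch of that pattern (not the ldbl-128ibm code):

```c
#include <fenv.h>
#include <math.h>

double
nearbyint_sketch (double x)
{
  fenv_t env;
  feholdexcept (&env);           /* save env, clear flags, disable traps */
  double r = rint (x);           /* may raise FE_INEXACT */
  feclearexcept (FE_INEXACT);    /* discard it: nearbyint must not raise it */
  feupdateenv (&env);            /* restore traps, re-raise remaining flags */
  return r;
}
```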
* Fix tst-audit10 build when -mavx512f is not supported (Roland McGrath, 2016-03-08; 2 files changed, -3/+4)
* Define _HAVE_STRING_ARCH_mempcpy to 1 for x86 (H.J. Lu, 2016-03-08; 1 file changed, -0/+3)

Since x86 has an optimized mempcpy and GCC can inline mempcpy on x86, define _HAVE_STRING_ARCH_mempcpy to 1 for x86.

[BZ #19759]

    * sysdeps/x86/bits/string.h (_HAVE_STRING_ARCH_mempcpy): New.
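Why inlining mempcpy is worthwhile: it returns a pointer just past the last byte written, so successive copies chain without recomputing offsets. A small usage example:

```c
#define _GNU_SOURCE
#include <string.h>

/* Build "<a><b>" into dst and return a pointer to the terminating NUL;
   each mempcpy returns the end of what it just wrote, so the copies
   chain with no extra pointer arithmetic.  */
static char *
concat_into (char *dst, const char *a, const char *b)
{
  dst = mempcpy (dst, a, strlen (a));
  dst = mempcpy (dst, b, strlen (b));
  *dst = '\0';
  return dst;
}
```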
* powerpc: Remove uses of operand modifier (%s) in inline asm (Gabriel F. T. Gomes, 2016-03-08; 1 file changed, -6/+10)

The operand modifier %s on powerpc is an undocumented internal implementation detail of GCC. Besides that, the GCC community wants to remove it. This patch rewrites the expressions that use this modifier with logically equivalent expressions that don't require it.

Explanation for the substitution:

The %s modifier takes an immediate operand and prints 32 minus that immediate. Thus, in the previous code, the expression resulted in:

32 - __builtin_ffs(e)

where e was guaranteed to have exactly a single bit set, by the following expressions:

(e & (e-1)) == 0: e has at most one bit set.
(e != 0): e is not zero, thus it has at least one bit set.

Since we guarantee that there is exactly one bit set, the following statement is true:

32 - __builtin_ffs(e) == __builtin_clz(e)

Thus, we can replace __builtin_ffs with __builtin_clz and remove the %s operand modifier.
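The identity is easy to sanity-check exhaustively over all 32 single-bit values:

```c
#include <assert.h>

int
main (void)
{
  for (int k = 0; k < 32; k++)
    {
      unsigned int e = 1u << k;          /* exactly one bit set */
      /* ffs gives the 1-based index of the lowest set bit; clz counts
         leading zeros.  For a single bit at position k: ffs = k + 1,
         clz = 31 - k, so 32 - ffs == clz.  */
      assert (32 - __builtin_ffs ((int) e) == __builtin_clz (e));
    }
  return 0;
}
```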
* powerpc: Fix dl-procinfo HWCAP (Carlos Eduardo Seo, 2016-03-08; 2 files changed, -8/+6)

HWCAP-related code should have been updated when the 32 bits of HWCAP were used. This patch updates the code in dl-procinfo.h to loop through all the 32 bits in HWCAP and updates _dl_powerpc_cap_flags accordingly.
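The shape of the fixed loop can be mimicked in a user-space sketch with getauxval (the feature-name table is omitted here; inside the dynamic loader _dl_powerpc_cap_flags supplies it):

```c
#include <stdio.h>
#include <sys/auxv.h>

int
main (void)
{
  unsigned long hwcap = getauxval (AT_HWCAP);
  for (int i = 0; i < 32; i++)        /* walk all 32 bits, as the fix does */
    if (hwcap & (1UL << i))
      printf ("HWCAP bit %d is set\n", i);
  return 0;
}
```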
* Fix ldbl-128ibm remainderl equality test for zero low part (bug 19677) (Joseph Myers, 2016-03-08; 6 files changed, -60/+144)

The ldbl-128ibm implementation of remainderl has logic resulting in incorrect tests for equality of the absolute values of the arguments in the case of zero low parts. If the low parts are both zero but with different signs, this can wrongly cause equal arguments to be treated as different, resulting in turn in incorrect signs of zero results in nondefault rounding modes arising from the subtractions done when the arguments are not equal.

This patch fixes the logic to convert -0 low parts into +0 before the comparison (remquo already has separate logic to deal with signs of zero results, so doesn't need such a change). Tests are added for remainderl and remquol similar to that for fmodl, and based on a refactoring of it, since the bug depends on low parts which should not be relied upon in tests not setting the representation explicitly (although in fact the bug shows up in test-ldouble with current GCC).

Tested for powerpc.

[BZ #19677]

    * sysdeps/ieee754/ldbl-128ibm/e_remainderl.c (__ieee754_remainderl): Put zero low parts in canonical form.
    * sysdeps/ieee754/ldbl-128ibm/test-fmodrem-ldbl-128ibm.c: New file. Based on sysdeps/ieee754/ldbl-128ibm/test-fmodl-ldbl-128ibm.c.
    * sysdeps/ieee754/ldbl-128ibm/test-fmodl-ldbl-128ibm.c: Replace with a wrapper around test-fmodrem-ldbl-128ibm.c.
    * sysdeps/ieee754/ldbl-128ibm/test-remainderl-ldbl-128ibm.c: New file.
    * sysdeps/ieee754/ldbl-128ibm/test-remquol-ldbl-128ibm.c: Likewise.
    * sysdeps/ieee754/ldbl-128ibm/Makefile (tests): Add test-remainderl-ldbl-128ibm and test-remquol-ldbl-128ibm.
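The canonicalization relies on the fact that -0.0 == +0.0 in C, so one comparison-and-assign suffices. A sketch of the idea, using double for brevity:

```c
/* Maps a negative-zero low part to +0.0 while leaving every other
   value (including NaN) untouched, so later equality tests on the
   (hi, lo) pairs become independent of the sign of a zero low part.  */
static double
canonicalize_low (double lo)
{
  return lo == 0.0 ? 0.0 : lo;
}
```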
* tst-audit4, tst-audit10: Compile AVX/AVX-512 code separately [BZ #19269] (Florian Weimer, 2016-03-07; 5 files changed, -55/+112)

This ensures that GCC will not use unsupported instructions before the run-time check to ensure support.
* posix: New Linux posix_spawn{p} implementation (Adhemerval Zanella, 2016-03-07; 22 files changed, -1/+437)

This patch implements a new posix_spawn{p} implementation for Linux. The main difference is that it uses the clone syscall directly with the CLONE_VM and CLONE_VFORK flags and a directly allocated stack. The new stack and start function solve most of the vfork limitations (possible parent clobber due to stack spilling). The remaining issues are related to signal handling:

1. No signal handlers must run in the child context, to avoid corrupting the parent's state.
2. The child must synchronize with the parent to enforce stack deallocation and to report possible execve errors.

The first one is solved by blocking all signals in the child, even the NPTL-internal ones (SIGCANCEL and SIGSETXID). The second is handled by a stack allocation in the parent and a synchronization using a pipe, or waitpid in case of error. The pipe has the advantage of allowing the child to signal an exec error (checked with the new tst-spawn2 test).

There is an inherent race condition in pipe2 usage for architectures that do not support the syscall directly. In such cases a pipe plus fcntl is used instead, and it may lead to a file descriptor leak in the parent (as described by the fcntl documentation).

The child process stack is allocated with mmap with the MAP_STACK flag, using the default architecture stack size. Although it is slower than using a stack buffer from the parent, it allows some slack for the compatibility code to run scripts with no shebang (which may use a buffer with a size depending on the argument list count).

Performance should be similar to the vfork default posix implementation and way faster than the fork path (vfork on most Linux ports is basically clone with CLONE_VM plus CLONE_VFORK). The only difference is the syscalls required for the stack allocation/deallocation.

It fixes BZ#10354, BZ#14750, and BZ#18433.

Tested on i386, x86_64, powerpc64le, and aarch64.

[BZ #14750]
[BZ #10354]
[BZ #18433]

    * include/sched.h (__clone): Add hidden prototype. (__clone2): Likewise.
    * include/unistd.h (__dup): Likewise.
    * posix/Makefile (tests): Add tst-spawn2.
    * posix/tst-spawn2.c: New file.
    * sysdeps/posix/dup.c (__dup): Add hidden definition.
    * sysdeps/unix/sysv/linux/aarch64/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/alpha/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/arm/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/hppa/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/i386/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/ia64/clone2.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/m68k/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/microblaze/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/mips/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/nios2/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/powerpc/powerpc32/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/s390/s390-32/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/s390/s390-64/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/sh/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/sparc/sparc32/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/sparc/sparc64/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/tile/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/x86_64/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/nptl-signals.h (____nptl_is_internal_signal): New function.
    * sysdeps/unix/sysv/linux/spawni.c: New file.
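A heavily simplified sketch of the clone-based approach (illustrative names only; the real spawni.c additionally blocks signals, reports exec failure through a pipe, and handles spawn attributes and file actions):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

struct spawn_args            /* hypothetical helper type */
{
  const char *path;
  char *const *argv;
  char *const *envp;
};

static int
child_fn (void *arg)         /* runs on the mmap'ed stack, shares the VM */
{
  struct spawn_args *a = arg;
  execve (a->path, a->argv, a->envp);
  _exit (127);               /* only reached if execve failed */
}

static pid_t
spawn_sketch (struct spawn_args *a)
{
  const size_t stack_size = 256 * 1024;   /* illustrative size */
  void *stack = mmap (NULL, stack_size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
  if (stack == MAP_FAILED)
    return -1;

  /* CLONE_VM shares the address space; CLONE_VFORK blocks the parent
     until the child calls execve or exits, so the unmap below is safe.
     The stack grows down on most targets, hence stack + stack_size.  */
  pid_t pid = clone (child_fn, (char *) stack + stack_size,
                     CLONE_VM | CLONE_VFORK | SIGCHLD, a);

  munmap (stack, stack_size);
  return pid;
}
```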
* Group AVX512 functions in .text.avx512 section (H.J. Lu, 2016-03-06; 2 files changed, -2/+2)

    * sysdeps/x86_64/multiarch/memcpy-avx512-no-vzeroupper.S: Replace .text with .text.avx512.
    * sysdeps/x86_64/multiarch/memset-avx512-no-vzeroupper.S: Likewise.