path: root/sysdeps
* Update glibc headers for Linux 4.5.  (Joseph Myers, 2016-03-14; 3 files, -0/+4)
This patch updates the glibc headers with the defines MADV_FREE, IPV6_HDRINCL and EPOLLEXCLUSIVE that are added in Linux 4.5.

Tested for x86_64 and x86 (testsuite, and that installed stripped shared libraries are unchanged by the patch).

    * bits/mman-linux.h [__USE_MISC] (MADV_FREE): New macro.
    * sysdeps/unix/sysv/linux/hppa/bits/mman.h [__USE_MISC] (MADV_FREE): Likewise.
    * sysdeps/unix/sysv/linux/bits/in.h (IPV6_HDRINCL): Likewise.
    * sysdeps/unix/sysv/linux/sys/epoll.h (enum EPOLL_EVENTS): Add EPOLLEXCLUSIVE.
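As a usage illustration (a minimal sketch, not part of the commit; the fallback behavior shown is an assumption), the new MADV_FREE define is consumed like any other madvise advice:

    #include <stddef.h>
    #include <sys/mman.h>

    /* Advise the kernel that a buffer's contents may be lazily
       reclaimed.  MADV_FREE is new in Linux 4.5; on older kernels
       madvise fails with EINVAL.  */
    int
    mark_reclaimable (void *buf, size_t len)
    {
    #ifdef MADV_FREE
      return madvise (buf, len, MADV_FREE);
    #else
      return 0;   /* Headers predate Linux 4.5; nothing to do.  */
    #endif
    }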
* Fix flag test in waitid compatibility layer  (Samuel Thibault, 2016-03-13; 1 file, -1/+1)
    * sysdeps/posix/waitid.c (OUR_WAITID): Test against WSTOPPED instead of WUNTRACED.
* powerpc: Rearrange cfi_offset calls  (Rajalakshmi Srinivasaraghavan, 2016-03-11; 6 files, -34/+34)
This patch rearranges cfi_offset() calls after the last store so as to avoid extra DW_CFA_advance opcodes in unwind information.
* Add _arch_/_cpu_ to index_*/bit_* in x86 cpu-features.h  (H.J. Lu, 2016-03-10; 3 files, -151/+159)
index_* and bit_* macros are used to access cpuid and feature arrays of struct cpu_features.  It is very easy to use bits and indices of the cpuid array on the feature array, especially in assembly code.  For example, sysdeps/i386/i686/multiarch/bcopy.S has

    HAS_CPU_FEATURE (Fast_Rep_String)

which should be

    HAS_ARCH_FEATURE (Fast_Rep_String)

We change index_* and bit_* to index_cpu_*/index_arch_* and bit_cpu_*/bit_arch_* so that we can catch such errors at build time.

    [BZ #19762]
    * sysdeps/unix/sysv/linux/x86_64/64/dl-librecon.h (EXTRA_LD_ENVVARS): Add _arch_ to index_*/bit_*.
    * sysdeps/x86/cpu-features.c (init_cpu_features): Likewise.
    * sysdeps/x86/cpu-features.h (bit_*): Renamed to ...
    (bit_arch_*): This for feature array.
    (bit_*): Renamed to ...
    (bit_cpu_*): This for cpu array.
    (index_*): Renamed to ...
    (index_arch_*): This for feature array.
    (index_*): Renamed to ...
    (index_cpu_*): This for cpu array.
    [__ASSEMBLER__] (HAS_FEATURE): Add and use field.
    [__ASSEMBLER__] (HAS_CPU_FEATURE): Pass cpu to HAS_FEATURE.
    [__ASSEMBLER__] (HAS_ARCH_FEATURE): Pass arch to HAS_FEATURE.
    [!__ASSEMBLER__] (HAS_CPU_FEATURE): Replace index_##name and bit_##name with index_cpu_##name and bit_cpu_##name.
    [!__ASSEMBLER__] (HAS_ARCH_FEATURE): Replace index_##name and bit_##name with index_arch_##name and bit_arch_##name.
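A simplified sketch of the naming scheme (hypothetical bit/index values; the real definitions live in sysdeps/x86/cpu-features.h) shows why the rename turns a wrong-array lookup into a build failure:

    /* Encoding the array name into the macro name makes a mismatched
       lookup fail to compile instead of silently reading the wrong
       array.  */
    struct { unsigned int cpuid[1]; unsigned int feature[1]; } cpu_features;

    #define index_cpu_SSE2              0
    #define bit_cpu_SSE2                (1u << 26)
    #define index_arch_Fast_Rep_String  0
    #define bit_arch_Fast_Rep_String    (1u << 0)

    #define HAS_CPU_FEATURE(name) \
      ((cpu_features.cpuid[index_cpu_##name] & bit_cpu_##name) != 0)
    #define HAS_ARCH_FEATURE(name) \
      ((cpu_features.feature[index_arch_##name] & bit_arch_##name) != 0)

    /* HAS_CPU_FEATURE (Fast_Rep_String) is now a compile error:
       index_cpu_Fast_Rep_String and bit_cpu_Fast_Rep_String do not
       exist.  */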
* mips: terminate the FDE before the return trampoline in makecontext  (Aurelien Jarno, 2016-03-09; 1 file, -0/+7)
In makecontext the FDE needs to be terminated before the return trampoline, otherwise backtrace called within a context created by makecontext yields an infinite backtrace.

This bug has been present for a long time; stdlib/tst-makecontext did not fail until recent commit e535ce25.  Tested on mips-linux-gnu and mips64el-linux-gnuabi64, no regression.

This fixes stdlib/tst-makecontext on MIPS.

Changelog:
    [BZ #19792]
    * sysdeps/unix/sysv/linux/mips/makecontext.S (__makecontext): Terminate FDE before return label.
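A minimal reproducer sketch along the lines of stdlib/tst-makecontext (details assumed, not taken from the test itself):

    #include <execinfo.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <ucontext.h>

    static ucontext_t ctx;

    static void
    fn (void)
    {
      void *addrs[64];
      /* Before the fix, this unwind could loop forever on MIPS.  */
      int n = backtrace (addrs, 64);
      printf ("backtrace depth: %d\n", n);
      exit (0);
    }

    int
    main (void)
    {
      static char stack[64 * 1024];
      if (getcontext (&ctx) != 0)
        return 1;
      ctx.uc_stack.ss_sp = stack;
      ctx.uc_stack.ss_size = sizeof stack;
      ctx.uc_link = NULL;
      makecontext (&ctx, fn, 0);
      setcontext (&ctx);
      return 1;   /* Not reached.  */
    }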
* Fix ldbl-128ibm nearbyintl in non-default rounding modes (bug 19790).  (Joseph Myers, 2016-03-09; 3 files, -109/+18)
The ldbl-128ibm implementation of nearbyintl uses logic that only works in round-to-nearest mode.  This contrasts with rintl, which works in all rounding modes.

Now, arguably nearbyintl could simply be aliased to rintl, given that spurious "inexact" is generally allowed for ldbl-128ibm, even for the underlying arithmetic operations.  But given that the only point of nearbyintl is to avoid "inexact", this patch follows the more conservative approach of adding conditionals to the rintl implementation to make it suitable for use to implement nearbyintl, then builds it for nearbyintl with USE_AS_NEARBYINTL defined.

The test test-nearbyint-except-2 shows up issues when traps on "inexact" are enabled, which turn out to be problems with the powerpc fenv_private.h implementation (two functions that should disable exception traps potentially failing to do so in some cases); this patch duly fixes that as well (I don't see any other existing cases where this would be user-visible; there isn't much use of *_NOEX, *hold* etc. in libm that requires exceptions to be discarded and not trapped on).

Tested for powerpc.

    [BZ #19790]
    * sysdeps/ieee754/ldbl-128ibm/s_rintl.c [USE_AS_NEARBYINTL] (rintl): Define as macro.
    [USE_AS_NEARBYINTL] (__rintl): Likewise.
    (__rintl) [USE_AS_NEARBYINTL]: Use SET_RESTORE_ROUND_NOEX instead of fesetround.  Ensure results are evaluated before end of scope.
    * sysdeps/ieee754/ldbl-128ibm/s_nearbyintl.c: Define USE_AS_NEARBYINTL and include s_rintl.c.
    * sysdeps/powerpc/fpu/fenv_private.h (libc_feholdsetround_ppc): Disable exception traps in new environment.
    (libc_feholdsetround_ppc_ctx): Likewise.
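A small demonstration of the semantic difference the patch preserves (a sketch; compile with -lm, and note that whether rintl actually raises "inexact" here is implementation behavior):

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    int
    main (void)
    {
      feclearexcept (FE_INEXACT);
      rintl (1.5L);              /* Rounds 1.5 -> 2; may raise inexact.  */
      printf ("rintl raised inexact: %d\n",
              fetestexcept (FE_INEXACT) != 0);

      feclearexcept (FE_INEXACT);
      nearbyintl (1.5L);         /* Must leave the inexact flag clear.  */
      printf ("nearbyintl raised inexact: %d\n",
              fetestexcept (FE_INEXACT) != 0);
      return 0;
    }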
* Fix tst-audit10 build when -mavx512f is not supported.  (Roland McGrath, 2016-03-08; 2 files, -3/+4)
* Define _HAVE_STRING_ARCH_mempcpy to 1 for x86  (H.J. Lu, 2016-03-08; 1 file, -0/+3)
Since x86 has an optimized mempcpy and GCC can inline mempcpy on x86, define _HAVE_STRING_ARCH_mempcpy to 1 for x86.

    [BZ #19759]
    * sysdeps/x86/bits/string.h (_HAVE_STRING_ARCH_mempcpy): New.
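A usage sketch (not from the commit) of why an inlined mempcpy pays off: the return value points just past the copied bytes, so calls chain without recomputing lengths:

    #define _GNU_SOURCE
    #include <string.h>

    /* Concatenate a and b into dst using chained mempcpy calls.  */
    char *
    join (char *dst, const char *a, const char *b)
    {
      char *p = mempcpy (dst, a, strlen (a));
      p = mempcpy (p, b, strlen (b));
      *p = '\0';
      return dst;
    }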
* powerpc: Remove uses of operand modifier (%s) in inline asm  (Gabriel F. T. Gomes, 2016-03-08; 1 file, -6/+10)
The operand modifier %s on powerpc is an undocumented internal implementation detail of GCC.  Besides that, the GCC community wants to remove it.  This patch rewrites the expressions that use this modifier with logically equivalent expressions that don't require it.

Explanation for the substitution:

The %s modifier takes an immediate operand and prints 32 minus that immediate.  Thus, in the previous code, the expression resulted in:

    32 - __builtin_ffs(e)

where e was guaranteed to have exactly a single bit set, by the following expressions:

    ((e & (e-1)) == 0) : e has at most one bit set.
    (e != 0)           : e is not zero, thus it has at least one bit set.

Since we guarantee that there is exactly one bit set, the following statement is true:

    32 - __builtin_ffs(e) == __builtin_clz(e)

Thus, we can replace __builtin_ffs with __builtin_clz and remove the %s operand modifier.
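A quick self-contained check of that identity for all single-bit 32-bit values (illustrative only):

    #include <assert.h>

    int
    main (void)
    {
      for (int i = 0; i < 32; i++)
        {
          unsigned int e = 1u << i;   /* Exactly one bit set.  */
          /* ffs returns i+1, clz returns 31-i, so both sides are 31-i.  */
          assert (32 - __builtin_ffs (e) == __builtin_clz (e));
        }
      return 0;
    }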
* powerpc: Fix dl-procinfo HWCAP  (Carlos Eduardo Seo, 2016-03-08; 2 files, -8/+6)
HWCAP-related code should have been updated when all 32 bits of HWCAP came into use.  This patch updates the code in dl-procinfo.h to loop through all the 32 bits in HWCAP and updates _dl_powerpc_cap_flags accordingly.
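A simplified sketch of the looping idea (placeholder names; the real table is _dl_powerpc_cap_flags and the real code is in dl-procinfo.h):

    #include <stdio.h>

    /* Placeholder flag-name table; one entry per HWCAP bit.  */
    static const char *cap_flags[32] = { "flag0", "flag1" /* ... */ };

    static void
    print_hwcap (unsigned int hwcap)
    {
      /* Walk every one of the 32 bits instead of a hard-coded subset.  */
      for (int i = 0; i < 32; i++)
        if (((hwcap >> i) & 1) != 0 && cap_flags[i] != NULL)
          printf ("%s ", cap_flags[i]);
    }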
* Fix ldbl-128ibm remainderl equality test for zero low part (bug 19677).  (Joseph Myers, 2016-03-08; 6 files, -60/+144)
The ldbl-128ibm implementation of remainderl has logic resulting in incorrect tests for equality of the absolute values of the arguments in the case of zero low parts.  If the low parts are both zero but with different signs, this can wrongly cause equal arguments to be treated as different, resulting in turn in incorrect signs of zero result in nondefault rounding modes arising from the subtractions done when the arguments are not equal.

This patch fixes the logic to convert -0 low parts into +0 before the comparison (remquo already has separate logic to deal with signs of zero results, so doesn't need such a change).  Tests are added for remainderl and remquol similar to that for fmodl, and based on a refactoring of it, since the bug depends on low parts which should not be relied upon in tests not setting the representation explicitly (although in fact the bug shows up in test-ldouble with current GCC).

Tested for powerpc.

    [BZ #19677]
    * sysdeps/ieee754/ldbl-128ibm/e_remainderl.c (__ieee754_remainderl): Put zero low parts in canonical form.
    * sysdeps/ieee754/ldbl-128ibm/test-fmodrem-ldbl-128ibm.c: New file.  Based on sysdeps/ieee754/ldbl-128ibm/test-fmodl-ldbl-128ibm.c.
    * sysdeps/ieee754/ldbl-128ibm/test-fmodl-ldbl-128ibm.c: Replace with wrapper round test-fmodrem-ldbl-128ibm.c.
    * sysdeps/ieee754/ldbl-128ibm/test-remainderl-ldbl-128ibm.c: New file.
    * sysdeps/ieee754/ldbl-128ibm/test-remquol-ldbl-128ibm.c: Likewise.
    * sysdeps/ieee754/ldbl-128ibm/Makefile (tests): Add test-remainderl-ldbl-128ibm and test-remquol-ldbl-128ibm.
* tst-audit4, tst-audit10: Compile AVX/AVX-512 code separately [BZ #19269]  (Florian Weimer, 2016-03-07; 5 files, -55/+112)
This ensures that GCC will not use unsupported instructions before the run-time check to ensure support.
* posix: New Linux posix_spawn{p} implementation  (Adhemerval Zanella, 2016-03-07; 22 files, -1/+437)
This patch implements a new posix_spawn{p} implementation for Linux.  The main difference is that it uses the clone syscall directly with the CLONE_VM and CLONE_VFORK flags and a directly allocated stack.  The new stack and start function solve most of the vfork limitations (possible parent clobbering due to stack spilling).  The remaining issues are related to signal handling:

1. No signal handlers must run in the child context, to avoid corrupting the parent's state.

2. The child must synchronize with the parent to enforce stack deallocation and to report possible execv failures.

The first issue is solved by blocking all signals in the child, even the NPTL-internal ones (SIGCANCEL and SIGSETXID).  The second is handled by allocating the stack in the parent and synchronizing using a pipe, or waitpid in case of error.  The pipe has the advantage of allowing the child to signal an exec error (checked with the new tst-spawn2 test).

There is an inherent race condition in the pipe2 usage for architectures that do not support the syscall directly.  In such cases a pipe plus fcntl is used instead, which may lead to a file descriptor leak in the parent (as described by the fcntl documentation).

The child process stack is allocated with mmap using the MAP_STACK flag and the default architecture stack size.  Although this is slower than using a stack buffer from the parent, it allows some slack for the compatibility code to run scripts with no shebang (which may use a buffer whose size depends on the argument list count).

Performance should be similar to the vfork-based default posix implementation and much faster than the fork path (vfork on most Linux ports is basically clone with CLONE_VM plus CLONE_VFORK).  The only difference is the syscalls required for stack allocation/deallocation.

It fixes BZ#10354, BZ#14750, and BZ#18433.

Tested on i386, x86_64, powerpc64le, and aarch64.

    [BZ #14750]
    [BZ #10354]
    [BZ #18433]
    * include/sched.h (__clone): Add hidden prototype.
    (__clone2): Likewise.
    * include/unistd.h (__dup): Likewise.
    * posix/Makefile (tests): Add tst-spawn2.
    * posix/tst-spawn2.c: New file.
    * sysdeps/posix/dup.c (__dup): Add hidden definition.
    * sysdeps/unix/sysv/linux/aarch64/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/alpha/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/arm/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/hppa/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/i386/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/ia64/clone2.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/m68k/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/microblaze/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/mips/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/nios2/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/powerpc/powerpc32/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/s390/s390-32/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/s390/s390-64/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/sh/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/sparc/sparc32/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/sparc/sparc64/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/tile/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/x86_64/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/nptl-signals.h (____nptl_is_internal_signal): New function.
    * sysdeps/unix/sysv/linux/spawni.c: New file.
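A usage sketch of the user-visible improvement (the exact tst-spawn2 logic is assumed): an exec failure in the child is now reported back through posix_spawn's return value:

    #include <spawn.h>
    #include <stdio.h>
    #include <string.h>

    extern char **environ;

    int
    main (void)
    {
      pid_t pid;
      char *argv[] = { (char *) "/nonexistent", NULL };
      /* With the pipe-based synchronization, an exec failure in the
         child is reported here as a nonzero error code (e.g. ENOENT).  */
      int err = posix_spawn (&pid, "/nonexistent", NULL, NULL, argv, environ);
      printf ("posix_spawn: %s\n", err != 0 ? strerror (err) : "started");
      return 0;
    }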
* Group AVX512 functions in .text.avx512 section  (H.J. Lu, 2016-03-06; 2 files, -2/+2)
    * sysdeps/x86_64/multiarch/memcpy-avx512-no-vzeroupper.S: Replace .text with .text.avx512.
    * sysdeps/x86_64/multiarch/memset-avx512-no-vzeroupper.S: Likewise.
* Add placeholder libnsl.abilist and libutil.abilist files  (Aurelien Jarno, 2016-03-07; 2 files, -0/+0)
Changelog:
    * sysdeps/generic/libnsl.abilist: New file.
    * sysdeps/generic/libutil.abilist: New file.
* Use HAS_ARCH_FEATURE with Fast_Rep_String  (H.J. Lu, 2016-03-06; 9 files, -9/+9)
HAS_ARCH_FEATURE, not HAS_CPU_FEATURE, should be used with Fast_Rep_String.

    [BZ #19762]
    * sysdeps/i386/i686/multiarch/bcopy.S (bcopy): Use HAS_ARCH_FEATURE with Fast_Rep_String.
    * sysdeps/i386/i686/multiarch/bzero.S (__bzero): Likewise.
    * sysdeps/i386/i686/multiarch/memcpy.S (memcpy): Likewise.
    * sysdeps/i386/i686/multiarch/memcpy_chk.S (__memcpy_chk): Likewise.
    * sysdeps/i386/i686/multiarch/memmove_chk.S (__memmove_chk): Likewise.
    * sysdeps/i386/i686/multiarch/mempcpy.S (__mempcpy): Likewise.
    * sysdeps/i386/i686/multiarch/mempcpy_chk.S (__mempcpy_chk): Likewise.
    * sysdeps/i386/i686/multiarch/memset.S (memset): Likewise.
    * sysdeps/i386/i686/multiarch/memset_chk.S (__memset_chk): Likewise.
* Replace PREINIT_FUNCTION@PLT with *%rax in call  (H.J. Lu, 2016-03-04; 1 file, -1/+1)
Since we have loaded the address of PREINIT_FUNCTION into %rax, we can avoid an extra branch to the PLT slot.

    [BZ #19745]
    * sysdeps/x86_64/crti.S (_init): Replace PREINIT_FUNCTION@PLT with *%rax in call.
* Replace @PLT with @GOTPCREL(%rip) in call  (H.J. Lu, 2016-03-04; 1 file, -2/+4)
Since __libc_start_main is called very early, lazy binding isn't relevant here.  Use an indirect branch via the GOT to avoid an extra branch to the PLT slot.

    [BZ #19745]
    * sysdeps/x86_64/start.S (_start): Replace __libc_start_main@PLT with *__libc_start_main@GOTPCREL(%rip) in call.
* Add a comment in sysdeps/x86_64/Makefile  (H.J. Lu, 2016-03-04; 1 file, -0/+3)
Mention recursive calls when ENTRY is used in _mcount.S.

    * sysdeps/x86_64/Makefile (sysdep_noprof): Add a comment.
* x86-64: Fix memcpy IFUNC selection  (H.J. Lu, 2016-03-04; 1 file, -13/+14)
Check Fast_Unaligned_Load, instead of Slow_BSF, and also check for Fast_Copy_Backward to enable __memcpy_ssse3_back.  The existing selection order is updated to the following:

1. __memcpy_avx_unaligned if AVX_Fast_Unaligned_Load bit is set.
2. __memcpy_sse2_unaligned if Fast_Unaligned_Load bit is set.
3. __memcpy_sse2 if SSSE3 isn't available.
4. __memcpy_ssse3_back if Fast_Copy_Backward bit is set.
5. __memcpy_ssse3

    [BZ #18880]
    * sysdeps/x86_64/multiarch/memcpy.S: Check Fast_Unaligned_Load, instead of Slow_BSF, and also check for Fast_Copy_Backward to enable __memcpy_ssse3_back.
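A C rendering of that dispatch order (the real selector is assembly in sysdeps/x86_64/multiarch/memcpy.S; the feature checks are shown as plain booleans and the implementations as extern stand-ins):

    #include <stdbool.h>
    #include <stddef.h>

    typedef void *(*memcpy_fn) (void *, const void *, size_t);

    /* Stand-ins for the real implementations listed above.  */
    extern memcpy_fn __memcpy_avx_unaligned, __memcpy_sse2_unaligned,
           __memcpy_sse2, __memcpy_ssse3_back, __memcpy_ssse3;

    memcpy_fn
    select_memcpy (bool avx_fast_unaligned_load, bool fast_unaligned_load,
                   bool has_ssse3, bool fast_copy_backward)
    {
      if (avx_fast_unaligned_load)
        return __memcpy_avx_unaligned;
      if (fast_unaligned_load)
        return __memcpy_sse2_unaligned;
      if (!has_ssse3)
        return __memcpy_sse2;
      if (fast_copy_backward)
        return __memcpy_ssse3_back;
      return __memcpy_ssse3;
    }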
* Or bit_Prefer_MAP_32BIT_EXEC in EXTRA_LD_ENVVARS  (H.J. Lu, 2016-03-03; 1 file, -1/+1)
We should turn on bit_Prefer_MAP_32BIT_EXEC in EXTRA_LD_ENVVARS without overriding other bits.

    [BZ #19758]
    * sysdeps/unix/sysv/linux/x86_64/64/dl-librecon.h (EXTRA_LD_ENVVARS): Or bit_Prefer_MAP_32BIT_EXEC.
* 2016-03-03 Paul Pluzhnikov <ppluzhnikov@google.com>  (Paul Pluzhnikov, 2016-03-03; 1 file, -13/+42)
    [BZ #19490]
    * sysdeps/x86_64/_mcount.S (_mcount): Add unwind descriptor.
    (__fentry__): Likewise.
* Copy x86_64 _mcount.op from _mcount.o  (H.J. Lu, 2016-03-03; 1 file, -0/+1)
No need to compile x86_64 _mcount.S with -pg.  We can just copy the normal static object.

    * gmon/Makefile (noprof): Add $(sysdep_noprof).
    * sysdeps/x86_64/Makefile (sysdep_noprof): Add _mcount.
* Call x86-64 __mcount_internal/__sigjmp_save directly  (H.J. Lu, 2016-03-01; 2 files, -12/+0)
Since __mcount_internal and __sigjmp_save are internal to x86-64 libc.so:

    3532: 0000000000104530   289 FUNC    LOCAL  DEFAULT   13 __mcount_internal
    3391: 0000000000034170    38 FUNC    LOCAL  DEFAULT   13 __sigjmp_save

they can be called directly without PLT.

    * sysdeps/x86_64/_mcount.S (C_LABEL(_mcount)): Call __mcount_internal directly.
    (C_LABEL(__fentry__)): Likewise.
    * sysdeps/x86_64/setjmp.S (__sigsetjmp): Call __sigjmp_save directly.
* Call x86-64 __setcontext directly  (H.J. Lu, 2016-03-01; 1 file, -1/+1)
Since x86-64 __start_context calls the internal __setcontext:

    5089: 00000000000417e0   145 FUNC    LOCAL  DEFAULT   13 __setcontext

it should call __setcontext directly.

    * sysdeps/unix/sysv/linux/x86_64/__start_context.S (__start_context): Call __setcontext directly.
* Remove kernel-features.h conditionals on pre-3.2 kernels.  (Joseph Myers, 2016-02-26; 11 files, -157/+59)
This patch follows up on the increase in minimum kernel version by removing conditionals in non-x86, non-x86_64 kernel-features.h headers that are now constant for all supported kernel versions.

    * sysdeps/unix/sysv/linux/alpha/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020621]: Make code unconditional.
    [__LINUX_KERNEL_VERSION >= 0x030200]: Likewise.
    [__LINUX_KERNEL_VERSION < 0x020621]: Remove conditional code.
    * sysdeps/unix/sysv/linux/arm/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020621]: Make code unconditional.
    [__LINUX_KERNEL_VERSION >= 0x020624]: Likewise.
    [__LINUX_KERNEL_VERSION >= 0x030000]: Likewise.
    * sysdeps/unix/sysv/linux/hppa/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020622]: Likewise.
    [__LINUX_KERNEL_VERSION >= 0x030100]: Likewise.
    [__LINUX_KERNEL_VERSION < 0x020625]: Remove conditional code.
    * sysdeps/unix/sysv/linux/ia64/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020621]: Make code unconditional.
    [__LINUX_KERNEL_VERSION >= 0x030000]: Likewise.
    * sysdeps/unix/sysv/linux/m68k/kernel-features.h [__LINUX_KERNEL_VERSION < 0x030000]: Remove conditional code.
    * sysdeps/unix/sysv/linux/microblaze/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020621]: Make code unconditional.
    [__LINUX_KERNEL_VERSION < 0x020621]: Remove conditional code.
    [__LINUX_KERNEL_VERSION < 0x020625]: Likewise.
    * sysdeps/unix/sysv/linux/mips/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020621]: Make code unconditional.
    [__LINUX_KERNEL_VERSION >= 0x030100]: Likewise.
    [_MIPS_SIM == _ABIN32 && __LINUX_KERNEL_VERSION < 0x020623]: Remove conditional code.
    * sysdeps/unix/sysv/linux/powerpc/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020625]: Make code unconditional.
    [__LINUX_KERNEL_VERSION >= 0x030000]: Likewise.
    * sysdeps/unix/sysv/linux/sh/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020625]: Likewise.
    [__LINUX_KERNEL_VERSION >= 0x030000]: Likewise.
    [__LINUX_KERNEL_VERSION < 0x020625]: Remove conditional code.
    * sysdeps/unix/sysv/linux/sparc/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020621]: Make code unconditional.
    [__LINUX_KERNEL_VERSION >= 0x030000]: Likewise.
    * sysdeps/unix/sysv/linux/tile/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x030000]: Likewise.
* Remove linux/fanotify.h configure test.  (Joseph Myers, 2016-02-24; 3 files, -62/+0)
Now that we require Linux 3.2 or later kernel headers everywhere, the configure test for <linux/fanotify.h> is obsolete; this patch removes it.

Tested for x86_64.

    * sysdeps/unix/sysv/linux/configure.ac (linux/fanotify.h): Do not test for header.
    * sysdeps/unix/sysv/linux/configure: Regenerated.
    * config.h.in (HAVE_LINUX_FANOTIFY_H): Remove #undef.
    * sysdeps/unix/sysv/linux/tst-fanotify.c [!HAVE_LINUX_FANOTIFY_H]: Remove conditional code.
    [HAVE_LINUX_FANOTIFY_H]: Make code unconditional.
* Require Linux 3.2 except on x86 / x86_64, 3.2 headers everywhere.  (Joseph Myers, 2016-02-24; 6 files, -12/+20)
In <https://sourceware.org/ml/libc-alpha/2016-01/msg00885.html> I proposed a minimum Linux kernel version of 3.2 for glibc 2.24, since Linux 2.6.32 has reached EOL.

In the discussion in February, some concerns were expressed about compatibility with OpenVZ containers.  It's not clear that these are real issues, given OpenVZ backporting kernel features and faking the kernel version for guest software, as discussed in <https://sourceware.org/ml/libc-alpha/2016-02/msg00278.html>.  It's also not clear that supporting running GNU/Linux distributions from late 2016 (at the earliest) on a kernel series from 2009 is a sensible expectation.  However, as an interim step, this patch increases the requirement everywhere except x86 / x86_64 (since the controversy was only about those architectures); the special caveats and settings can easily be removed later when we're ready to increase the requirements on x86 / x86_64 (and if someone would like to raise the issue on LWN as suggested in the previous discussion, that would be welcome).

3.2 kernel headers are required everywhere by this patch.  (x32 already requires 3.4 or later, so is unaffected by this patch.)

As usual for such a change, this patch only changes the configure scripts and associated documentation.  The intent is to follow up with removal of dead __LINUX_KERNEL_VERSION conditionals.  Each __ASSUME_* or other macro that becomes dead can then be removed independently.

Tested for x86_64 and x86.

    * sysdeps/unix/sysv/linux/configure.ac (LIBC_LINUX_VERSION): Define to 3.2.0.
    (arch_minimum_kernel): Likewise.
    * sysdeps/unix/sysv/linux/configure: Regenerated.
    * sysdeps/unix/sysv/linux/i386/configure.ac (arch_minimum_kernel): Define to 2.6.32.
    * sysdeps/unix/sysv/linux/i386/configure: Regenerated.
    * sysdeps/unix/sysv/linux/x86_64/64/configure.ac (arch_minimum_kernel): Define to 2.6.32.
    * sysdeps/unix/sysv/linux/x86_64/64/configure: Regenerated.
    * README: Document Linux 3.2 requirement.
    * manual/install.texi (Linux): Document Linux 3.2 headers requirement.
    * INSTALL: Regenerated.
* Add fts64_* to sysdeps/arm/nacl/libc.abilist  (Roland McGrath, 2016-02-22; 1 file, -0/+6)
* [x86_64] Set DL_RUNTIME_UNALIGNED_VEC_SIZE to 8  (H.J. Lu, 2016-02-19; 2 files, -11/+15)
Due to GCC bug

    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58066

__tls_get_addr may be called with an 8-byte stack alignment.  Although this bug has been fixed in GCC 4.9.4, 5.3 and 6, we can't assume that the stack will always be aligned at 16 bytes.  Since SSE optimized memory/string functions with aligned SSE register load and store are used in the dynamic linker, we must set DL_RUNTIME_UNALIGNED_VEC_SIZE to 8 so that _dl_runtime_resolve_sse will align the stack before calling _dl_fixup:

    Dump of assembler code for function _dl_runtime_resolve_sse:
       0x00007ffff7deea90 <+0>:   push   %rbx
       0x00007ffff7deea91 <+1>:   mov    %rsp,%rbx
       0x00007ffff7deea94 <+4>:   and    $0xfffffffffffffff0,%rsp
                                  ^^^^^^^^^^^ Align stack to 16 bytes
       0x00007ffff7deea98 <+8>:   sub    $0x100,%rsp
       0x00007ffff7deea9f <+15>:  mov    %rax,0xc0(%rsp)
       0x00007ffff7deeaa7 <+23>:  mov    %rcx,0xc8(%rsp)
       0x00007ffff7deeaaf <+31>:  mov    %rdx,0xd0(%rsp)
       0x00007ffff7deeab7 <+39>:  mov    %rsi,0xd8(%rsp)
       0x00007ffff7deeabf <+47>:  mov    %rdi,0xe0(%rsp)
       0x00007ffff7deeac7 <+55>:  mov    %r8,0xe8(%rsp)
       0x00007ffff7deeacf <+63>:  mov    %r9,0xf0(%rsp)
       0x00007ffff7deead7 <+71>:  movaps %xmm0,(%rsp)
       0x00007ffff7deeadb <+75>:  movaps %xmm1,0x10(%rsp)
       0x00007ffff7deeae0 <+80>:  movaps %xmm2,0x20(%rsp)
       0x00007ffff7deeae5 <+85>:  movaps %xmm3,0x30(%rsp)
       0x00007ffff7deeaea <+90>:  movaps %xmm4,0x40(%rsp)
       0x00007ffff7deeaef <+95>:  movaps %xmm5,0x50(%rsp)
       0x00007ffff7deeaf4 <+100>: movaps %xmm6,0x60(%rsp)
       0x00007ffff7deeaf9 <+105>: movaps %xmm7,0x70(%rsp)

    [BZ #19679]
    * sysdeps/x86_64/dl-trampoline.S (DL_RUNIME_UNALIGNED_VEC_SIZE): Renamed to ...
    (DL_RUNTIME_UNALIGNED_VEC_SIZE): This.  Set to 8.
    (DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
    (DL_RUNTIME_RESOLVE_REALIGN_STACK): This.  Updated.
    (DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
    (DL_RUNTIME_RESOLVE_REALIGN_STACK): This.
    * sysdeps/x86_64/dl-trampoline.h (DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
    (DL_RUNTIME_RESOLVE_REALIGN_STACK): This.
* Fix ldbl-128ibm nextafterl, nexttowardl sign of zero result (bug 19678).  (Joseph Myers, 2016-02-19; 1 file, -0/+3)
The ldbl-128ibm implementation of nextafterl / nexttowardl returns -0 in FE_DOWNWARD mode when taking the next value below the least positive subnormal, when it should return +0.  This patch fixes it to check explicitly for this case.

Tested for powerpc.

    [BZ #19678]
    * sysdeps/ieee754/ldbl-128ibm/s_nextafterl.c (__nextafterl): Ensure +0.0 is returned when taking the next value below the least positive value.
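A sketch of the fixed case (illustrative only; LDBL_TRUE_MIN requires C11, and the program needs -lm):

    #include <fenv.h>
    #include <float.h>
    #include <math.h>
    #include <stdio.h>

    int
    main (void)
    {
      fesetround (FE_DOWNWARD);
      /* Stepping below the least positive value must give +0, not -0.  */
      long double r = nextafterl (LDBL_TRUE_MIN, -1.0L);
      printf ("result is %s zero\n", signbit (r) ? "negative" : "positive");
      return 0;
    }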
* malloc: Remove NO_THREADS  (Florian Weimer, 2016-02-19; 1 file, -19/+0)
No functional change.  It was not possible to build without threading support before.
* Fix ldbl-128ibm powl overflow handling (bug 19674).  (Joseph Myers, 2016-02-19; 1 file, -17/+17)
The ldbl-128ibm implementation of powl has some problems in the case of overflow or underflow, which are mainly visible in non-default rounding modes.

* When overflow or underflow is detected early, the correct sign of an overflowing or underflowing result is not allowed for.  This is mostly hidden in the default rounding mode by the errno-setting wrappers recomputing the result (except in non-default error-handling modes such as -lieee), but visible in other rounding modes where a result that is not zero or infinity causes the wrappers not to do the recomputation.

* The final scaling is done before the sign is incorporated in the result, but should be done afterwards for correct overflowing and underflowing results in directed rounding modes.

This patch fixes those problems.  Tested for powerpc.

    [BZ #19674]
    * sysdeps/ieee754/ldbl-128ibm/e_powl.c (__ieee754_powl): Include sign in overflowing and underflowing results when overflow or underflow is detected early.  Include sign in result before rather than after scaling.
* Fix ldbl-128ibm remainderl, remquol equality tests (bug 19603).  (Joseph Myers, 2016-02-19; 2 files, -0/+4)
The ldbl-128ibm implementations of remainderl and remquol have logic resulting in incorrect tests for equality of the absolute values of the arguments.  Equality is tested based on the integer representations of the high and low parts, with the sign bit masked off the high part - but when this changes the sign of the high part, the sign of the low part needs to be changed as well, and failure to do this means arguments are wrongly treated as equal when they are not.  This patch fixes the logic to adjust signs of low parts as needed.

Tested for powerpc.

    [BZ #19603]
    * sysdeps/ieee754/ldbl-128ibm/e_remainderl.c (__ieee754_remainderl): Adjust sign of integer version of low part when taking absolute value of high part.
    * sysdeps/ieee754/ldbl-128ibm/s_remquol.c (__remquol): Likewise.
    * math/libm-test.inc (remainder_test_data): Add another test.
    (remquo_test_data): Likewise.
* Fix ldbl-128ibm fmodl handling of equal arguments with low part zero (bug 19602).  (Joseph Myers, 2016-02-18; 3 files, -0/+88)
The ldbl-128ibm implementation of fmodl has logic to detect when the first argument has absolute value less than or equal to the second.  This logic is only correct for nonzero low parts; if the high parts are equal and the low parts are zero, then the signs of the low parts (which have no semantic effect on the value of the long double number) can result in equal values being wrongly treated as unequal, and an incorrect result being returned from fmodl.  This patch fixes this by checking for the case of zero low parts.

Although this does show up in tests from libm-test.inc (both tests of fmodl, and, indirectly, of remainderl / dreml), the dependence on non-semantic zero low parts means that test shouldn't be expected to reproduce it reliably; thus, this patch adds a standalone test that sets up affected values using unions.

Tested for powerpc.

    [BZ #19602]
    * sysdeps/ieee754/ldbl-128ibm/e_fmodl.c (__ieee754_fmodl): Handle equal high parts and both low parts zero specially.
    * sysdeps/ieee754/ldbl-128ibm/test-fmodl-ldbl-128ibm.c: New test.
    * sysdeps/ieee754/ldbl-128ibm/Makefile [$(subdir) = math] (tests): Add test-fmodl-ldbl-128ibm.
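A sketch of the affected inputs, set up via a union in the spirit of the new test (ldbl-128ibm represents a long double as a pair of doubles, so this is powerpc-specific; details assumed):

    #include <math.h>
    #include <stdio.h>

    /* Overlay the two component doubles of an IBM long double so the
       low part's sign can be set explicitly.  */
    union ibm128 { long double ld; double dd[2]; };

    int
    main (void)
    {
      union ibm128 x = { .dd = { 1.0, 0.0 } };    /* high 1.0, low +0 */
      union ibm128 y = { .dd = { 1.0, -0.0 } };   /* high 1.0, low -0 */
      /* x.ld == y.ld, so fmodl must treat them as equal and return a
         zero result.  */
      printf ("fmodl = %Lg\n", fmodl (x.ld, y.ld));
      return 0;
    }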
* Fix ldbl-128ibm fmodl handling of subnormal results (bug 19595).  (Joseph Myers, 2016-02-18; 1 file, -9/+5)
The ldbl-128ibm implementation of fmodl has completely bogus logic for subnormal results (in this context, that means results for which the result is in the subnormal range for double, not results with absolute value below LDBL_MIN), based on code used for ldbl-128 that is correct in that case but incorrect in the ldbl-128ibm use.  This patch fixes it to convert the mantissa into the correct form expected by ldbl_insert_mantissa, removing the other cases of the code that were incorrect and in one case unreachable for ldbl-128ibm.  A correct exponent value is then passed to ldbl_insert_mantissa to reflect the shifted result.

Tested for powerpc.

    [BZ #19595]
    * sysdeps/ieee754/ldbl-128ibm/e_fmodl.c (__ieee754_fmodl): Use common logic for all cases of shifting subnormal results.  Do not insert sign bit in shifted mantissa.  Always pass -1023 as biased exponent to ldbl_insert_mantissa in subnormal case.
* Fix ldbl-128ibm roundl for non-default rounding modes (bug 19594).  (Joseph Myers, 2016-02-18; 1 file, -36/+34)
The ldbl-128ibm implementation of roundl is only correct in round-to-nearest mode (in other modes, there are incorrect results and overflow exceptions in some cases).  This patch reimplements it along the lines used for floorl, ceill and truncl, using __round on the high part, and on the low part if the high part is an integer, and then adjusting in the cases where this is incorrect.

Tested for powerpc.

    [BZ #19594]
    * sysdeps/ieee754/ldbl-128ibm/s_roundl.c (__roundl): Use __round on high and low parts then adjust result and use ldbl_canonicalize_int if needed.
* Fix ldbl-128ibm truncl for non-default rounding modes (bug 19593).  (Joseph Myers, 2016-02-18; 1 file, -39/+11)
The ldbl-128ibm implementation of truncl is only correct in round-to-nearest mode (in other modes, there are incorrect results and overflow exceptions in some cases).  It is also unnecessarily complicated, rounding both high and low parts to the nearest integer and then adjusting for the semantics of trunc, when it seems more natural to take the truncation of the high part (__trunc optimized inline versions can be used), and the floor or ceiling of the low part (depending on the sign of the high part) if the high part is an integer, as was done for floorl and ceill.  This patch makes it use that simpler approach.

Tested for powerpc.

    [BZ #19593]
    * sysdeps/ieee754/ldbl-128ibm/s_truncl.c (__truncl): Use __trunc on high part and __floor or __ceil on low part then use ldbl_canonicalize_int if needed.
* Fix ldbl-128ibm ceill for non-default rounding modes (bug 19592).  (Joseph Myers, 2016-02-18; 1 file, -36/+16)
The ldbl-128ibm implementation of ceill is only correct in round-to-nearest mode (in other modes, there are incorrect results and overflow exceptions in some cases).  It is also unnecessarily complicated, rounding both high and low parts to the nearest integer and then adjusting for the semantics of ceil, when it seems more natural to take the ceiling of the high part (__ceil optimized inline versions can be used), and that of the low part if the high part is an integer, as was done for floorl.  This patch makes it use that simpler approach.

Tested for powerpc.

    [BZ #19592]
    * sysdeps/ieee754/ldbl-128ibm/s_ceill.c (__ceill): Use __ceil on high and low parts then use ldbl_canonicalize_int if needed.
* Fix ldbl-128ibm floorl for non-default rounding modes (bug 17899).  (Joseph Myers, 2016-02-18; 2 files, -29/+49)
The ldbl-128ibm implementation of floorl is only correct in round-to-nearest mode (in other modes, there are incorrect results and overflow exceptions in some cases going beyond the incorrect signs of zero results noted in bug 17899).  It is also unnecessarily complicated, rounding both high and low parts to the nearest integer and then adjusting for the semantics of floor, when it seems more natural to take the floor of the high part (__floor optimized inline versions can be used), and that of the low part if the high part is an integer.  This patch makes it use that simpler approach, with a canonicalization that works in all rounding modes (given that the only way the result can be noncanonical is if taking the floor of a negative noninteger low part increased its exponent).

Tested for powerpc, where over a thousand failures are removed from test-ldouble.out (floorl problems affect many powl tests).

    [BZ #17899]
    * sysdeps/ieee754/ldbl-128ibm/math_ldbl.h (ldbl_canonicalize_int): New function.
    * sysdeps/ieee754/ldbl-128ibm/s_floorl.c (__floorl): Use __floor on high and low parts then use ldbl_canonicalize_int if needed.
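A conceptual sketch of the high/low decomposition described above (illustrative only; the real s_floorl.c works on the raw representation and applies ldbl_canonicalize_int where needed):

    #include <math.h>

    /* An ldbl-128ibm value is hi + lo with |lo| <= ulp(hi)/2.  Take the
       floor of the high part; floor the low part only when the high
       part is already integral.  */
    long double
    floorl_sketch (double hi, double lo)
    {
      double fhi = floor (hi);
      double flo;
      if (fhi == hi)
        flo = floor (lo);   /* May still need canonicalization.  */
      else
        flo = 0.0;          /* floor (hi) alone is already the answer.  */
      return (long double) fhi + (long double) flo;
    }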
* Add _STRING_INLINE_unaligned and string_private.h  (H.J. Lu, 2016-02-18; 10 files, -11/+112)
As discussed in https://sourceware.org/ml/libc-alpha/2015-10/msg00403.html the setting of _STRING_ARCH_unaligned currently controls the external GLIBC ABI as well as selecting the use of unaligned accesses within GLIBC.  Since _STRING_ARCH_unaligned was recently changed for AArch64, this would potentially break the ABI in GLIBC 2.23, so split the uses and add _STRING_INLINE_unaligned to select the string ABI.  This setting must be fixed for each target, while _STRING_ARCH_unaligned may be changed from release to release.

_STRING_ARCH_unaligned is used unconditionally in glibc.  But <bits/string.h>, which defines _STRING_ARCH_unaligned, isn't included with -Os.  Since _STRING_ARCH_unaligned is internal to glibc and may change between glibc releases, it should be made private to glibc.  _STRING_ARCH_unaligned should be defined in the new string_private.h header file, which is included unconditionally from the internal <string.h> for the glibc build.

    [BZ #19462]
    * bits/string.h (_STRING_ARCH_unaligned): Renamed to ...
    (_STRING_INLINE_unaligned): This.
    * include/string.h: Include <string_private.h>.
    * string/bits/string2.h: Replace _STRING_ARCH_unaligned with _STRING_INLINE_unaligned.
    * sysdeps/aarch64/bits/string.h (_STRING_ARCH_unaligned): Removed.
    (_STRING_INLINE_unaligned): New.
    * sysdeps/aarch64/string_private.h: New file.
    * sysdeps/generic/string_private.h: Likewise.
    * sysdeps/m68k/m680x0/m68020/string_private.h: Likewise.
    * sysdeps/s390/string_private.h: Likewise.
    * sysdeps/x86/string_private.h: Likewise.
    * sysdeps/m68k/m680x0/m68020/bits/string.h (_STRING_ARCH_unaligned): Renamed to ...
    (_STRING_INLINE_unaligned): This.
    * sysdeps/s390/bits/string.h (_STRING_ARCH_unaligned): Renamed to ...
    (_STRING_INLINE_unaligned): This.
    * sysdeps/sparc/bits/string.h (_STRING_ARCH_unaligned): Renamed to ...
    (_STRING_INLINE_unaligned): This.
    * sysdeps/x86/bits/string.h (_STRING_ARCH_unaligned): Renamed to ...
    (_STRING_INLINE_unaligned): This.
* Use PIC relocation in ALIAS_IMPL  (Andrew Senkevich, 2016-02-17; 1 file, -2/+1)
Since libmvec_nonshared.a may be linked into shared objects, ALIAS_IMPL should use PIC relocation.

    [BZ #19590]
    * sysdeps/x86_64/fpu/svml_finite_alias.S (ALIAS_IMPL): Use PIC relocation.
* powerpc: Regenerate libm-test-ulps  (Rajalakshmi Srinivasaraghavan, 2016-02-04; 1 file, -0/+10)
* Fix MIPS mmap negative offset handling for consistency (bug 19550).  (Joseph Myers, 2016-02-01; 6 files, -2/+40)
The handling of negative offsets in MIPS mmap is inconsistent with other architectures, as shown by failure of the test posix/tst-mmap-offset for o32 and n32.  The MIPS mmap syscall uses a signed argument and does a signed arithmetic shift on it, whereas the glibc semantics expected by that test are for the offset to be considered as a large positive offset.

This patch makes MIPS consistent with other architectures as far as possible by using the mmap2 syscall on o32 (#including the generic implementation), and making mmap not an alias for mmap64 for n32, with a custom implementation for n32 that zero-extends the offset argument to 64-bit before calling the mmap syscall.

Tested for MIPS64 (o32, n32, n64).

    [BZ #19550]
    * sysdeps/unix/sysv/linux/mips/mips32/mmap.c: New file.
    * sysdeps/unix/sysv/linux/mips/mips64/mmap64.c: Move to ....
    * sysdeps/unix/sysv/linux/mips/mips64/n64/mmap64.c: ... here.
    * sysdeps/unix/sysv/linux/mips/mips64/n32/mmap.c: New file.
    * sysdeps/unix/sysv/linux/mips/mips64/n32/syscalls.list (mmap64): New syscall entry.
    * sysdeps/unix/sysv/linux/mips/mips64/n64/syscalls.list (mmap): New syscall entry.
    * sysdeps/unix/sysv/linux/mips/mips64/syscalls.list (mmap): Remove syscall entry.
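A sketch of the semantics being fixed, in the spirit of posix/tst-mmap-offset (test details assumed; the caller supplies a descriptor for a sufficiently large file):

    #define _FILE_OFFSET_BITS 64
    #include <stdio.h>
    #include <sys/mman.h>

    /* An offset with bit 31 set must be treated as a large positive
       file offset, not sign-extended to a negative one.  */
    int
    probe (int fd)
    {
      const off_t offset = 0x80000000ll;
      void *p = mmap (NULL, 4096, PROT_READ, MAP_SHARED, fd, offset);
      if (p == MAP_FAILED)
        {
          perror ("mmap");
          return 1;
        }
      munmap (p, 4096);
      return 0;
    }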
* Fix MIPS64 memcpy regression.  (Steve Ellcey, 2016-01-28; 1 file, -1/+1)
The MIPS memcpy optimizations at <https://sourceware.org/ml/libc-alpha/2015-10/msg00597.html> introduced a bug causing many string function tests to fail with segfaults for n32 and n64:

    FAIL: string/stratcliff
    FAIL: string/test-bcopy
    FAIL: string/test-memccpy
    FAIL: string/test-memcmp
    FAIL: string/test-memcpy
    FAIL: string/test-memmove
    FAIL: string/test-mempcpy
    FAIL: string/test-stpncpy
    FAIL: string/test-strncmp
    FAIL: string/test-strncpy

(Some failures in other directories could also be caused by this bug.)

The problem is that after the check for whether a word of input is left that can be copied as a word before moving to byte copies, a load can occur in the branch delay slot, resulting in a segfault if we are at the end of a page and the following page is unmapped.  I don't see how this would have passed the tests as reported in the original patch posting (different kernel configurations affecting the code setting up unmapped pages, maybe?), since the tests in question don't appear to have changed recently.  This patch moves a later instruction into the delay slot, as suggested at <https://sourceware.org/ml/libc-alpha/2016-01/msg00584.html>.

Tested for n32 and n64.

2016-01-28  Steve Ellcey  <sellcey@imgtec.com>
            Joseph Myers  <joseph@codesourcery.com>

    * sysdeps/mips/memcpy.S (MEMCPY_NAME) [USE_DOUBLE]: Avoid word load in branch delay slot when less than a word of input left.
* Remove unused variables  (Andreas Schwab, 2016-01-27; 4 files, -6/+0)
They are flagged by -Wunused-const-variable.
* Update localplt.data for 32-bit sparc.  (David S. Miller, 2016-01-26; 1 file, -0/+1)
    * sysdeps/unix/sysv/linux/sparc/sparc32/localplt.data: Add _Q_cmp.
* Define __sqrtl_finite on sparc 32-bit with correct symbol version.  (David S. Miller, 2016-01-25; 3 files, -2/+9)
    * sysdeps/sparc/sparc32/Versions (GLIBC_2.23): Add entry for __sqrtl_finite.
    * sysdeps/sparc/sparc32/fpu/e_sqrtl.c (__sqrtl_finite): Define instead using versioned_symbol.
    * sysdeps/unix/sysv/linux/sparc/sparc32/libm.abilist: Fix ordering of entries.
* Adjust sparc 32-bit __sqrtl_finite version tag.  (David S. Miller, 2016-01-25; 1 file, -1/+1)
    * sysdeps/unix/sysv/linux/sparc/sparc32/libm.abilist: Move __sqrtl_finite to GLIBC_2.23.
* Update Alpha libm-test-ulps  (Richard Henderson, 2016-01-25; 1 file, -284/+396)