path: root/sysdeps
Commit message | Author | Date | Files | Lines
* [x86] Add a feature bit: Fast_Unaligned_Copy (hjl/pr19583) | H.J. Lu | 2016-03-23 | 3 | -2/+17
    On AMD processors, memcpy optimized with unaligned SSE loads is slower than
    memcpy optimized with aligned SSSE3, while the other string functions are
    faster with unaligned SSE loads.  A feature bit, Fast_Unaligned_Copy, is
    added to select the memcpy optimized with unaligned SSE loads.

    [BZ #19583]
    * sysdeps/x86/cpu-features.c (init_cpu_features): Set Fast_Unaligned_Copy
    with Fast_Unaligned_Load for Intel processors.  Set Fast_Copy_Backward
    for AMD Excavator processors.
    * sysdeps/x86/cpu-features.h (bit_arch_Fast_Unaligned_Copy): New.
    (index_arch_Fast_Unaligned_Copy): Likewise.
    * sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Check
    Fast_Unaligned_Copy instead of Fast_Unaligned_Load.
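    As a rough illustration of what this split buys, here is a hedged sketch of the
    vendor-dependent bit setting; the bit_arch_* names follow the ChangeLog, but the
    struct layout, bit values and helper are simplified placeholders, not the glibc code.

      #include <stdbool.h>

      struct cpu_features_sketch
      {
        unsigned int feature[1];          /* simplified arch-feature word */
      };

      #define bit_arch_Fast_Unaligned_Load  (1u << 0)   /* values illustrative */
      #define bit_arch_Fast_Unaligned_Copy  (1u << 1)   /* the new bit */
      #define bit_arch_Fast_Copy_Backward   (1u << 2)

      void
      init_feature_bits (struct cpu_features_sketch *cpu, bool is_intel,
                         bool fast_unaligned_sse)
      {
        if (is_intel && fast_unaligned_sse)
          /* On these Intel CPUs memcpy with unaligned SSE loads wins, so the
             copy bit is set together with the load bit.  */
          cpu->feature[0] |= bit_arch_Fast_Unaligned_Load
                             | bit_arch_Fast_Unaligned_Copy;
        else if (!is_intel)
          /* On AMD (e.g. Excavator), unaligned SSE loads still help the other
             string functions, but memcpy keeps the aligned SSSE3 / backward
             copy paths, so Fast_Unaligned_Copy stays clear.  */
          cpu->feature[0] |= bit_arch_Fast_Unaligned_Load
                             | bit_arch_Fast_Copy_Backward;
      }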
* x86, pthread_cond_*wait: Do not depend on %eax not being clobbered | Nick Alcock | 2016-03-23 | 2 | -0/+2
    The x86-specific versions of both pthread_cond_wait and pthread_cond_timedwait
    have (in their fall-back-to-futex-wait slow paths) calls to
    __pthread_mutex_cond_lock_adjust followed by __pthread_mutex_unlock_usercnt,
    which load the parameters before the first call but then assume that the first
    parameter, in %eax, will survive unaffected.  This happens to have been true
    before now, but %eax is a call-clobbered register, and this assumption is not
    safe: it could change at any time, at GCC's whim, and indeed the
    stack-protector canary checking code clobbers %eax while checking that the
    canary is uncorrupted.

    So reload %eax before calling __pthread_mutex_unlock_usercnt.  (Do this
    unconditionally, even when stack-protection is not in use, because it's the
    right thing to do, it's a slow path, and anything else is dicing with death.)

    * sysdeps/unix/sysv/linux/i386/pthread_cond_timedwait.S: Reload
    call-clobbered %eax on retry path.
    * sysdeps/unix/sysv/linux/i386/pthread_cond_wait.S: Likewise.
* Don't set %rcx twice before "rep movsb" | H.J. Lu | 2016-03-22 | 1 | -1/+0
    * sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S (MEMCPY): Don't set
    %rcx twice before "rep movsb".
* Set index_arch_AVX_Fast_Unaligned_Load only for Intel processors | H.J. Lu | 2016-03-22 | 2 | -74/+88
    Since only Intel processors with AVX2 have fast unaligned load, we should
    set index_arch_AVX_Fast_Unaligned_Load only for Intel processors.  Move
    AVX, AVX2, AVX512, FMA and FMA4 detection into get_common_indeces and call
    get_common_indeces for other processors.

    Add CPU_FEATURES_CPU_P and CPU_FEATURES_ARCH_P to avoid loading
    GLRO(dl_x86_cpu_features) in cpu-features.c.

    [BZ #19583]
    * sysdeps/x86/cpu-features.c (get_common_indeces): Remove inline.  Check
    family before setting family, model and extended_model.  Set AVX, AVX2,
    AVX512, FMA and FMA4 usable bits here.
    (init_cpu_features): Replace HAS_CPU_FEATURE and HAS_ARCH_FEATURE with
    CPU_FEATURES_CPU_P and CPU_FEATURES_ARCH_P.  Set
    index_arch_AVX_Fast_Unaligned_Load for Intel processors with usable AVX2.
    Call get_common_indeces for other processors with family == NULL.
    * sysdeps/x86/cpu-features.h (CPU_FEATURES_CPU_P): New macro.
    (CPU_FEATURES_ARCH_P): Likewise.
    (HAS_CPU_FEATURE): Use CPU_FEATURES_CPU_P.
    (HAS_ARCH_FEATURE): Use CPU_FEATURES_ARCH_P.
* Remove __ASSUME_GETDENTS64_SYSCALL. | Joseph Myers | 2016-03-22 | 2 | -114/+81
    This patch removes the __ASSUME_GETDENTS64_SYSCALL macro, as its definition
    is constant given the new kernel version requirements (and was constant
    anyway before those requirements except for MIPS n32).

    Note that the "#ifdef __NR_getdents64" conditional *is* still needed,
    because MIPS n64 only has the getdents syscall (being a 64-bit ABI, that
    syscall is 64-bit; the difference between the two on 64-bit architectures
    is where d_type goes).  If MIPS n64 were to gain the getdents64 syscall and
    we wanted to use it conditionally on the kernel version at runtime we'd
    have to revert this patch, but I think that's unlikely (and in any case, we
    could follow the simpler approach of undefining __NR_getdents64 if the
    syscall can't be assumed, just like we do for accept4 / recvmmsg / sendmmsg
    syscalls on architectures where socketcall support came first).

    Most of the getdents.c changes are reindentation.

    Tested for x86_64 and x86 that installed stripped shared libraries are
    unchanged by the patch.

    * sysdeps/unix/sysv/linux/kernel-features.h
    (__ASSUME_GETDENTS64_SYSCALL): Remove macro.
    * sysdeps/unix/sysv/linux/getdents.c [!__ASSUME_GETDENTS64_SYSCALL]:
    Remove conditional code.  [!have_no_getdents64_defined]: Likewise.
    (__GETDENTS): Remove __have_no_getdents64 conditional.
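    To make the "where d_type goes" remark concrete, here are the two record
    layouts as documented in getdents(2); this is an illustration, not glibc
    source, and the struct declarations are written out here for reference only.

      #include <stdint.h>

      struct linux_dirent            /* record returned by getdents */
      {
        unsigned long  d_ino;
        unsigned long  d_off;
        unsigned short d_reclen;
        char           d_name[];     /* NUL-terminated; d_type is stored in the
                                        very last byte of the record, after the
                                        name */
      };

      struct linux_dirent64          /* record returned by getdents64 */
      {
        uint64_t       d_ino;
        int64_t        d_off;
        unsigned short d_reclen;
        unsigned char  d_type;       /* here, right after d_reclen */
        char           d_name[];
      };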
* Remove __ASSUME_SIGNALFD4. | Joseph Myers | 2016-03-21 | 2 | -26/+1
    Current Linux kernel version requirements mean the signalfd4 syscall can
    always be assumed to be available.  This patch removes __ASSUME_SIGNALFD4
    and associated conditionals.

    Tested for x86_64 and x86 that installed stripped shared libraries are
    unchanged by the patch.

    * sysdeps/unix/sysv/linux/kernel-features.h (__ASSUME_SIGNALFD4): Remove
    macro.
    * sysdeps/unix/sysv/linux/signalfd.c: Do not include <kernel-features.h>.
    (signalfd) [__NR_signalfd4]: Make code unconditional.
    (signalfd) [!__ASSUME_SIGNALFD4]: Remove conditional code.
* posix: Fix posix_spawn implicit check style | Adhemerval Zanella | 2016-03-21 | 1 | -1/+1
    This patch fixes the implicit check style added in 2a69f853c to follow the
    general convention.  Checked on x86_64.

    * sysdeps/unix/sysv/linux/spawni.c (__spawnix): Fix implicit check style.
* Use JUMPTARGET in x86-64 pthread | H.J. Lu | 2016-03-21 | 3 | -7/+3
    When the PLT may be used, JUMPTARGET should be used instead of calling the
    function directly.

    * sysdeps/unix/sysv/linux/x86_64/cancellation.S
    (__pthread_enable_asynccancel): Use JUMPTARGET to call __pthread_unwind.
    * sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S
    (__condvar_cleanup2): Use JUMPTARGET to call _Unwind_Resume.
    * sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S
    (__condvar_cleanup1): Likewise.
* posix: Fix posix_spawn invalid memory access | Adhemerval Zanella | 2016-03-20 | 1 | -1/+1
    The current Linux posix_spawn implementation does not test whether the pid
    argument is valid before trying to update it in the success case.  This
    patch fixes it.

    Tested on x86_64 and i686.

    * sysdeps/unix/sysv/linux/spawni.c (__spawnix): Fix invalid memory access
    when posix_spawn succeeds and the pid argument is null.
    * posix/tst-spawn.c (do_test): Add posix_spawn null pid argument for
    success case.
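    The shape of the fix, as a minimal standalone sketch (the helper name and
    structure are invented for illustration; this is not the spawni.c code):
    only store the child pid back through the caller's pointer when one was
    given, since POSIX allows posix_spawn() to be called with a null pid.

      #include <errno.h>
      #include <stddef.h>
      #include <sys/types.h>

      int
      spawn_finish (pid_t *pid, pid_t new_pid)
      {
        if (new_pid < 0)
          return errno;          /* the spawn itself failed */

        if (pid != NULL)         /* the missing check: pid may be NULL */
          *pid = new_pid;
        return 0;
      }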
* hurd: Add c++-types expected result | Samuel Thibault | 2016-03-20 | 1 | -0/+67
    * sysdeps/mach/hurd/i386/c++-types.data: New file.
* hurd: Allow inlining IO locks | Samuel Thibault | 2016-03-20 | 1 | -0/+3
    * sysdeps/mach/hurd/libc-lock.h (_IO_lock_inexpensive): Define to 1.
* hurd: Do not hide rtld symbols which need to be preempted | Samuel Thibault | 2016-03-20 | 2 | -0/+43
    * sysdeps/generic/dl-fcntl.h: New file, adds attribute_hidden to __open
    and __fcntl.
    * sysdeps/mach/hurd/dl-fcntl.h: New file, adds attribute_hidden to
    __fcntl only.
    * include/fcntl.h [IS_IN (rtld)]: Include <dl-fcntl.h> instead of adding
    attribute_hidden to __open and __fcntl.
* hurd: Break errnos.d / libc-modules.h dependency loop | Samuel Thibault | 2016-03-20 | 1 | -2/+4
    Generating errnos.d does not actually need libc-modules.h.

    * sysdeps/mach/hurd/Makefile ($(common-objpfx)errnos.d): Strip
    "-include $(common-objpfx)libc-modules.h" from CPPFLAGS, and do not
    depend on libc-modules.h.
* Remove __ASSUME_EVENTFD2, move eventfd to syscalls.list. | Joseph Myers | 2016-03-17 | 3 | -52/+1
    Given current Linux kernel version requirements, we can assume the presence
    of the eventfd2 syscall.  This means that __ASSUME_EVENTFD2 can be removed,
    and a syscalls.list entry suffices for eventfd instead of needing a .c
    file.  This patch implements those changes.

    Tested for x86_64 and x86 (not that that means much, given the lack of
    testsuite coverage for eventfd).

    * sysdeps/unix/sysv/linux/kernel-features.h (__ASSUME_EVENTFD2): Remove
    macro.
    * sysdeps/unix/sysv/linux/eventfd.c: Remove file.
    * sysdeps/unix/sysv/linux/syscalls.list (eventfd): New syscall entry.
* Remove __ASSUME_FALLOCATE. | Joseph Myers | 2016-03-17 | 2 | -36/+12
    Given current Linux kernel version requirements, we can always assume the
    fallocate syscall to be available.  This patch removes __ASSUME_FALLOCATE
    and a test for whether __NR_fallocate is defined.

    Tested for x86_64 and x86 that installed stripped shared libraries are
    unchanged by the patch.

    * sysdeps/unix/sysv/linux/kernel-features.h (__ASSUME_FALLOCATE): Remove
    macro.
    * sysdeps/unix/sysv/linux/wordsize-64/posix_fallocate.c: Do not include
    <kernel-features.h>.  [!__ASSUME_FALLOCATE]: Remove conditional code.
    (posix_fallocate) [__NR_fallocate]: Make code unconditional.
* Use JUMPTARGET in x86-64 mathvec | H.J. Lu | 2016-03-16 | 38 | -130/+130
    When the PLT may be used, JUMPTARGET should be used instead of calling the
    function directly.

    * sysdeps/x86_64/fpu/multiarch/svml_d_cos2_core_sse4.S (_ZGVbN2v_cos_sse4): Use JUMPTARGET to call cos.
    * sysdeps/x86_64/fpu/multiarch/svml_d_cos4_core_avx2.S (_ZGVdN4v_cos_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_cos8_core_avx512.S (_ZGVdN4v_cos): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_exp2_core_sse4.S (_ZGVbN2v_exp_sse4): Use JUMPTARGET to call exp.
    * sysdeps/x86_64/fpu/multiarch/svml_d_exp4_core_avx2.S (_ZGVdN4v_exp_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core_avx512.S (_ZGVdN4v_exp): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_log2_core_sse4.S (_ZGVbN2v_log_sse4): Use JUMPTARGET to call log.
    * sysdeps/x86_64/fpu/multiarch/svml_d_log4_core_avx2.S (_ZGVdN4v_log_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_log8_core_avx512.S (_ZGVdN4v_log): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_pow2_core_sse4.S (_ZGVbN2vv_pow_sse4): Use JUMPTARGET to call pow.
    * sysdeps/x86_64/fpu/multiarch/svml_d_pow4_core_avx2.S (_ZGVdN4vv_pow_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core_avx512.S (_ZGVdN4vv_pow): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sin2_core_sse4.S (_ZGVbN2v_sin_sse4): Use JUMPTARGET to call sin.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sin4_core_avx2.S (_ZGVdN4v_sin_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sin8_core_avx512.S (_ZGVdN4v_sin): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core_sse4.S (_ZGVbN2vvv_sincos_sse4): Use JUMPTARGET to call sin and cos.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core_avx2.S (_ZGVdN4vvv_sincos_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core_avx512.S (_ZGVdN4vvv_sincos): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_cosf16_core_avx512.S (_ZGVdN8v_cosf): Use JUMPTARGET to call cosf.
    * sysdeps/x86_64/fpu/multiarch/svml_s_cosf4_core_sse4.S (_ZGVbN4v_cosf_sse4): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_cosf8_core_avx2.S (_ZGVdN8v_cosf_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core_avx512.S (_ZGVdN8v_expf): Use JUMPTARGET to call expf.
    * sysdeps/x86_64/fpu/multiarch/svml_s_expf4_core_sse4.S (_ZGVbN4v_expf_sse4): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_expf8_core_avx2.S (_ZGVdN8v_expf_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core_avx512.S (_ZGVdN8v_logf): Use JUMPTARGET to call logf.
    * sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core_sse4.S (_ZGVbN4v_logf_sse4): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core_avx2.S (_ZGVdN8v_logf_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core_avx512.S (_ZGVdN8vv_powf): Use JUMPTARGET to call powf.
    * sysdeps/x86_64/fpu/multiarch/svml_s_powf4_core_sse4.S (_ZGVbN4vv_powf_sse4): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_powf8_core_avx2.S (_ZGVdN8vv_powf_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core_avx512.S (_ZGVdN8vv_powf): Use JUMPTARGET to call sinf and cosf.
    * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core_sse4.S (_ZGVbN4vvv_sincosf_sse4): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core_avx2.S (_ZGVdN8vvv_sincosf_avx2): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_sinf16_core_avx512.S (_ZGVdN8v_sinf): Use JUMPTARGET to call sinf.
    * sysdeps/x86_64/fpu/multiarch/svml_s_sinf4_core_sse4.S (_ZGVbN4v_sinf_sse4): Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_sinf8_core_avx2.S (_ZGVdN8v_sinf_avx2): Likewise.
    * sysdeps/x86_64/fpu/svml_d_wrapper_impl.h (WRAPPER_IMPL_SSE2): Use JUMPTARGET to call callee.  (WRAPPER_IMPL_SSE2_ff): Likewise.  (WRAPPER_IMPL_SSE2_fFF): Likewise.  (WRAPPER_IMPL_AVX): Likewise.  (WRAPPER_IMPL_AVX_ff): Likewise.  (WRAPPER_IMPL_AVX_fFF): Likewise.  (WRAPPER_IMPL_AVX512): Likewise.  (WRAPPER_IMPL_AVX512_ff): Likewise.
    * sysdeps/x86_64/fpu/svml_s_wrapper_impl.h (WRAPPER_IMPL_SSE2): Likewise.  (WRAPPER_IMPL_SSE2_ff): Likewise.  (WRAPPER_IMPL_SSE2_fFF): Likewise.  (WRAPPER_IMPL_AVX): Likewise.  (WRAPPER_IMPL_AVX_ff): Likewise.  (WRAPPER_IMPL_AVX_fFF): Likewise.  (WRAPPER_IMPL_AVX512): Likewise.  (WRAPPER_IMPL_AVX512_ff): Likewise.  (WRAPPER_IMPL_AVX512_fFF): Likewise.
* Fix hurd build | Samuel Thibault | 2016-03-16 | 1 | -1/+1
    * sysdeps/mach/hurd/openat.c (__openat): Add missing ellipsis.
    * resolv/gai_sigqueue.c (__gai_sigqueue): Add missing internal_function
    qualifier.
    * rt/aio_sigqueue.c (__aio_sigqueue): Add missing attribute_hidden
    internal_function qualifiers.
* Fix building glibc master with NDEBUG and --with-cpu. | Carlos O'Donell | 2016-03-15 | 2 | -1/+2
    When building on i686, x86_64, and arm with NDEBUG or --with-cpu, there are
    various variables and functions which are unused based on these settings.
    This patch marks all such variables with __attribute__((unused)) to avoid
    the compiler warnings when building with the aforementioned options.
* Remove __ASSUME_PPOLL. | Joseph Myers | 2016-03-15 | 2 | -29/+1
    With current kernel version requirements, the ppoll Linux syscall can be
    assumed to be present on all architectures; this patch removes the
    __ASSUME_PPOLL macro and conditionals on it and on whether __NR_ppoll is
    defined.  (Note that the same can't yet be done for pselect, because
    MicroBlaze only wired that up in the syscall table in 3.15.)

    Tested for x86_64 and x86 that installed stripped shared libraries are
    unchanged by the patch.

    * sysdeps/unix/sysv/linux/kernel-features.h (__ASSUME_PPOLL): Remove
    macro.
    * sysdeps/unix/sysv/linux/ppoll.c: Do not include <kernel-features.h>.
    [__NR_ppoll]: Make code unconditional.
    [!__ASSUME_PPOLL]: Remove conditional code.
* Adjust kernel-features.h defaults for socket syscalls. | Joseph Myers | 2016-03-15 | 17 | -174/+37
    This patch adjusts the defaults for kernel-features.h macros relating to
    availability of accept4, recvmmsg and sendmmsg.  It is not intended to
    affect which macros end up getting defined in any configuration.

    At present, all architectures with syscalls for those functions need to
    define __ASSUME_*_SYSCALL macros; in particular, any new architecture needs
    its own kernel-features.h file for that purpose, though it may not
    otherwise need such a header.  Those macros are then used together with
    __ASSUME_SOCKETCALL to define macros for whether the functions in question
    are available.

    This patch changes the defaults so that the syscalls are assumed to be
    available by default with recent-enough kernels, and it is the
    responsibility of architecture headers to undefine the macros if they are
    unavailable in supported kernels at least as recent as the version where
    the architecture-independent functionality was introduced.  The
    __ASSUME_<function> macros are defaulted similarly instead of being defined
    based on other macros (defining based on other macros would no longer work
    because the #undefs appear after the generic header is included), so where
    the syscall being unavailable means the function is unavailable this means
    the architecture header has to undefine the __ASSUME_<function> macro; this
    only affects __ASSUME_ACCEPT4 for ia64, as other cases where the syscalls
    were added late enough to be relevant with current kernel version
    requirements are all on socketcall architectures.

    As a consequence, the AArch64 and Nios II kernel-features.h header files
    are removed, and others simplified.  When the minimum kernel version
    becomes 4.3 or later on all architectures, the syscalls in question can
    just be assumed unconditionally, permitting further simplification.

    Tested for x86_64, x86 and powerpc (that installed shared libraries are
    unchanged by the patch, and testsuite for x86_64 and x86).

    * sysdeps/unix/sysv/linux/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Define unconditionally.  (__ASSUME_ACCEPT4): Likewise.  [__LINUX_KERNEL_VERSION >= 0x020621] (__ASSUME_RECVMMSG_SYSCALL): Define.  [__LINUX_KERNEL_VERSION >= 0x020621] (__ASSUME_RECVMMSG): Likewise.  [__LINUX_KERNEL_VERSION >= 0x030000] (__ASSUME_SENDMMSG_SYSCALL): Likewise.  [__LINUX_KERNEL_VERSION >= 0x030000] (__ASSUME_SENDMMSG): Likewise.
    * sysdeps/unix/sysv/linux/aarch64/kernel-features.h: Remove file.
    * sysdeps/unix/sysv/linux/nios2/kernel-features.h: Likewise.
    * sysdeps/unix/sysv/linux/alpha/kernel-features.h (__ASSUME_RECVMMSG_SYSCALL): Do not define.  (__ASSUME_ACCEPT4_SYSCALL): Likewise.  (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/arm/kernel-features.h (__ASSUME_RECVMMSG_SYSCALL): Likewise.  (__ASSUME_ACCEPT4_SYSCALL): Likewise.  (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/hppa/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Likewise.  (__ASSUME_RECVMMSG_SYSCALL): Likewise.  (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/i386/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020621] (__ASSUME_RECVMMSG_SYSCALL): Likewise.  [__LINUX_KERNEL_VERSION >= 0x030000] (__ASSUME_SENDMMSG_SYSCALL): Likewise.  (__ASSUME_ACCEPT4_SYSCALL): Undefine if [__LINUX_KERNEL_VERSION < 0x040300] instead of defining if [__LINUX_KERNEL_VERSION >= 0x040300].
    * sysdeps/unix/sysv/linux/ia64/kernel-features.h (__ASSUME_RECVMMSG_SYSCALL): Do not define.  (__ASSUME_SENDMMSG_SYSCALL): Likewise.  (__ASSUME_ACCEPT4_SYSCALL): Undefine if [__LINUX_KERNEL_VERSION < 0x030300] instead of defining if [__LINUX_KERNEL_VERSION >= 0x030300].  [__LINUX_KERNEL_VERSION < 0x030300] (__ASSUME_ACCEPT4): Undefine.
    * sysdeps/unix/sysv/linux/m68k/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Undefine if [__LINUX_KERNEL_VERSION < 0x040300] instead of defining if [__LINUX_KERNEL_VERSION >= 0x040300].  (__ASSUME_RECVMMSG_SYSCALL): Likewise.  (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/microblaze/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Do not define.  (__ASSUME_RECVMMSG_SYSCALL): Likewise.  (__ASSUME_SENDMMSG_SYSCALL): Undefine if [__LINUX_KERNEL_VERSION < 0x030300] instead of defining if [__LINUX_KERNEL_VERSION >= 0x030300].
    * sysdeps/unix/sysv/linux/mips/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Do not define.  (__ASSUME_RECVMMSG_SYSCALL): Likewise.  (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/powerpc/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Likewise.  (__ASSUME_RECVMMSG_SYSCALL): Likewise.  (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/s390/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Undefine if [__LINUX_KERNEL_VERSION < 0x040300] instead of defining if [__LINUX_KERNEL_VERSION >= 0x040300].  (__ASSUME_RECVMMSG_SYSCALL): Likewise.  (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/sh/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Do not define.  (__ASSUME_RECVMMSG_SYSCALL): Likewise.  (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/sparc/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Likewise.  (__ASSUME_RECVMMSG_SYSCALL): Likewise.  (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/tile/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Likewise.  (__ASSUME_RECVMMSG_SYSCALL): Likewise.  (__ASSUME_SENDMMSG_SYSCALL): Likewise.
    * sysdeps/unix/sysv/linux/x86_64/kernel-features.h (__ASSUME_ACCEPT4_SYSCALL): Likewise.  [__LINUX_KERNEL_VERSION >= 0x020621] (__ASSUME_RECVMMSG_SYSCALL): Likewise.  [__LINUX_KERNEL_VERSION >= 0x030000] (__ASSUME_SENDMMSG_SYSCALL): Likewise.
* Update glibc headers for Linux 4.5. | Joseph Myers | 2016-03-14 | 3 | -0/+4
    This patch updates the glibc headers with the defines MADV_FREE,
    IPV6_HDRINCL and EPOLLEXCLUSIVE that are added in Linux 4.5.

    Tested for x86_64 and x86 (testsuite, and that installed stripped shared
    libraries are unchanged by the patch).

    * bits/mman-linux.h [__USE_MISC] (MADV_FREE): New macro.
    * sysdeps/unix/sysv/linux/hppa/bits/mman.h [__USE_MISC] (MADV_FREE):
    Likewise.
    * sysdeps/unix/sysv/linux/bits/in.h (IPV6_HDRINCL): Likewise.
    * sysdeps/unix/sysv/linux/sys/epoll.h (enum EPOLL_EVENTS): Add
    EPOLLEXCLUSIVE.
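    A small usage sketch for one of the new constants; it assumes a glibc built
    against these headers, and note that a pre-4.5 kernel will simply reject
    the advice value at run time.

      #include <errno.h>
      #include <stdio.h>
      #include <sys/mman.h>

      int
      main (void)
      {
        size_t len = 1 << 20;
        void *p = mmap (NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
          return 1;

      #ifdef MADV_FREE
        /* Tell the kernel it may reclaim these pages lazily; the mapping
           itself stays valid and reusable.  */
        if (madvise (p, len, MADV_FREE) != 0)
          printf ("MADV_FREE not supported here (errno %d)\n", errno);
      #endif

        munmap (p, len);
        return 0;
      }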
* Fix flag test in waitid compatibility layer | Samuel Thibault | 2016-03-13 | 1 | -1/+1
    * sysdeps/posix/waitid.c (OUR_WAITID): Test against WSTOPPED instead of
    WUNTRACED.
* powerpc: Rearrange cfi_offset calls | Rajalakshmi Srinivasaraghavan | 2016-03-11 | 6 | -34/+34
    This patch rearranges cfi_offset() calls after the last store so as to
    avoid extra DW_CFA_advance opcodes in unwind information.
* Add _arch_/_cpu_ to index_*/bit_* in x86 cpu-features.h | H.J. Lu | 2016-03-10 | 3 | -151/+159
    index_* and bit_* macros are used to access the cpuid and feature arrays of
    struct cpu_features.  It is very easy to use bits and indices of the cpuid
    array on the feature array, especially in assembly code.  For example,
    sysdeps/i386/i686/multiarch/bcopy.S has

        HAS_CPU_FEATURE (Fast_Rep_String)

    which should be

        HAS_ARCH_FEATURE (Fast_Rep_String)

    We change index_* and bit_* to index_cpu_*/index_arch_* and
    bit_cpu_*/bit_arch_* so that we can catch such errors at build time.

    [BZ #19762]
    * sysdeps/unix/sysv/linux/x86_64/64/dl-librecon.h (EXTRA_LD_ENVVARS): Add
    _arch_ to index_*/bit_*.
    * sysdeps/x86/cpu-features.c (init_cpu_features): Likewise.
    * sysdeps/x86/cpu-features.h (bit_*): Renamed to ...  (bit_arch_*): This
    for feature array.  (bit_*): Renamed to ...  (bit_cpu_*): This for cpu
    array.  (index_*): Renamed to ...  (index_arch_*): This for feature array.
    (index_*): Renamed to ...  (index_cpu_*): This for cpu array.
    [__ASSEMBLER__] (HAS_FEATURE): Add and use field.
    [__ASSEMBLER__] (HAS_CPU_FEATURE)): Pass cpu to HAS_FEATURE.
    [__ASSEMBLER__] (HAS_ARCH_FEATURE)): Pass arch to HAS_FEATURE.
    [!__ASSEMBLER__] (HAS_CPU_FEATURE): Replace index_##name and bit_##name
    with index_cpu_##name and bit_cpu_##name.
    [!__ASSEMBLER__] (HAS_ARCH_FEATURE): Replace index_##name and bit_##name
    with index_arch_##name and bit_arch_##name.
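    A toy sketch of why the rename catches the bcopy.S mistake at build time;
    the macro names follow the commit, but the arrays and bit values below are
    invented for illustration and are not the cpu-features.h definitions.

      /* With separate namespaces, a cpu bit only exists under the cpu_
         prefix and an arch bit only under the arch_ prefix.  */
      #define index_cpu_SSSE3             0
      #define bit_cpu_SSSE3               (1u << 9)    /* illustrative */
      #define index_arch_Fast_Rep_String  0
      #define bit_arch_Fast_Rep_String    (1u << 4)    /* illustrative */

      extern unsigned int cpuid_words[];
      extern unsigned int feature_words[];

      #define HAS_CPU_FEATURE(name) \
        ((cpuid_words[index_cpu_##name] & bit_cpu_##name) != 0)
      #define HAS_ARCH_FEATURE(name) \
        ((feature_words[index_arch_##name] & bit_arch_##name) != 0)

      /* HAS_CPU_FEATURE (Fast_Rep_String) now expands to the undeclared
         identifiers index_cpu_Fast_Rep_String / bit_cpu_Fast_Rep_String and
         fails to compile, instead of silently testing the wrong array.  */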
* mips: terminate the FDE before the return trampoline in makecontext | Aurelien Jarno | 2016-03-09 | 1 | -0/+7
    In makecontext the FDE needs to be terminated before the return trampoline,
    otherwise backtrace called within a context created by makecontext yields
    an infinite backtrace.

    This bug has been present for a long time; stdlib/tst-makecontext did not
    fail until recent commit e535ce25.  Tested on mips-linux-gnu and
    mips64el-linux-gnuabi64, no regressions.

    This fixes stdlib/tst-makecontext on MIPS.

    Changelog:
    [BZ #19792]
    * sysdeps/unix/sysv/linux/mips/makecontext.S (__makecontext): Terminate
    FDE before return label.
* Fix ldbl-128ibm nearbyintl in non-default rounding modes (bug 19790). | Joseph Myers | 2016-03-09 | 3 | -109/+18
    The ldbl-128ibm implementation of nearbyintl uses logic that only works in
    round-to-nearest mode.  This contrasts with rintl, which works in all
    rounding modes.

    Now, arguably nearbyintl could simply be aliased to rintl, given that
    spurious "inexact" is generally allowed for ldbl-128ibm, even for the
    underlying arithmetic operations.  But given that the only point of
    nearbyintl is to avoid "inexact", this patch follows the more conservative
    approach of adding conditionals to the rintl implementation to make it
    suitable for use to implement nearbyintl, then builds it for nearbyintl
    with USE_AS_NEARBYINTL defined.

    The test test-nearbyint-except-2 shows up issues when traps on "inexact"
    are enabled, which turn out to be problems with the powerpc fenv_private.h
    implementation (two functions that should disable exception traps
    potentially failing to do so in some cases); this patch duly fixes that as
    well (I don't see any other existing cases where this would be
    user-visible; there isn't much use of *_NOEX, *hold* etc. in libm that
    requires exceptions to be discarded and not trapped on).

    Tested for powerpc.

    [BZ #19790]
    * sysdeps/ieee754/ldbl-128ibm/s_rintl.c [USE_AS_NEARBYINTL] (rintl):
    Define as macro.  [USE_AS_NEARBYINTL] (__rintl): Likewise.
    (__rintl) [USE_AS_NEARBYINTL]: Use SET_RESTORE_ROUND_NOEX instead of
    fesetround.  Ensure results are evaluated before end of scope.
    * sysdeps/ieee754/ldbl-128ibm/s_nearbyintl.c: Define USE_AS_NEARBYINTL
    and include s_rintl.c.
    * sysdeps/powerpc/fpu/fenv_private.h (libc_feholdsetround_ppc): Disable
    exception traps in new environment.
    (libc_feholdsetround_ppc_ctx): Likewise.
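    The "build one source twice" pattern described in the ChangeLog looks
    roughly like the sketch below; the exact include form in the real
    s_nearbyintl.c may differ, so treat this as a schematic rather than the
    glibc source.

      /* s_nearbyintl.c: compile the rintl implementation a second time with
         the "must not raise inexact" behaviour switched on.  */
      #define USE_AS_NEARBYINTL
      #include "s_rintl.c"

    Inside s_rintl.c the USE_AS_NEARBYINTL conditionals then rename the entry
    point and switch the rounding-mode save/restore to the no-exception
    variant (SET_RESTORE_ROUND_NOEX), which is what keeps the "inexact" flag
    from being left set.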
* Fix tst-audit10 build when -mavx512f is not supported. | Roland McGrath | 2016-03-08 | 2 | -3/+4
* Define _HAVE_STRING_ARCH_mempcpy to 1 for x86 | H.J. Lu | 2016-03-08 | 1 | -0/+3
    Since x86 has an optimized mempcpy and GCC can inline mempcpy on x86,
    define _HAVE_STRING_ARCH_mempcpy to 1 for x86.

    [BZ #19759]
    * sysdeps/x86/bits/string.h (_HAVE_STRING_ARCH_mempcpy): New.
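    For context, a small user-level example of why mempcpy is worth keeping as
    a first-class call (this is ordinary application code, not glibc
    internals): it returns the end of the copied region, which makes chained
    copies cheap, and per the commit x86 can inline or dispatch it directly
    rather than rewriting it in terms of memcpy.

      #define _GNU_SOURCE
      #include <stdio.h>
      #include <string.h>

      int
      main (void)
      {
        char buf[32];
        char *p = buf;
        p = mempcpy (p, "sys", 3);    /* returns buf + 3 */
        p = mempcpy (p, "deps", 4);   /* returns buf + 7 */
        *p = '\0';
        puts (buf);                   /* prints "sysdeps" */
        return 0;
      }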
* powerpc: Remove uses of operand modifier (%s) in inline asm | Gabriel F. T. Gomes | 2016-03-08 | 1 | -6/+10
    The operand modifier %s on powerpc is an undocumented internal
    implementation detail of GCC.  Besides that, the GCC community wants to
    remove it.  This patch rewrites the expressions that use this modifier
    with logically equivalent expressions that don't require it.

    Explanation for the substitution:

    The %s modifier takes an immediate operand and prints 32 minus that
    immediate.  Thus, in the previous code, the expression resulted in

        32 - __builtin_ffs(e)

    where e was guaranteed to have exactly a single bit set, by the following
    expressions:

        (e & (e-1) == 0) : e has at most one bit set.
        (e != 0)         : e is not zero, thus it has at least one bit set.

    Since we guarantee that there is exactly one bit set, the following
    statement is true:

        32 - __builtin_ffs(e) == __builtin_clz(e)

    Thus, we can replace __builtin_ffs with __builtin_clz and remove the %s
    operand modifier.
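    The identity is easy to check exhaustively for 32-bit single-bit values;
    here is a standalone test (not glibc code) that exercises every such e
    with the GCC builtins named above.

      #include <assert.h>
      #include <stdio.h>

      int
      main (void)
      {
        for (int k = 0; k < 32; k++)
          {
            unsigned int e = 1u << k;                /* exactly one bit set */
            assert ((e & (e - 1)) == 0 && e != 0);   /* the cited guarantees */
            assert (32 - __builtin_ffs ((int) e) == __builtin_clz (e));
          }
        puts ("32 - ffs(e) == clz(e) holds for every single-bit e");
        return 0;
      }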
* powerpc: Fix dl-procinfo HWCAP | Carlos Eduardo Seo | 2016-03-08 | 2 | -8/+6
    HWCAP-related code should have been updated when the 32 bits of HWCAP were
    used.  This patch updates the code in dl-procinfo.h to loop through all 32
    bits in HWCAP and updates _dl_powerpc_cap_flags accordingly.
* Fix ldbl-128ibm remainderl equality test for zero low part (bug 19677). | Joseph Myers | 2016-03-08 | 6 | -60/+144
    The ldbl-128ibm implementation of remainderl has logic resulting in
    incorrect tests for equality of the absolute values of the arguments in
    the case of zero low parts.  If the low parts are both zero but with
    different signs, this can wrongly cause equal arguments to be treated as
    different, resulting in turn in incorrect signs of zero results in
    non-default rounding modes, arising from the subtractions done when the
    arguments are not equal.

    This patch fixes the logic to convert -0 low parts into +0 before the
    comparison (remquo already has separate logic to deal with signs of zero
    results, so doesn't need such a change).  Tests are added for remainderl
    and remquol similar to that for fmodl, and based on a refactoring of it,
    since the bug depends on low parts which should not be relied upon in
    tests not setting the representation explicitly (although in fact the bug
    shows up in test-ldouble with current GCC).

    Tested for powerpc.

    [BZ #19677]
    * sysdeps/ieee754/ldbl-128ibm/e_remainderl.c (__ieee754_remainderl): Put
    zero low parts in canonical form.
    * sysdeps/ieee754/ldbl-128ibm/test-fmodrem-ldbl-128ibm.c: New file.
    Based on sysdeps/ieee754/ldbl-128ibm/test-fmodl-ldbl-128ibm.c.
    * sysdeps/ieee754/ldbl-128ibm/test-fmodl-ldbl-128ibm.c: Replace with a
    wrapper around test-fmodrem-ldbl-128ibm.c.
    * sysdeps/ieee754/ldbl-128ibm/test-remainderl-ldbl-128ibm.c: New file.
    * sysdeps/ieee754/ldbl-128ibm/test-remquol-ldbl-128ibm.c: Likewise.
    * sysdeps/ieee754/ldbl-128ibm/Makefile (tests): Add
    test-remainderl-ldbl-128ibm and test-remquol-ldbl-128ibm.
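    The canonicalisation step amounts to the tiny helper below; it is an
    illustration of the idea (an IBM long double is a pair of doubles, and the
    helper name is invented here), not the code in e_remainderl.c.

      /* Force a -0.0 low part to +0.0 before comparing the halves of the two
         arguments, so the sign of a zero cannot make equal magnitudes look
         different.  */
      void
      canonicalize_zero_lo (double *lo)
      {
        if (*lo == 0.0)    /* true for both +0.0 and -0.0 */
          *lo = 0.0;       /* ...but this always stores +0.0 */
      }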
* tst-audit4, tst-audit10: Compile AVX/AVX-512 code separately [BZ #19269] | Florian Weimer | 2016-03-07 | 5 | -55/+112
    This ensures that GCC will not use unsupported instructions before the
    run-time check to ensure support.
* posix: New Linux posix_spawn{p} implementation | Adhemerval Zanella | 2016-03-07 | 22 | -1/+437
    This patch implements a new posix_spawn{p} implementation for Linux.  The
    main difference is that it uses the clone syscall directly with the
    CLONE_VM and CLONE_VFORK flags and a directly allocated stack.  The new
    stack and start function solve most of the vfork limitations (possible
    parent clobbering due to stack spilling).

    The remaining issues are related to signal handling:

    1. No signal handlers must run in the child context, to avoid corrupting
       the parent's state.
    2. The child must synchronize with the parent to enforce stack
       deallocation and to report possible execv failures.

    The first one is solved by blocking all signals in the child, even the
    NPTL-internal ones (SIGCANCEL and SIGSETXID).  The second issue is handled
    by a stack allocation in the parent and a synchronization using a pipe or
    waitpid (in case of error).  The pipe has the advantage of allowing the
    child to signal an exec error (checked with the new tst-spawn2 test).

    There is an inherent race condition in pipe2 usage for architectures that
    do not support the syscall directly.  In such cases a pipe plus fcntl is
    used instead, and it may lead to a file descriptor leak in the parent (as
    described by the fcntl documentation).

    The child process stack is allocated with mmap using the MAP_STACK flag
    and the default architecture stack size.  Although it is slower than using
    a stack buffer from the parent, it allows some slack for the compatibility
    code to run scripts with no shebang (which may use a buffer with a size
    depending on the argument count).

    Performance should be similar to the vfork default posix implementation
    and way faster than the fork path (vfork on most Linux ports is basically
    clone with CLONE_VM plus CLONE_VFORK).  The only difference is the
    syscalls required for the stack allocation/deallocation.

    It fixes BZ#10354, BZ#14750, and BZ#18433.

    Tested on i386, x86_64, powerpc64le, and aarch64.  A condensed sketch of
    the mechanism follows the ChangeLog below.

    [BZ #14750] [BZ #10354] [BZ #18433]
    * include/sched.h (__clone): Add hidden prototype.  (__clone2): Likewise.
    * include/unistd.h (__dup): Likewise.
    * posix/Makefile (tests): Add tst-spawn2.
    * posix/tst-spawn2.c: New file.
    * sysdeps/posix/dup.c (__dup): Add hidden definition.
    * sysdeps/unix/sysv/linux/aarch64/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/alpha/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/arm/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/hppa/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/i386/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/ia64/clone2.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/m68k/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/microblaze/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/mips/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/nios2/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/powerpc/powerpc32/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/s390/s390-32/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/s390/s390-64/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/sh/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/sparc/sparc32/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/sparc/sparc64/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/tile/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/x86_64/clone.S (__clone): Likewise.
    * sysdeps/unix/sysv/linux/nptl-signals.h (____nptl_is_internal_signal):
    New function.
* sysdeps/unix/sysv/linux/spawni.c: New file.
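    The sketch promised above: a condensed, hedged rendering of the mechanism
    (clone with CLONE_VM|CLONE_VFORK on a separately mmap'd stack, plus a
    CLOEXEC pipe to report exec failure).  It is not the spawni.c code; it
    skips signal blocking, attribute handling and the pipe2 fallback, and the
    helper names are invented.

      #define _GNU_SOURCE
      #include <errno.h>
      #include <fcntl.h>
      #include <sched.h>
      #include <sys/mman.h>
      #include <sys/wait.h>
      #include <unistd.h>

      struct spawn_args { char *const *argv; int err_pipe; };

      static int
      spawn_child (void *arg)
      {
        struct spawn_args *a = arg;
        execvp (a->argv[0], a->argv);
        /* Only reached on failure: report errno through the pipe.  */
        int err = errno;
        while (write (a->err_pipe, &err, sizeof err) < 0 && errno == EINTR)
          continue;
        _exit (127);
      }

      int
      spawn_sketch (pid_t *pid, char *const argv[])
      {
        int p[2];
        if (pipe2 (p, O_CLOEXEC) != 0)
          return errno;

        size_t stack_size = 256 * 1024;
        void *stack = mmap (NULL, stack_size, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
        if (stack == MAP_FAILED)
          {
            int err = errno;
            close (p[0]);
            close (p[1]);
            return err;
          }

        struct spawn_args a = { argv, p[1] };
        /* CLONE_VM shares the address space; CLONE_VFORK suspends the parent
           until the child execs or exits, so `a` stays valid.  */
        pid_t new_pid = clone (spawn_child, (char *) stack + stack_size,
                               CLONE_VM | CLONE_VFORK | SIGCHLD, &a);
        close (p[1]);

        int err = 0;
        if (new_pid > 0)
          {
            if (read (p[0], &err, sizeof err) == sizeof err)
              waitpid (new_pid, NULL, 0);   /* exec failed: reap the child */
            else
              err = 0;                      /* EOF: the exec succeeded */
          }
        else
          err = errno;

        close (p[0]);
        munmap (stack, stack_size);
        if (err == 0 && pid != NULL)
          *pid = new_pid;
        return err;
      }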
* Group AVX512 functions in .text.avx512 section | H.J. Lu | 2016-03-06 | 2 | -2/+2
    * sysdeps/x86_64/multiarch/memcpy-avx512-no-vzeroupper.S: Replace .text
    with .text.avx512.
    * sysdeps/x86_64/multiarch/memset-avx512-no-vzeroupper.S: Likewise.
* Add placeholder libnsl.abilist and libutil.abilist files | Aurelien Jarno | 2016-03-07 | 2 | -0/+0
    Changelog:
    * sysdeps/generic/libnsl.abilist: New file.
    * sysdeps/generic/libutil.abilist: New file.
* Use HAS_ARCH_FEATURE with Fast_Rep_String | H.J. Lu | 2016-03-06 | 9 | -9/+9
    HAS_ARCH_FEATURE, not HAS_CPU_FEATURE, should be used with
    Fast_Rep_String.

    [BZ #19762]
    * sysdeps/i386/i686/multiarch/bcopy.S (bcopy): Use HAS_ARCH_FEATURE with
    Fast_Rep_String.
    * sysdeps/i386/i686/multiarch/bzero.S (__bzero): Likewise.
    * sysdeps/i386/i686/multiarch/memcpy.S (memcpy): Likewise.
    * sysdeps/i386/i686/multiarch/memcpy_chk.S (__memcpy_chk): Likewise.
    * sysdeps/i386/i686/multiarch/memmove_chk.S (__memmove_chk): Likewise.
    * sysdeps/i386/i686/multiarch/mempcpy.S (__mempcpy): Likewise.
    * sysdeps/i386/i686/multiarch/mempcpy_chk.S (__mempcpy_chk): Likewise.
    * sysdeps/i386/i686/multiarch/memset.S (memset): Likewise.
    * sysdeps/i386/i686/multiarch/memset_chk.S (__memset_chk): Likewise.
* Replace PREINIT_FUNCTION@PLT with *%rax in call | H.J. Lu | 2016-03-04 | 1 | -1/+1
    Since we have loaded the address of PREINIT_FUNCTION into %rax, we can
    avoid an extra branch to the PLT slot.

    [BZ #19745]
    * sysdeps/x86_64/crti.S (_init): Replace PREINIT_FUNCTION@PLT with *%rax
    in call.
* Replace @PLT with @GOTPCREL(%rip) in call | H.J. Lu | 2016-03-04 | 1 | -2/+4
    Since __libc_start_main is called very early, lazy binding isn't relevant
    here.  Use an indirect branch via the GOT to avoid an extra branch to the
    PLT slot.

    [BZ #19745]
    * sysdeps/x86_64/start.S (_start): Replace __libc_start_main@PLT with
    *__libc_start_main@GOTPCREL(%rip) in call.
* Add a comment in sysdeps/x86_64/Makefile | H.J. Lu | 2016-03-04 | 1 | -0/+3
    Mention recursive calls when ENTRY is used in _mcount.S.

    * sysdeps/x86_64/Makefile (sysdep_noprof): Add a comment.
* x86-64: Fix memcpy IFUNC selection | H.J. Lu | 2016-03-04 | 1 | -13/+14
    Check Fast_Unaligned_Load, instead of Slow_BSF, and also check for
    Fast_Copy_Backward to enable __memcpy_ssse3_back.  The existing selection
    order is updated to the following:

    1. __memcpy_avx_unaligned if the AVX_Fast_Unaligned_Load bit is set.
    2. __memcpy_sse2_unaligned if the Fast_Unaligned_Load bit is set.
    3. __memcpy_sse2 if SSSE3 isn't available.
    4. __memcpy_ssse3_back if the Fast_Copy_Backward bit is set.
    5. __memcpy_ssse3

    [BZ #18880]
    * sysdeps/x86_64/multiarch/memcpy.S: Check Fast_Unaligned_Load, instead
    of Slow_BSF, and also check for Fast_Copy_Backward to enable
    __memcpy_ssse3_back.
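    A C rendering of that selection order, for illustration only; the real
    selector is the assembly IFUNC in multiarch/memcpy.S, and the feature
    tests are stubbed out here so the sketch is self-contained.

      #include <stddef.h>

      /* Stubs standing in for cpu-features.h; in glibc these are macros over
         the cpu_features bit arrays.  */
      static int has_AVX_Fast_Unaligned_Load, has_Fast_Unaligned_Load,
                 has_SSSE3, has_Fast_Copy_Backward;
      #define HAS_ARCH_FEATURE(name) has_##name
      #define HAS_CPU_FEATURE(name)  has_##name

      typedef void *(*memcpy_t) (void *, const void *, size_t);

      extern void *__memcpy_avx_unaligned (void *, const void *, size_t);
      extern void *__memcpy_sse2_unaligned (void *, const void *, size_t);
      extern void *__memcpy_sse2 (void *, const void *, size_t);
      extern void *__memcpy_ssse3_back (void *, const void *, size_t);
      extern void *__memcpy_ssse3 (void *, const void *, size_t);

      memcpy_t
      select_memcpy (void)
      {
        if (HAS_ARCH_FEATURE (AVX_Fast_Unaligned_Load))
          return __memcpy_avx_unaligned;        /* 1 */
        if (HAS_ARCH_FEATURE (Fast_Unaligned_Load))
          return __memcpy_sse2_unaligned;       /* 2 */
        if (!HAS_CPU_FEATURE (SSSE3))
          return __memcpy_sse2;                 /* 3 */
        if (HAS_ARCH_FEATURE (Fast_Copy_Backward))
          return __memcpy_ssse3_back;           /* 4 */
        return __memcpy_ssse3;                  /* 5 */
      }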
* Or bit_Prefer_MAP_32BIT_EXEC in EXTRA_LD_ENVVARS | H.J. Lu | 2016-03-03 | 1 | -1/+1
    We should turn on bit_Prefer_MAP_32BIT_EXEC in EXTRA_LD_ENVVARS without
    overriding other bits.

    [BZ #19758]
    * sysdeps/unix/sysv/linux/x86_64/64/dl-librecon.h (EXTRA_LD_ENVVARS): Or
    bit_Prefer_MAP_32BIT_EXEC.
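    The one-character idea behind the fix, in miniature; the array name and
    bit value here are placeholders, not the dl-librecon.h code.

      #define bit_Prefer_MAP_32BIT_EXEC  (1u << 7)   /* value illustrative */

      unsigned int feature[1];

      void
      broken (void)  { feature[0]  = bit_Prefer_MAP_32BIT_EXEC; }  /* clobbers
                                                     every previously set bit */
      void
      fixed (void)   { feature[0] |= bit_Prefer_MAP_32BIT_EXEC; }  /* just ORs
                                                     the new bit in */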
* 2016-03-03 Paul Pluzhnikov <ppluzhnikov@google.com> | Paul Pluzhnikov | 2016-03-03 | 1 | -13/+42
    [BZ #19490]
    * sysdeps/x86_64/_mcount.S (_mcount): Add unwind descriptor.
    (__fentry__): Likewise.
* Copy x86_64 _mcount.op from _mcount.o | H.J. Lu | 2016-03-03 | 1 | -0/+1
    No need to compile x86_64 _mcount.S with -pg.  We can just copy the normal
    static object.

    * gmon/Makefile (noprof): Add $(sysdep_noprof).
    * sysdeps/x86_64/Makefile (sysdep_noprof): Add _mcount.
* Call x86-64 __mcount_internal/__sigjmp_save directly | H.J. Lu | 2016-03-01 | 2 | -12/+0
    Since __mcount_internal and __sigjmp_save are internal to x86-64 libc.so:

        3532: 0000000000104530   289 FUNC  LOCAL  DEFAULT  13 __mcount_internal
        3391: 0000000000034170    38 FUNC  LOCAL  DEFAULT  13 __sigjmp_save

    they can be called directly without the PLT.

    * sysdeps/x86_64/_mcount.S (C_LABEL(_mcount)): Call __mcount_internal
    directly.
    (C_LABEL(__fentry__)): Likewise.
    * sysdeps/x86_64/setjmp.S (__sigsetjmp): Call __sigjmp_save directly.
* Call x86-64 __setcontext directly | H.J. Lu | 2016-03-01 | 1 | -1/+1
    Since x86-64 __start_context calls the internal __setcontext:

        5089: 00000000000417e0   145 FUNC  LOCAL  DEFAULT  13 __setcontext

    it should call __setcontext directly.

    * sysdeps/unix/sysv/linux/x86_64/__start_context.S (__start_context):
    Call __setcontext directly.
* Remove kernel-features.h conditionals on pre-3.2 kernels. | Joseph Myers | 2016-02-26 | 11 | -157/+59
    This patch follows up on the increase in minimum kernel version by removing
    conditionals in non-x86, non-x86_64 kernel-features.h headers that are now
    constant for all supported kernel versions.

    * sysdeps/unix/sysv/linux/alpha/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020621]: Make code unconditional.  [__LINUX_KERNEL_VERSION >= 0x030200]: Likewise.  [__LINUX_KERNEL_VERSION < 0x020621]: Remove conditional code.
    * sysdeps/unix/sysv/linux/arm/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020621]: Make code unconditional.  [__LINUX_KERNEL_VERSION >= 0x020624]: Likewise.  [__LINUX_KERNEL_VERSION >= 0x030000]: Likewise.
    * sysdeps/unix/sysv/linux/hppa/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020622]: Likewise.  [__LINUX_KERNEL_VERSION >= 0x030100]: Likewise.  [__LINUX_KERNEL_VERSION < 0x020625]: Remove conditional code.
    * sysdeps/unix/sysv/linux/ia64/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020621]: Make code unconditional.  [__LINUX_KERNEL_VERSION >= 0x030000]: Likewise.
    * sysdeps/unix/sysv/linux/m68k/kernel-features.h [__LINUX_KERNEL_VERSION < 0x030000]: Remove conditional code.
    * sysdeps/unix/sysv/linux/microblaze/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020621]: Make code unconditional.  [__LINUX_KERNEL_VERSION < 0x020621]: Remove conditional code.  [__LINUX_KERNEL_VERSION < 0x020625]: Likewise.
    * sysdeps/unix/sysv/linux/mips/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020621]: Make code unconditional.  [__LINUX_KERNEL_VERSION >= 0x030100]: Likewise.  [_MIPS_SIM == _ABIN32 && __LINUX_KERNEL_VERSION < 0x020623]: Remove conditional code.
    * sysdeps/unix/sysv/linux/powerpc/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020625]: Make code unconditional.  [__LINUX_KERNEL_VERSION >= 0x030000]: Likewise.
    * sysdeps/unix/sysv/linux/sh/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020625]: Likewise.  [__LINUX_KERNEL_VERSION >= 0x030000]: Likewise.  [__LINUX_KERNEL_VERSION < 0x020625]: Remove conditional code.
    * sysdeps/unix/sysv/linux/sparc/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x020621]: Make code unconditional.  [__LINUX_KERNEL_VERSION >= 0x030000]: Likewise.
    * sysdeps/unix/sysv/linux/tile/kernel-features.h [__LINUX_KERNEL_VERSION >= 0x030000]: Likewise.
* Remove linux/fanotify.h configure test. | Joseph Myers | 2016-02-24 | 3 | -62/+0
    Now we require Linux 3.2 or later kernel headers everywhere, the configure
    test for <linux/fanotify.h> is obsolete; this patch removes it.

    Tested for x86_64.

    * sysdeps/unix/sysv/linux/configure.ac (linux/fanotify.h): Do not test
    for header.
    * sysdeps/unix/sysv/linux/configure: Regenerated.
    * config.h.in (HAVE_LINUX_FANOTIFY_H): Remove #undef.
    * sysdeps/unix/sysv/linux/tst-fanotify.c [!HAVE_LINUX_FANOTIFY_H]: Remove
    conditional code.  [HAVE_LINUX_FANOTIFY_H]: Make code unconditional.
* Require Linux 3.2 except on x86 / x86_64, 3.2 headers everywhere. | Joseph Myers | 2016-02-24 | 6 | -12/+20
    In <https://sourceware.org/ml/libc-alpha/2016-01/msg00885.html> I proposed
    a minimum Linux kernel version of 3.2 for glibc 2.24, since Linux 2.6.32
    has reached EOL.

    In the discussion in February, some concerns were expressed about
    compatibility with OpenVZ containers.  It's not clear that these are real
    issues, given OpenVZ backporting kernel features and faking the kernel
    version for guest software, as discussed in
    <https://sourceware.org/ml/libc-alpha/2016-02/msg00278.html>.  It's also
    not clear that supporting running GNU/Linux distributions from late 2016
    (at the earliest) on a kernel series from 2009 is a sensible expectation.
    However, as an interim step, this patch increases the requirement
    everywhere except x86 / x86_64 (since the controversy was only about those
    architectures); the special caveats and settings can easily be removed
    later when we're ready to increase the requirements on x86 / x86_64 (and
    if someone would like to raise the issue on LWN as suggested in the
    previous discussion, that would be welcome).

    3.2 kernel headers are required everywhere by this patch.  (x32 already
    requires 3.4 or later, so is unaffected by this patch.)

    As usual for such a change, this patch only changes the configure scripts
    and associated documentation.  The intent is to follow up with removal of
    dead __LINUX_KERNEL_VERSION conditionals.  Each __ASSUME_* or other macro
    that becomes dead can then be removed independently.

    Tested for x86_64 and x86.

    * sysdeps/unix/sysv/linux/configure.ac (LIBC_LINUX_VERSION): Define to
    3.2.0.  (arch_minimum_kernel): Likewise.
    * sysdeps/unix/sysv/linux/configure: Regenerated.
    * sysdeps/unix/sysv/linux/i386/configure.ac (arch_minimum_kernel): Define
    to 2.6.32.
    * sysdeps/unix/sysv/linux/i386/configure: Regenerated.
    * sysdeps/unix/sysv/linux/x86_64/64/configure.ac (arch_minimum_kernel):
    Define to 2.6.32.
    * sysdeps/unix/sysv/linux/x86_64/64/configure: Regenerated.
    * README: Document Linux 3.2 requirement.
    * manual/install.texi (Linux): Document Linux 3.2 headers requirement.
    * INSTALL: Regenerated.
* Add fts64_* to sysdeps/arm/nacl/libc.abilist | Roland McGrath | 2016-02-22 | 1 | -0/+6
* [x86_64] Set DL_RUNTIME_UNALIGNED_VEC_SIZE to 8 | H.J. Lu | 2016-02-19 | 2 | -11/+15
    Due to GCC bug

        https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58066

    __tls_get_addr may be called with 8-byte stack alignment.  Although this
    bug has been fixed in GCC 4.9.4, 5.3 and 6, we can't assume that the stack
    will always be 16-byte aligned.  Since SSE optimized memory/string
    functions with aligned SSE register load and store are used in the dynamic
    linker, we must set DL_RUNTIME_UNALIGNED_VEC_SIZE to 8 so that
    _dl_runtime_resolve_sse will align the stack before calling _dl_fixup:

    Dump of assembler code for function _dl_runtime_resolve_sse:
       0x00007ffff7deea90 <+0>:    push   %rbx
       0x00007ffff7deea91 <+1>:    mov    %rsp,%rbx
       0x00007ffff7deea94 <+4>:    and    $0xfffffffffffffff0,%rsp
                                          ^^^^^^^^^^^ Align stack to 16 bytes
       0x00007ffff7deea98 <+8>:    sub    $0x100,%rsp
       0x00007ffff7deea9f <+15>:   mov    %rax,0xc0(%rsp)
       0x00007ffff7deeaa7 <+23>:   mov    %rcx,0xc8(%rsp)
       0x00007ffff7deeaaf <+31>:   mov    %rdx,0xd0(%rsp)
       0x00007ffff7deeab7 <+39>:   mov    %rsi,0xd8(%rsp)
       0x00007ffff7deeabf <+47>:   mov    %rdi,0xe0(%rsp)
       0x00007ffff7deeac7 <+55>:   mov    %r8,0xe8(%rsp)
       0x00007ffff7deeacf <+63>:   mov    %r9,0xf0(%rsp)
       0x00007ffff7deead7 <+71>:   movaps %xmm0,(%rsp)
       0x00007ffff7deeadb <+75>:   movaps %xmm1,0x10(%rsp)
       0x00007ffff7deeae0 <+80>:   movaps %xmm2,0x20(%rsp)
       0x00007ffff7deeae5 <+85>:   movaps %xmm3,0x30(%rsp)
       0x00007ffff7deeaea <+90>:   movaps %xmm4,0x40(%rsp)
       0x00007ffff7deeaef <+95>:   movaps %xmm5,0x50(%rsp)
       0x00007ffff7deeaf4 <+100>:  movaps %xmm6,0x60(%rsp)
       0x00007ffff7deeaf9 <+105>:  movaps %xmm7,0x70(%rsp)

    [BZ #19679]
    * sysdeps/x86_64/dl-trampoline.S (DL_RUNIME_UNALIGNED_VEC_SIZE): Renamed
    to ...  (DL_RUNTIME_UNALIGNED_VEC_SIZE): This.  Set to 8.
    (DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
    (DL_RUNTIME_RESOLVE_REALIGN_STACK): This.  Updated.
    (DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
    (DL_RUNTIME_RESOLVE_REALIGN_STACK): This.
    * sysdeps/x86_64/dl-trampoline.h (DL_RUNIME_RESOLVE_REALIGN_STACK):
    Renamed to ...  (DL_RUNTIME_RESOLVE_REALIGN_STACK): This.