path: root/sysdeps
...
* Move most libmvec test contents from .c to .h files. (Joseph Myers, 2017-02-15; 16 files changed, -82/+222 lines)

  The libmvec tests put substantive, architecture-specific contents in .c files
  such as test-double-vlen4.c, which makes those files architecture-specific and
  causes issues for generating such files automatically when splitting up tests
  by function.

  This patch moves all the substantive contents to .h files, so the .c files
  only include the .h file and then libm-test.c (see the sketch after this
  entry).  This allows for automatic generation of per-function .c files in
  future.  The .h files in turn #include or #include_next the
  architecture-independent file and add the architecture-specific definitions
  to that.

  (Splitting by function should in fact allow the TEST_VECTOR_* macros to be
  replaced by sysdeps makefile information on which functions to test in each
  case, removing the need for gen-libm-have-vector-test.sh as well as removing
  the need for some of the architecture-specific headers.)

  Tested for x86_64.

  * sysdeps/x86_64/fpu/test-double-vlen2.c: Move most contents to, and
  include ...
  * sysdeps/x86_64/fpu/test-double-vlen2.h: ... here. New file.
  * sysdeps/x86_64/fpu/test-double-vlen4-avx2.c: Move most contents to, and
  include ...
  * sysdeps/x86_64/fpu/test-double-vlen4-avx2.h: ... here. New file.
  * sysdeps/x86_64/fpu/test-double-vlen4.c: Move most contents to, and
  include ...
  * sysdeps/x86_64/fpu/test-double-vlen4.h: ... here. New file.
  * sysdeps/x86_64/fpu/test-double-vlen8.c: Move most contents to, and
  include ...
  * sysdeps/x86_64/fpu/test-double-vlen8.h: ... here. New file.
  * sysdeps/x86_64/fpu/test-float-vlen16.c: Move most contents to, and
  include ...
  * sysdeps/x86_64/fpu/test-float-vlen16.h: ... here. New file.
  * sysdeps/x86_64/fpu/test-float-vlen4.c: Move most contents to, and
  include ...
  * sysdeps/x86_64/fpu/test-float-vlen4.h: ... here. New file.
  * sysdeps/x86_64/fpu/test-float-vlen8-avx2.c: Move most contents to, and
  include ...
  * sysdeps/x86_64/fpu/test-float-vlen8-avx2.h: ... here. New file.
  * sysdeps/x86_64/fpu/test-float-vlen8.c: Move most contents to, and
  include ...
  * sysdeps/x86_64/fpu/test-float-vlen8.h: ... here. New file.
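  For illustration only, a minimal sketch of what one of the resulting .c
  files contains after the move; the exact file contents are assumed, not
  shown in the log.

      /* Sketch of sysdeps/x86_64/fpu/test-double-vlen4.c after the change:
         the architecture-specific definitions now live in the .h file.  */
      #include "test-double-vlen4.h"
      #include "libm-test.c"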
* ldbl-128: Fix y0 and y1 for -Inf input [BZ #21130] (Gabriel F. T. Gomes, 2017-02-12; 2 files changed, -12/+2 lines)

  The Bessel functions of the second kind (Yn) are not defined for negative
  input and should return NaN with the "invalid" exception raised in these
  cases.  However, the current code checks for infinity and returns zero,
  regardless of the sign (a sketch of the intended behaviour follows this
  entry).

  This error is exposed for long double when linking with -lieee.  Without
  this flag, the error is not exposed, because the wrappers for these
  functions, which use __kernel_standard functionality, return the correct
  value.

  Tested for powerpc64le.

  [BZ #21130]
  * sysdeps/ieee754/ldbl-128/e_j0l.c (__ieee754_y0l): Return NAN with the
  "invalid" exception raised when x is -Inf.
  * sysdeps/ieee754/ldbl-128/e_j1l.c (__ieee754_y1l): Likewise.
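  A hedged sketch (not the actual glibc code) of the behaviour the fix
  establishes in a y0-style function: -Inf is treated as a domain error
  instead of falling through the infinity check.

      #include <math.h>

      /* Illustrative fragment for a hypothetical __ieee754_y0l-like function.  */
      long double
      y0_domain_check (long double x)
      {
        if (isinf (x) && signbit (x))
          /* y0(-Inf): return NaN and raise "invalid" (Inf - Inf does both).  */
          return (x - x) / (x - x);
        if (isinf (x))
          return 0.0L;            /* y0(+Inf) is +0.  */
        /* ... the real computation would continue here ... */
        return x;
      }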
* x86-64: Verify that _dl_runtime_resolve preserves vector registers (H.J. Lu, 2017-02-09; 9 files changed, -4/+405 lines)

  On x86-64, _dl_runtime_resolve must preserve the first 8 vector registers.
  Add 3 _dl_runtime_resolve tests to verify that SSE, AVX and AVX512 registers
  are preserved.

  * sysdeps/x86_64/Makefile (tests): Add tst-sse, tst-avx and tst-avx512.
  (test-extras): Add tst-avx-aux and tst-avx512-aux.
  (extra-test-objs): Add tst-avx-aux.o and tst-avx512-aux.o.
  (modules-names): Add tst-ssemod, tst-avxmod and tst-avx512mod.
  ($(objpfx)tst-sse): New rule.
  ($(objpfx)tst-avx): Likewise.
  ($(objpfx)tst-avx512): Likewise.
  (CFLAGS-tst-avx-aux.c): New.
  (CFLAGS-tst-avxmod.c): Likewise.
  (CFLAGS-tst-avx512-aux.c): Likewise.
  (CFLAGS-tst-avx512mod.c): Likewise.
  * sysdeps/x86_64/tst-avx-aux.c: New file.
  * sysdeps/x86_64/tst-avx.c: Likewise.
  * sysdeps/x86_64/tst-avx512-aux.c: Likewise.
  * sysdeps/x86_64/tst-avx512.c: Likewise.
  * sysdeps/x86_64/tst-avx512mod.c: Likewise.
  * sysdeps/x86_64/tst-avxmod.c: Likewise.
  * sysdeps/x86_64/tst-sse.c: Likewise.
  * sysdeps/x86_64/tst-ssemod.c: Likewise.
* Move w_exp to libm-compat-calls-auto (Gabriel F. T. Gomes, 2017-02-08; 10 files changed, -2/+2 lines)

  This patch adds the "_compat" suffix to the wrappers of the function exp,
  which use _LIB_VERSION / matherr / __kernel_standard functionality.

  Tested for powerpc64le, s390, and x86_64.

  * math/Makefile (libm-calls): Move w_exp...
  (libm-compat-calls-auto): Here.
  * math/w_expl.c: Add suffix "_compat" to filename.
  * sysdeps/ia64/fpu/w_expl.c: Likewise.
  * sysdeps/ia64/fpu/w_expf.c: Likewise.
  * sysdeps/ia64/fpu/w_exp.c: Likewise.
  * sysdeps/ieee754/dbl-64/w_exp.c: Likewise.
  * sysdeps/ieee754/flt-32/w_expf.c: Likewise.
  * sysdeps/ieee754/ldbl-128/w_expl.c: Likewise.
  * sysdeps/ieee754/ldbl-128ibm/w_expl.c: Likewise.
  * sysdeps/ieee754/ldbl-96/w_expl.c: Likewise.
  * math/w_expl_compat.c: New file, copied from above.
  * sysdeps/ia64/fpu/w_exp_compat.c: Likewise.
  * sysdeps/ia64/fpu/w_expf_compat.c: Likewise.
  * sysdeps/ia64/fpu/w_expl_compat.c: Likewise.
  * sysdeps/ieee754/dbl-64/w_exp_compat.c: Likewise.
  * sysdeps/ieee754/flt-32/w_expf_compat.c: Likewise.
  * sysdeps/ieee754/ldbl-128/w_expl_compat.c: Likewise.
  * sysdeps/ieee754/ldbl-128ibm/w_expl_compat.c: Likewise.
  * sysdeps/ieee754/ldbl-96/w_expl_compat.c: Likewise.
  * sysdeps/ieee754/ldbl-64-128/w_expl.c: Add suffix "_compat" to filename.
  * sysdeps/ieee754/ldbl-opt/w_exp.c: Likewise.
  * sysdeps/ieee754/ldbl-64-128/w_expl_compat.c: New file, copied from above
  and adjusted for the new filenames.
  * sysdeps/ieee754/ldbl-opt/w_exp_compat.c: Likewise.
* Move w_lgamma_r to libm-compat-calls-auto (Gabriel F. T. Gomes, 2017-02-08; 5 files changed, -2/+2 lines)

  This patch adds the suffix "_compat" to the lgamma_r wrappers and makes some
  adjustments to #includes and Makefiles.  This is a step towards deprecation
  of wrappers that use _LIB_VERSION / matherr / __kernel_standard
  functionality.

  Tested for powerpc64le, s390, and x86_64.

  * math/Makefile (libm-calls): Move w_lgammaF_r...
  (libm-compat-calls-auto): Here.
  * math/w_lgamma_r.c: Add suffix "_compat" to filename.
  * math/w_lgammaf_r.c: Likewise.
  * math/w_lgammal_r.c: Likewise.
  * sysdeps/ia64/fpu/w_lgammal_r.c: Likewise.
  * sysdeps/ia64/fpu/w_lgammaf_r.c: Likewise.
  * sysdeps/ia64/fpu/w_lgamma_r.c: Likewise.
  * math/w_lgamma_r_compat.c: New file, copied from above.
  * math/w_lgammaf_r_compat.c: Likewise.
  * math/w_lgammal_r_compat.c: Likewise.
  * sysdeps/ia64/fpu/w_lgamma_r_compat.c: Likewise.
  * sysdeps/ia64/fpu/w_lgammaf_r_compat.c: Likewise.
  * sysdeps/ia64/fpu/w_lgammal_r_compat.c: Likewise.
  * sysdeps/ieee754/ldbl-opt/w_lgamma_r.c: Add suffix "_compat" to filename.
  * sysdeps/ieee754/ldbl-opt/w_lgammal_r.c: Likewise.
  * sysdeps/ieee754/ldbl-opt/w_lgamma_r_compat.c: New file copied from above
  and adjusted for the new filenames.
  * sysdeps/ieee754/ldbl-opt/w_lgammal_r_compat.c: Likewise.
* aarch64: fix errno address calculation in SYSCALL_ERROR_HANDLER (Adhemerval Zanella, 2017-02-08; 1 file changed, -1/+1 lines)

  This patch fixes the last regression in the LTP lite scenario (mmap16)
  compared to lp64 in my source trees [1, 2].  The fix was suggested back in
  2015 [3] but was never applied.

  Checked on aarch64-linux-gnu.

  * sysdeps/unix/sysv/linux/aarch64/sysdep.h: Use PTR_REG() for offset
  calculation in SYSCALL_ERROR_HANDLER().

  [1] https://github.com/norov/glibc/tree/dev9
  [2] https://github.com/norov/linux/tree/ilp32-20170203
  [3] https://sourceware.org/ml/libc-alpha/2015-03/msg00587.html
* Add Linux PTRACE_EVENT_STOP (Kir Kolyshkin, 2017-02-08; 7 files changed, -14/+28 lines)

  Add the PTRACE_EVENT_STOP value to Linux's sys/ptrace.h and modify the
  related comments accordingly (a sketch of the resulting enum follows this
  entry).

  This constant initially appeared in Linux 3.1 (kernel commit 3544d72a,
  "ptrace: implement PTRACE_SEIZE") but its value was later changed in Linux
  3.4 (kernel commit 5cdf389a, "ptrace: renumber PTRACE_EVENT_STOP so that
  future new options and events can match").  The comment is also taken from
  the above commit.

  This constant is used by e.g. strace, CRIU, and Mozilla RR.

  * sysdeps/unix/sysv/linux/aarch64/sys/ptrace.h (__ptrace_eventcodes): Add
  PTRACE_EVENT_STOP.
  * sysdeps/unix/sysv/linux/ia64/sys/ptrace.h: Likewise.
  * sysdeps/unix/sysv/linux/powerpc/sys/ptrace.h: Likewise.
  * sysdeps/unix/sysv/linux/s390/sys/ptrace.h: Likewise.
  * sysdeps/unix/sysv/linux/sparc/sys/ptrace.h: Likewise.
  * sysdeps/unix/sysv/linux/sys/ptrace.h: Likewise.
  * sysdeps/unix/sysv/linux/tile/sys/ptrace.h: Likewise.
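  For illustration, a sketch of the extended __ptrace_eventcodes enum.  The
  values 1-6 and 128 are the kernel's; the exact layout and comments of the
  glibc header are assumed here, not copied from the tree.

      /* Sketch of <sys/ptrace.h> after the change.  */
      enum __ptrace_eventcodes
      {
        PTRACE_EVENT_FORK       = 1,
        PTRACE_EVENT_VFORK      = 2,
        PTRACE_EVENT_CLONE      = 3,
        PTRACE_EVENT_EXEC       = 4,
        PTRACE_EVENT_VFORK_DONE = 5,
        PTRACE_EVENT_EXIT       = 6,
        /* New in this commit: group-stop event reported under PTRACE_SEIZE;
           128 is the value used since Linux 3.4.  */
        PTRACE_EVENT_STOP       = 128
      };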
* Fix powf inaccuracy (bug 21112). (Joseph Myers, 2017-02-07; 1 file changed, -2/+2 lines)

  Bug 21112 reports a case where powf is substantially inaccurate.  This
  results from a multiplication where cp_h*p_h is required to be exact, and
  p_h is masked to have only 12 leading nonzero bits in its mantissa, but the
  value of cp_h has the 13th bit nonzero, leading to inexact multiplication
  results in some cases that can result in large errors in the final result of
  powf (see the note after this entry for why 12 bits matters).

  This patch fixes this by using a value of cp_h correctly rounded to nearest
  to 12 bits, with a corresponding updated value of cp_l.

  Tested for x86_64 and x86.

  [BZ #21112]
  * sysdeps/ieee754/flt-32/e_powf.c (cp_h): Use value with trailing 12 bits
  zero.
  (cp_l): Update for new value of cp_h.
  * math/auto-libm-test-in: Add another test of pow.
  * math/auto-libm-test-out-pow: Regenerated.
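  An illustrative aside (not glibc code): a float significand has 24 bits, so
  the product of two factors whose significands each fit in 12 bits is exact.
  The helper below, with the assumed name trunc12, shows the kind of masking
  used to produce such factors; it assumes a 32-bit IEEE float.

      #include <stdio.h>
      #include <string.h>

      /* Clear the 12 trailing mantissa bits of a float, leaving at most
         12 leading significant bits.  */
      static float
      trunc12 (float x)
      {
        unsigned int i;
        memcpy (&i, &x, sizeof i);
        i &= 0xfffff000u;
        memcpy (&x, &i, sizeof x);
        return x;
      }

      int
      main (void)
      {
        float a = trunc12 (1.2345678f);
        float b = trunc12 (0.9876543f);
        /* 12 + 12 <= 24, so this single-precision product is exact; if one
           factor secretly used a 13th bit, the product could be rounded.  */
        printf ("%a * %a = %a\n", (double) a, (double) b, (double) (a * b));
        return 0;
      }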
* powerpc: Set minimum kernel version for powerpc64le (Rajalakshmi Srinivasaraghavan, 2017-02-07; 2 files changed, -0/+31 lines)

  This patch sets the minimum kernel version required for ppc64le to 3.10.0.
* powerpc: Use latest optimizations for internal function calls (Rajalakshmi Srinivasaraghavan, 2017-02-07; 2 files changed, -3/+3 lines)

  Some of the power8 string optimizations were not updated to use the latest
  versions of the other string optimizations.
* powerpc: Improve strcmp performance for shorter strings (Rajalakshmi Srinivasaraghavan, 2017-02-07; 2 files changed, -44/+16 lines)

  For strings >16B and <32B, the existing algorithm takes more time than the
  default implementation when the strings are placed close to the end of a
  page, because the page cross is handled with byte-by-byte accesses.  This is
  improved by following the >32B code path, where the address is adjusted to
  aligned memory before doing the doubleword load instead of loading bytes.

  Tested on powerpc64 and powerpc64le.
* nptl: Remove COLORING_INCREMENT (Adhemerval Zanella, 2017-02-06; 1 file changed, -5/+0 lines)

  This patch removes the COLORING_INCREMENT define and its usage in
  allocatestack.c.  It has not been used by any architecture since
  564cd8b67ec487f (glibc-2.3.3).  The idea is to simplify the code by removing
  obsolete code.

  * nptl/allocatestack.c [COLORING_INCREMENT] (nptl_ncreated): Remove.
  (allocate_stack): Remove COLORING_INCREMENT usage.
  * nptl/stack-aliasing.h (COLORING_INCREMENT): Likewise.
  * sysdeps/i386/i686/stack-aliasing.h (COLORING_INCREMENT): Likewise.
* sparc: Remove unused assignment in __clone (Ivo Raisr, 2017-02-06; 2 files changed, -2/+0 lines)

  There is no longer any need to preserve the flags parameter to `clone' since
  commit c579f48edba88380635ab98cb612030e3ed8691e (Remove cached PID/TID in
  clone).

  Testing was performed successfully on sparcv9/Linux.

  [BZ #21075]
  * sysdeps/unix/sysv/linux/sparc/sparc64/clone.S (__clone): Remove unused
  assignment.
  * sysdeps/unix/sysv/linux/sparc/sparc32/clone.S (__clone): Likewise.
* Add __glibc_unlikely hint in lll_trylock, lll_cond_trylock. (Stefan Liebler, 2017-02-06; 1 file changed, -2/+2 lines)

  The macros lll_trylock and lll_cond_trylock are extended with a
  __glibc_unlikely hint, so the trylock macros are now based on the same
  assumption about a free/busy lock as lll_lock (see the sketch after this
  entry).  With the hint, gcc emits code in e.g. pthread_mutex_trylock which
  does not use jumps if the lock is free; without the hint it had to jump away
  if the lock was free.

  Tested on s390x, ppc.

  ChangeLog:

  * sysdeps/nptl/lowlevellock.h (lll_trylock, lll_cond_trylock): Add
  __glibc_unlikely hint.
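  A hedged sketch of the idea, in the style of the generic
  sysdeps/nptl/lowlevellock.h (the in-tree macro may differ in detail): the
  compare-and-exchange returns zero when it takes the lock, so wrapping it in
  __glibc_unlikely tells GCC that the "already locked" outcome is the rare one.

      /* Header-excerpt sketch; atomic_compare_and_exchange_bool_acq is a
         glibc-internal macro.  Previously the macro was the bare CAS call
         with no branch-prediction hint.  */
      #define lll_trylock(lock) \
        __glibc_unlikely (atomic_compare_and_exchange_bool_acq (&(lock), 1, 0))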
* Remove i686, x86_64, and powerpc strtok implementations (Adhemerval Zanella, 2017-02-06; 8 files changed, -1075/+0 lines)

  Based on comments on the previous attempt to address BZ#16640 [1], the idea
  is not to support invalid use of strtok (the original bug report proposal).
  This led to a new optimized strtok implementation [2] (a generic sketch of
  the strspn/strcspn-based approach follows this entry).

  The idea of this patch is to fix BZ#16640 by aligning all the
  implementations to the same contract.  However, with the newer strtok code
  it is better to remove the old assembly implementations instead of fixing
  them.

  For x86 this is a gain in all cases, since the new implementation can
  potentially use the sse2/sse42 implementations of strspn and strcspn.  This
  shows better performance on both i686 and x86_64 using the string
  benchtests.

  On powerpc64 the gains are mixed: gains show only for larger inputs or keys
  (based on the benchtests, it seems there are gains for keys larger than 10
  and inputs larger than 32).  I would prefer to remove the optimized
  implementation, first for code simplicity and second because more could be
  gained by using better optimized strcspn/strspn code (as for x86).  However,
  if the powerpc arch maintainers prefer, I can send a v2 with the assembly
  code adjusted instead.

  Checked on x86_64-linux-gnu, i686-linux-gnu, and powerpc64le-linux-gnu.

  [BZ #16640]
  * sysdeps/i386/i686/strtok.S: Remove file.
  * sysdeps/i386/i686/strtok_r.S: Likewise.
  * sysdeps/i386/strtok.S: Likewise.
  * sysdeps/i386/strtok_r.S: Likewise.
  * sysdeps/powerpc/powerpc64/strtok.S: Likewise.
  * sysdeps/powerpc/powerpc64/strtok_r.S: Likewise.
  * sysdeps/x86_64/strtok.S: Likewise.
  * sysdeps/x86_64/strtok_r.S: Likewise.

  [1] https://sourceware.org/ml/libc-alpha/2016-10/msg00411.html
  [2] https://sourceware.org/ml/libc-alpha/2016-12/msg00461.html
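  For illustration, a self-contained sketch of the strspn/strcspn-based
  approach referenced above (the actual generic glibc strtok_r differs in
  details, e.g. it uses strpbrk/rawmemchr); it shows why faster strspn and
  strcspn implementations translate directly into faster strtok.

      #include <string.h>

      /* Minimal strtok_r-style tokenizer built on strspn/strcspn.  */
      char *
      tokenize_r (char *s, const char *delim, char **save_ptr)
      {
        if (s == NULL)
          s = *save_ptr;

        s += strspn (s, delim);              /* Skip leading delimiters.  */
        if (*s == '\0')
          {
            *save_ptr = s;
            return NULL;                     /* No more tokens.  */
          }

        char *token = s;
        s = token + strcspn (token, delim);  /* Find end of the token.  */
        if (*s != '\0')
          *s++ = '\0';                       /* Terminate token, skip delimiter.  */
        *save_ptr = s;
        return token;
      }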
* Consolidate arm and mips posix_fadvise implementations (Adhemerval Zanella, 2017-02-06; 4 files changed, -42/+18 lines)

  As noted by c1f0601389db64d9, the previous posix_fadvise consolidation broke
  on mips o32.  As stated in that commit message, MIPS o32 only defines
  __NR_fadvise64 and it behaves like __NR_fadvise64_64.

  This patch consolidates both the ARM and mips o32 versions by fixing the
  option ARM uses (__NR_fadvise64_64 without the alignment required by the
  ABI) and adding another option, __ASSUME_FADVISE64_AS_64_64, which is used
  on mips o32.  When this option is used, posix_fadvise will use the
  __NR_fadvise64_64 behavior (by defining or not __ASSUME_FADVISE64_64_6ARG).
  For mips, if __NR_fadvise64_64 is not defined, __NR_fadvise will be used.

  I also updated the posix_fadvise comments to better explain the different
  kernel ABIs used by the supported architectures.

  I checked with a mips o32 build and verified that posix_fadvise.o indeed
  uses the 7-argument syscall with the expected argument positions.  I also
  checked on i686-linux-gnu and arm-linux-gnueabihf.

  * sysdeps/unix/sysv/linux/arm/posix_fadvise.c: Remove file.
  * sysdeps/unix/sysv/linux/mips/mips32/posix_fadvise.c: Likewise.
  * sysdeps/unix/sysv/linux/mips/kernel-features.h
  (__ASSUME_FADVISE64_AS_64_64): Define.
  * sysdeps/unix/sysv/linux/posix_fadvise.c [__NR_fadvise64]: Add !defined
  __ASSUME_FADVISE64_AS_64_64 to use syscall issue.
  [!__NR_fadvise64 && __ASSUME_FADVISE64_64_6ARG]: Remove __ALIGNMENT_ARG
  usage.
  [!__NR_fadvise64 && !__ASSUME_FADVISE64_64_6ARG]: Define __NR_fadvise64_64
  if it is not defined.
* sparc: Remove optimized math routines which cause testsuite failures. (David S. Miller, 2017-02-03; 27 files changed, -721/+1 lines)

  fmax{,f}/fmin{,f} and 32-bit lrint cause math testsuite failures, either
  because they generate incorrect results or because they fail to signal the
  proper exceptions.

  * sysdeps/sparc/sparc64/fpu/multiarch/s_fmax-vis3.S: Remove file.
  * sysdeps/sparc/sparc64/fpu/multiarch/s_fmax.S: Likewise.
  * sysdeps/sparc/sparc64/fpu/multiarch/s_fmaxf-vis3.S: Likewise.
  * sysdeps/sparc/sparc64/fpu/multiarch/s_fmaxf.S: Likewise.
  * sysdeps/sparc/sparc64/fpu/multiarch/s_fmin-vis3.S: Likewise.
  * sysdeps/sparc/sparc64/fpu/multiarch/s_fmin.S: Likewise.
  * sysdeps/sparc/sparc64/fpu/multiarch/s_fminf-vis3.S: Likewise.
  * sysdeps/sparc/sparc64/fpu/multiarch/s_fminf.S: Likewise.
  * sysdeps/sparc/sparc64/fpu/multiarch/Makefile (libm-sysdep_routines):
  Update.
  * sysdeps/sparc/sparc32/sparcv9/fpu/s_fmax.S: Remove file.
  * sysdeps/sparc/sparc32/sparcv9/fpu/s_fmaxf.S: Likewise.
  * sysdeps/sparc/sparc32/sparcv9/fpu/s_fmin.S: Likewise.
  * sysdeps/sparc/sparc32/sparcv9/fpu/s_fminf.S: Likewise.
  * sysdeps/sparc/sparc32/sparcv9/fpu/s_lrint.S: Likewise.
  * sysdeps/sparc/sparc64/fpu/s_fmax.S: Likewise.
  * sysdeps/sparc/sparc64/fpu/s_fmaxf.S: Likewise.
  * sysdeps/sparc/sparc64/fpu/s_fmin.S: Likewise.
  * sysdeps/sparc/sparc64/fpu/s_fminf.S: Likewise.
  * sysdeps/sparc/sparc32/sparcv9/fpu/multiarch/s_fmax-vis3.S: Likewise.
  * sysdeps/sparc/sparc32/sparcv9/fpu/multiarch/s_fmax.S: Likewise.
  * sysdeps/sparc/sparc32/sparcv9/fpu/multiarch/s_fmaxf-vis3.S: Likewise.
  * sysdeps/sparc/sparc32/sparcv9/fpu/multiarch/s_fmaxf.S: Likewise.
  * sysdeps/sparc/sparc32/sparcv9/fpu/multiarch/s_fmin-vis3.S: Likewise.
  * sysdeps/sparc/sparc32/sparcv9/fpu/multiarch/s_fmin.S: Likewise.
  * sysdeps/sparc/sparc32/sparcv9/fpu/multiarch/s_fminf-vis3.S: Likewise.
  * sysdeps/sparc/sparc32/sparcv9/fpu/multiarch/s_fminf.S: Likewise.
  * sysdeps/sparc/sparc32/sparcv9/fpu/multiarch/Makefile
  (libm-sysdep_routines): Update.
* Allow IFUNC relocation against unrelocated shared library (H.J. Lu, 2017-02-02; 2 files changed, -2/+2 lines)

  An IFUNC relocation against a definition in an unrelocated shared library
  leads to a segfault when the IFUNC function is called.  This patch allows
  such IFUNC relocations, with a warning.

  This isn't a real fix for
  https://sourceware.org/bugzilla/show_bug.cgi?id=21041; it simply allows the
  program to load.  The program will still segfault when longjmp is called.

  * sysdeps/i386/dl-machine.h (elf_machine_rel): Replace _dl_fatal_printf
  with _dl_error_printf for IFUNC relocation against unrelocated shared
  library.
  * sysdeps/x86_64/dl-machine.h (elf_machine_rela): Likewise.
* Drop GLIBC_TUNABLES for setxid programs when tunables is disabled (bz #21073) (Siddhesh Poyarekar, 2017-02-02; 1 file changed, -0/+7 lines)

  A setxid program that uses a glibc built with tunables disabled may pass
  GLIBC_TUNABLES on as-is to its child processes.  If a child process ends up
  using a different glibc that has tunables enabled, it will end up getting
  access to unsafe tunables.  To fix this, remove GLIBC_TUNABLES from the
  environment for setxid processes (a sketch of the unsecvars.h change follows
  this entry).

  * sysdeps/generic/unsecvars.h: Add GLIBC_TUNABLES.
  * elf/tst-env-setuid-tunables.c (test_child_tunables)[!HAVE_TUNABLES]:
  Verify that GLIBC_TUNABLES is removed in a setgid process.
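  For illustration, a hedged sketch of the kind of change this makes:
  unsecvars.h defines a single NUL-separated list of environment variable
  names that are stripped for AT_SECURE (setuid/setgid) processes, and
  GLIBC_TUNABLES is added to it.  The surrounding entries shown here are
  abridged and assumed, not copied from the tree.

      /* Excerpt-style sketch of sysdeps/generic/unsecvars.h (abridged).  */
      #define UNSECURE_ENVVARS \
        "GCONV_PATH\0"         \
        "GLIBC_TUNABLES\0"     \
        "LD_LIBRARY_PATH\0"    \
        "LD_PRELOAD\0"         \
        "MALLOC_TRACE\0"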
* alpha: Use saturating arithmetic in memchr (Richard Henderson, 2017-02-01; 1 file changed, -1/+4 lines)
* m68k: fix 64bit atomic ops (Andreas Schwab, 2017-02-01; 1 file changed, -6/+8 lines)
* Add ipc_priv.h header for Nios II to set __IPC_64 to zero. (Chung-Lin Tang, 2017-01-31; 1 file changed, -0/+21 lines)
* Add VZEROUPPER to memset-vec-unaligned-erms.S [BZ #21081] (H.J. Lu, 2017-01-30; 1 file changed, -0/+2 lines)

  Since memset-vec-unaligned-erms.S has VDUP_TO_VEC0_AND_SET_RETURN at
  function entry, memset optimized for AVX2 and AVX512 will always use
  ymm/zmm registers.  VZEROUPPER should be placed before ret in L(stosb):

      movq    %rdx, %rcx
      movzbl  %sil, %eax
      movq    %rdi, %rdx
      rep stosb
      movq    %rdx, %rax
      ret

  since it can be reached from L(stosb_more_2x_vec):

      cmpq    $REP_STOSB_THRESHOLD, %rdx
      ja      L(stosb)

  [BZ #21081]
  * sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S (L(stosb)): Add
  VZEROUPPER before ret.
* Bug 20116: Fix use after free in pthread_create() (Carlos O'Donell, 2017-01-28; 2 files changed, -15/+11 lines)

  The commit documents the ownership rules around 'struct pthread' and when a
  thread can read or write to the descriptor.  With those ownership rules in
  place it becomes obvious that pd->stopped_start should not be touched in
  several of the paths during thread startup, particularly so for detached
  threads.

  In the case of detached threads, between the time the thread is created by
  the OS kernel and the creating thread checks pd->stopped_start, the detached
  thread might have already exited and the memory for pd unmapped.

  As a regression test we add a simple test which exercises this exact case by
  quickly creating detached threads with large enough stacks to ensure the
  thread stack cache is bypassed and the stacks are unmapped.  Before the fix
  the testcase segfaults; after the fix it works correctly and completes
  without issue.

  For a detailed discussion see:
  https://www.sourceware.org/ml/libc-alpha/2017-01/msg00505.html
* Bug 21053: sh: Reduce namespace pollution from sys/ucontext.h (James Clarke, 2017-01-24; 3 files changed, -68/+66 lines)

  The problem is basically that sys/ucontext.h defines R0..R15, which happens
  to conflict with some packages, like Firefox, when trying to build on SH.
  The very same problem existed on arm back then [1] and it was fixed by
  renaming R0..R15 to REG_R0..REG_R15.  This patch employs a similar strategy
  for SH.

  Checked on sh4-linux-gnu with run-built-tests=no, and I also got reports
  that it fixes the Firefox build on Debian sh4.

  * sysdeps/unix/sysv/linux/sh/sh3/ucontext_i.sym: Use new REG_R* constants
  instead of the old R* ones.
  * sysdeps/unix/sysv/linux/sh/sh4/ucontext_i.sym: Likewise.
  * sysdeps/unix/sysv/linux/sh/sys/ucontext.h (NGPREG): Rename...
  (NGREG): ... to this, to fit in with other architectures.
  (gpregset_t): Use new NGREG macro.
  [__USE_GNU]: Remove condition; all architectures other than tile are
  unconditional.
  (R*): Rename to REG_R*.
* Remove very old libm-test-ulps entries. (Joseph Myers, 2017-01-20; 5 files changed, -208/+0 lines)

  I noticed that some libm-test-ulps files still had long-obsolete entries for
  *_tonearest functions, which will no longer be used since functions with
  FE_TONEAREST explicitly set aren't tested separately from those functions
  with it as the default rounding mode any more.

  This patch removes those obsolete entries.  However, as they are a sign of
  libm-test-ulps not having been regenerated from scratch for a long time, I
  strongly advise people testing on those platforms to remove / truncate the
  libm-test-ulps file, run "make regen-ulps" and commit the
  regenerated-from-scratch file.  (Ideally any failures of libm tests still
  present after regeneration would be investigated / fixed - there are several
  open "math" bugs spread across these platforms - but simply regenerating
  from scratch improves things.)

  * sysdeps/hppa/fpu/libm-test-ulps: Remove *_tonearest entries.
  * sysdeps/ia64/fpu/libm-test-ulps: Likewise.
  * sysdeps/m68k/m680x0/fpu/libm-test-ulps: Likewise.
  * sysdeps/microblaze/libm-test-ulps: Likewise.
  * sysdeps/sh/libm-test-ulps: Likewise.
* powerpc: Fix adapt_count update in __lll_unlock_elision (Tulio Magno Quites Machado Filho, 2017-01-20; 1 file changed, -1/+1 lines)

  Commit e9a96ea1aca4ebaa7c86e8b83b766f118d689d0f had an error that prevented
  adapt_count from being updated in __lll_unlock_elision.
* S390: Adjust lock elision code after review. (Stefan Liebler, 2017-01-20; 4 files changed, -43/+54 lines)

  This patch adjusts the s390-specific lock elision code after review of the
  following patches:
  - S390: Use own tbegin macro instead of __builtin_tbegin.
    (8bfc4a2ab4bebdf86c151665aae8a266e2f18fb4)
  - S390: Use new __libc_tbegin_retry macro in elision-lock.c.
    (53c5c3d5ac238901c13f28a73ba05b0678094e80)
  - S390: Optimize lock-elision by decrementing adapt_count at unlock.
    (dd037fb3df286b7c2d0b0c6f8d02a2dd8a8e8a08)

  The futex value is not tested before starting a transaction, __glibc_likely
  is used instead of __builtin_expect, and comments are adjusted.

  ChangeLog:

  * sysdeps/unix/sysv/linux/s390/htm.h: Adjust comments.
  * sysdeps/unix/sysv/linux/s390/elision-unlock.c: Likewise.
  * sysdeps/unix/sysv/linux/s390/elision-lock.c: Adjust comments.
  (__lll_lock_elision): Do not test futex before starting a transaction.
  Use __glibc_likely instead of __builtin_expect.
  * sysdeps/unix/sysv/linux/s390/elision-trylock.c: Adjust comments.
  (__lll_trylock_elision): Do not test futex before starting a transaction.
  Use __glibc_likely instead of __builtin_expect.
* Restore clock_* librt exports for MicroBlaze (bug 21061). (Joseph Myers, 2017-01-19; 1 file changed, -0/+31 lines)

  MicroBlaze had clock_* functions exported from librt in glibc 2.18 and 2.19,
  as confirmed in
  <https://sourceware.org/ml/libc-alpha/2017-01/msg00369.html>, and they then
  disappeared in 2.20, presumably as a result of the fix
  <https://sourceware.org/ml/libc-alpha/2014-02/msg00598.html> for a
  Versions.def bug that had resulted in their unintended inclusion in 2.18
  (followed by removal of the Versions.def mechanism that allowed such bugs).
  As they were released in that library, they should be considered part of the
  GLIBC_2.18 ABI and so restored for the sake of any binaries that expect them
  in that library.

  This patch restores them by adding a MicroBlaze version of clock-compat.c
  that overrides SHLIB_COMPAT.

  Tested (compilation only) with build-many-glibcs.py (where this fixes the
  librt ABI test failure; elf/check-execstack still fails and still needs
  architecture maintainer attention to fix it or XFAIL it with an appropriate
  explanatory comment).

  [BZ #21061]
  * sysdeps/unix/sysv/linux/microblaze/clock-compat.c: New file.
* Fix ARM fpu_control.h for assemblers requiring VFP insn names (bug 21047). (Joseph Myers, 2017-01-19; 1 file changed, -2/+9 lines)

  Bug 21047 reports that the clang assembler disallows the ARM implementations
  of _FPU_GETCW and _FPU_SETCW.  These are deliberately written the way they
  are, using generic coprocessor instructions (from the days when VFP was just
  one possible coprocessor for ARM) that have the right encodings, to handle
  the case of the instructions being used runtime-conditionally inside glibc,
  where use of these macros is not meant to result in either the assembler
  requiring VFP to be enabled at assembly time or in it marking the object as
  using VFP.  However, more recent ARM ARM versions have restricted the
  definitions of the coprocessor instructions and reportedly the clang
  assembler follows that in disallowing those names for VFP instructions.

  In the non-__SOFTFP__ case - which in fact is the only case where these
  macro definitions can be used outside the build of glibc itself - using VFP
  instruction names is of course fine, since we know that VFP is enabled for
  that compilation.  Thus, this patch uses the current VFP names for these
  instructions in that case to improve compatibility for this header file (a
  sketch of the resulting macros follows this entry).

  Tested for hard-float and soft-float builds of glibc, including that
  installed stripped shared libraries are unchanged by the patch.

  [BZ #21047]
  * sysdeps/arm/fpu_control.h [!__SOFTFP__] (_FPU_GETCW): Use VFP name for
  instruction.
  [!__SOFTFP__] (_FPU_SETCW): Likewise.
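  A hedged sketch of the non-__SOFTFP__ macros using the VFP mnemonics
  (vmrs/vmsr); the exact operand constraints and comments in the installed
  header may differ.

      /* Sketch of sysdeps/arm/fpu_control.h after the change.  */
      #ifndef __SOFTFP__
      # define _FPU_GETCW(cw) \
        __asm__ __volatile__ ("vmrs %0, fpscr" : "=r" (cw))
      # define _FPU_SETCW(cw) \
        __asm__ __volatile__ ("vmsr fpscr, %0" : : "r" (cw))
      #endif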
* Make soft-float powerpc swapcontext restore the signal mask (bug 21045). (Joseph Myers, 2017-01-16; 1 file changed, -1/+2 lines)

  The soft-float powerpc version of swapcontext does not restore the signal
  mask, resulting in stdlib/tst-setcontext2 failing:

      after getcontext
      after setcontext
      after swapcontext
      FAIL: SIGUSR2 is blocked after swapcontext.

  This patch fixes this by adjusting the arguments passed to __sigprocmask so
  that it restores the saved signal mask as well as saving the existing one.
  (For hard-float, this code is only used for a compat symbol, not for the
  current version of swapcontext.)

  Tested for soft-float powerpc.

  [BZ #21045]
  * sysdeps/unix/sysv/linux/powerpc/powerpc32/swapcontext-common.S
  (__CONTEXT_FUNC_NAME): Pass address of signal mask to be restored to
  __sigprocmask.
* tile: Check for pointer add overflow in memchr (Chris Metcalf, 2017-01-16; 2 files changed, -0/+8 lines)

  As was done in b224637928e9, check for a large size causing an overflow in
  the loop that walks over the array (a portable sketch of the idea follows
  this entry).  Branching out of line here is the fastest approach for
  handling this problem, since tile can bundle the instructions to compute the
  branch test in parallel with doing the required memchr loop setup
  computation.  Unfortunately, the existing saturated ops (e.g. tilegx addxsc)
  are all signed saturating ops, so they don't help with unsigned saturation.
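  An illustrative, portable C sketch of the guard being described (the glibc
  tile code does this with architecture-specific instructions): compute the
  end pointer for memchr while saturating on unsigned wrap-around, so a huge
  length cannot make the loop bound wrap past the start of the buffer.

      #include <stddef.h>
      #include <stdint.h>

      /* Return the one-past-the-end address for scanning N bytes from S,
         clamped to the top of the address space if S + N would overflow.  */
      static inline uintptr_t
      scan_end (const void *s, size_t n)
      {
        uintptr_t start = (uintptr_t) s;
        uintptr_t end = start + n;
        if (end < start)        /* Unsigned addition wrapped around.  */
          end = UINTPTR_MAX;    /* Saturate instead of wrapping.  */
        return end;
      }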
* tile: pass __IPC_64 as zero for SysV IPC calls (Chris Metcalf, 2017-01-16; 1 file changed, -0/+21 lines)

  In 1e5834c38a22 ("Refactor Linux ipc_priv header") a different approach to
  passing __IPC_64 as zero was created.  The tile architecture also needs to
  pass __IPC_64 as zero since it does not set
  CONFIG_ARCH_WANT_IPC_PARSE_VERSION in the kernel.  So create a minimal
  ipc_priv.h that specifies __IPC_64 as zero.
* Clear list of acquired robust mutexes in the child process after forking. (Torvald Riegel, 2017-01-13; 1 file changed, -6/+14 lines)

  Robust mutexes acquired at the time of a call to fork() do not remain
  acquired by the forked child process.  We have to clear the list of acquired
  robust mutexes before registering this list with the kernel; otherwise, if
  some of the robust mutexes are process-shared, the parent process can alter
  the child's robust mutex list, which can lead to deadlocks or even
  modification of memory that may not be occupied by a mutex anymore.

  [BZ #19402]
  * sysdeps/nptl/fork.c (__libc_fork): Clear list of acquired robust mutexes.
* robust mutexes: Fix broken x86 assembly by removing it (Torvald Riegel, 2017-01-13; 6 files changed, -780/+0 lines)

  lll_robust_unlock on i386 and x86_64 first sets the futex word to
  FUTEX_WAITERS|0 before calling __lll_unlock_wake, which will set the futex
  word to 0.  If the thread is killed between these steps, then the futex word
  will be FUTEX_WAITERS|0, and the kernel (at least current upstream) will not
  set it to FUTEX_OWNER_DIED|FUTEX_WAITERS because 0 is not equal to the TID
  of the crashed thread.

  The lll_robust_lock assembly code on i386 and x86_64 is not prepared to deal
  with this case because the fastpath tries to only CAS 0 to TID and not
  FUTEX_WAITERS|0 to TID; the slowpath simply waits until it can CAS 0 to TID
  or the futex_word has the FUTEX_OWNER_DIED bit set.

  This issue is fixed by removing the custom x86 assembly code and using the
  generic C code instead.  However, instead of adding more duplicate code to
  the custom x86 lowlevellock.h, the code of the lll_robust* functions is
  inlined into the single call sites that exist for each of these functions in
  the pthread_mutex_* functions.  The robust mutex paths in the latter have
  been slightly reorganized to make them simpler.

  This patch is meant to be easy to backport, so C11-style atomics are not
  used.

  [BZ #20985]
  * nptl/Makefile: Adapt.
  * nptl/pthread_mutex_cond_lock.c (LLL_ROBUST_MUTEX_LOCK): Remove.
  (LLL_ROBUST_MUTEX_LOCK_MODIFIER): New.
  * nptl/pthread_mutex_lock.c (LLL_ROBUST_MUTEX_LOCK): Remove.
  (LLL_ROBUST_MUTEX_LOCK_MODIFIER): New.
  (__pthread_mutex_lock_full): Inline lll_robust* functions and adapt.
  * nptl/pthread_mutex_timedlock.c (pthread_mutex_timedlock): Inline
  lll_robust* functions and adapt.
  * nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_full): Likewise.
  * sysdeps/nptl/lowlevellock.h (__lll_robust_lock_wait, __lll_robust_lock,
  lll_robust_cond_lock, __lll_robust_timedlock_wait, __lll_robust_timedlock,
  __lll_robust_unlock): Remove.
  * sysdeps/unix/sysv/linux/i386/lowlevellock.h (lll_robust_lock,
  lll_robust_cond_lock, lll_robust_timedlock, lll_robust_unlock): Remove.
  * sysdeps/unix/sysv/linux/x86_64/lowlevellock.h (lll_robust_lock,
  lll_robust_cond_lock, lll_robust_timedlock, lll_robust_unlock): Remove.
  * sysdeps/unix/sysv/linux/sparc/lowlevellock.h (__lll_robust_lock_wait,
  __lll_robust_lock, lll_robust_cond_lock, __lll_robust_timedlock_wait,
  __lll_robust_timedlock, __lll_robust_unlock): Remove.
  * nptl/lowlevelrobustlock.c: Remove file.
  * nptl/lowlevelrobustlock.sym: Likewise.
  * sysdeps/unix/sysv/linux/i386/lowlevelrobustlock.S: Likewise.
  * sysdeps/unix/sysv/linux/x86_64/lowlevelrobustlock.S: Likewise.
* powerpc: Regenerate ULPs (Tulio Magno Quites Machado Filho, 2017-01-13; 1 file changed, -14/+14 lines)

  After this update, math/test-ildouble, math/test-ldouble and
  math/test-ldouble-finite pass on hard float, POWER < 7 builds.

  Tested on powerpc, powerpc64 and powerpc64le.
* Fix MIPS o32 posix_fadvise. (Joseph Myers, 2017-01-12; 1 file changed, -0/+4 lines)

  The posix_fadvise consolidation broke posix_fadvise for MIPS o32, resulting
  in posix/tst-posix_fadvise failing.  MIPS o32 (and the other ABIs) has only
  the posix_fadvise64 syscall, which acts like posix_fadvise64_64 (in the o32
  case, because of the alignment argument, it's actually a 7-argument
  syscall).

  The generic posix_fadvise implementation presumes that if __NR_fadvise64 is
  defined, it's for the case where a single len argument is passed to the
  syscall rather than two syscall arguments in the case of a 32-bit system.
  The generic posix_fadvise64 works fine for this case (defining
  __NR_fadvise64_64 to __NR_fadvise64 as needed).  ARM has a posix_fadvise.c
  that uses __posix_fadvise64_l64 in posix_fadvise, and that approach also
  works for MIPS o32, so this patch makes MIPS o32 include the ARM file.

  Tested for MIPS o32.

  * sysdeps/unix/sysv/linux/mips/mips32/posix_fadvise.c: New file.
* New pthread rwlock that is more scalable. (Torvald Riegel, 2017-01-10; 15 files changed, -147/+147 lines)

  This replaces the pthread rwlock with a new implementation that uses a more
  scalable algorithm (primarily through not using a critical section anymore
  to make state changes).  The fast path for rdlock acquisition and release is
  now basically a single atomic read-modify-write or CAS and a few branches.
  See nptl/pthread_rwlock_common.c for details.

  * nptl/DESIGN-rwlock.txt: Remove.
  * nptl/lowlevelrwlock.sym: Remove.
  * nptl/Makefile: Add new tests.
  * nptl/pthread_rwlock_common.c: New file.  Contains the new rwlock.
  * nptl/pthreadP.h (PTHREAD_RWLOCK_PREFER_READER_P): Remove.
  (PTHREAD_RWLOCK_WRPHASE, PTHREAD_RWLOCK_WRLOCKED, PTHREAD_RWLOCK_RWAITING,
  PTHREAD_RWLOCK_READER_SHIFT, PTHREAD_RWLOCK_READER_OVERFLOW,
  PTHREAD_RWLOCK_WRHANDOVER, PTHREAD_RWLOCK_FUTEX_USED): New.
  * nptl/pthread_rwlock_init.c (__pthread_rwlock_init): Adapt to new
  implementation.
  * nptl/pthread_rwlock_rdlock.c (__pthread_rwlock_rdlock_slow): Remove.
  (__pthread_rwlock_rdlock): Adapt.
  * nptl/pthread_rwlock_timedrdlock.c (pthread_rwlock_timedrdlock): Adapt.
  * nptl/pthread_rwlock_timedwrlock.c (pthread_rwlock_timedwrlock): Adapt.
  * nptl/pthread_rwlock_trywrlock.c (pthread_rwlock_trywrlock): Adapt.
  * nptl/pthread_rwlock_tryrdlock.c (pthread_rwlock_tryrdlock): Adapt.
  * nptl/pthread_rwlock_unlock.c (pthread_rwlock_unlock): Adapt.
  * nptl/pthread_rwlock_wrlock.c (__pthread_rwlock_wrlock_slow): Remove.
  (__pthread_rwlock_wrlock): Adapt.
  * nptl/tst-rwlock10.c: Adapt.
  * nptl/tst-rwlock11.c: Adapt.
  * nptl/tst-rwlock17.c: New file.
  * nptl/tst-rwlock18.c: New file.
  * nptl/tst-rwlock19.c: New file.
  * nptl/tst-rwlock2b.c: New file.
  * nptl/tst-rwlock8.c: Adapt.
  * nptl/tst-rwlock9.c: Adapt.
  * sysdeps/aarch64/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
  * sysdeps/arm/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
  * sysdeps/hppa/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
  * sysdeps/ia64/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
  * sysdeps/m68k/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
  * sysdeps/microblaze/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
  * sysdeps/mips/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
  * sysdeps/nios2/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
  * sysdeps/s390/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
  * sysdeps/sh/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
  * sysdeps/sparc/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
  * sysdeps/tile/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
  * sysdeps/unix/sysv/linux/alpha/bits/pthreadtypes.h (pthread_rwlock_t):
  Adapt.
  * sysdeps/unix/sysv/linux/powerpc/bits/pthreadtypes.h (pthread_rwlock_t):
  Adapt.
  * sysdeps/x86/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
  * nptl/nptl-printers.py (): Adapt.
  * nptl/nptl_lock_constants.pysym: Adapt.
  * nptl/test-rwlock-printers.py: Adapt.
  * nptl/test-rwlockattr-printers.c: Adapt.
  * nptl/test-rwlockattr-printers.py: Adapt.
* Update MicroBlaze localplt.data. (Joseph Myers, 2017-01-09; 1 file changed, -1/+2 lines)

  This patch updates the MicroBlaze localplt.data based on the results of a
  build with build-many-glibcs.py.  This is simply an empirical update; quite
  possibly the port could be optimized to remove more local PLT entry usage.

  Tested (compilation tests) with build-many-glibcs.py.

  * sysdeps/unix/sysv/linux/microblaze/localplt.data (__pread64): Add libc.so
  PLT entry.
  (__tls_get_addr): Make ld.so PLT entry optional.
* Fix MIPS n64 readahead (bug 21026). (Joseph Myers, 2017-01-05; 1 file changed, -0/+2 lines)

  As noted in bug 21026, MIPS n64 uses an incorrect implementation of
  readahead intended for 32-bit systems.  This patch adds a syscalls.list
  entry to fix this.  An updated version of the consolidation patch
  <https://sourceware.org/ml/libc-alpha/2016-09/msg00527.html> could remove
  this syscalls.list entry again.

  Tested with compilation (only) for mips64; the nature of the syscall doesn't
  allow for a glibc test to detect this issue.

  [BZ #21026]
  * sysdeps/unix/sysv/linux/mips/mips64/n64/syscalls.list (readahead): New
  syscall entry.
* Move wrappers to libm-compat-calls-auto (Gabriel F. T. Gomes, 2017-01-04; 123 files changed, -61/+59 lines)

  This commit moves one step towards the deprecation of wrappers that use
  _LIB_VERSION / matherr / __kernel_standard functionality, by adding the
  suffix '_compat' to their filenames and adjusting Makefiles and #includes
  accordingly.

  New template wrappers that do not use such functionality will be added by
  future patches and will be first used by the float128 wrappers.
* Fix MicroBlaze bits/setjmp.h for C++. (Joseph Myers, 2017-01-04; 1 file changed, -1/+1 lines)

  For MicroBlaze, setjmp/check-installed-headers-cxx fails with:

      ../setjmp/setjmp.h:34:8: error: '__jmp_buf_tag' has a field
      '__jmp_buf_tag::__jmpbuf' whose type depends on the type '<unnamed
      struct>' which has no linkage [-Werror=subobject-linkage]

  This patch fixes this in the same way as for some other architectures: the
  struct used for the internal __jmp_buf type is given the tag
  __jmp_buf_internal_tag.

  Tested (compilation tests) with build-many-glibcs.py.

  * sysdeps/microblaze/bits/setjmp.h (__jmp_buf): Give struct tag
  __jmp_buf_internal_tag.
* Make MIPS soft-fp preserve NaN payloads for NAN2008. (Joseph Myers, 2017-01-04; 2 files changed, -2/+26 lines)

  This corresponds to a patch applied to libgcc.  In glibc it doesn't actually
  affect much (only fma, I think).

  The MIPS sfp-machine.h files have an _FP_CHOOSENAN implementation which
  emulates hardware semantics of not preserving signaling NaN payloads for an
  operation with two NaN arguments (although that doesn't suffice to avoid
  sNaN payload preservation in any case with just one NaN argument).  However,
  those are only hardware semantics in the legacy NaN case; in the NAN2008
  case, the architecture documentation says hardware preserves payloads in
  such cases.  Furthermore, this implementation assumes legacy NaN semantics,
  so in the NAN2008 case the implementation actually has the effect of
  preserving sNaN payloads but not preserving qNaN payloads, when both should
  be preserved.  This patch fixes the code just to copy from the first
  argument (a sketch of that convention follows this entry).

  Tested for mips64 soft-float.

  * sysdeps/mips/mips32/sfp-machine.h (_FP_CHOOSENAN): Always preserve NaN
  payload if [__mips_nan2008].
  * sysdeps/mips/mips64/sfp-machine.h (_FP_CHOOSENAN): Likewise.
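  For illustration, a hedged sketch of what "copy from the first argument"
  means in soft-fp's _FP_CHOOSENAN convention.  The actual MIPS macro keeps
  the old behaviour for legacy NaN and only switches to this under
  __mips_nan2008, and the helper names below follow the usual soft-fp pattern
  rather than being copied from the tree.

      /* Sketch of a payload-preserving _FP_CHOOSENAN: the result NaN simply
         takes its sign and fraction (payload) from the first operand X.  */
      #define _FP_CHOOSENAN(fs, wc, R, X, Y, OP)   \
        do                                         \
          {                                        \
            R##_s = X##_s;                         \
            _FP_FRAC_COPY_##wc (R, X);             \
            R##_c = FP_CLS_NAN;                    \
          }                                        \
        while (0)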
* Fix MicroBlaze __backtrace get_frame_size namespace (bug 21022). (Joseph Myers, 2017-01-04; 1 file changed, -1/+1 lines)

  Many linknamespace tests fail for MicroBlaze because __backtrace (as brought
  in by libc_fatal.c) uses an inline function get_frame_size which is not
  declared static.  This patch fixes it to be declared static.

  Tested (compilation tests) with build-many-glibcs.py.

  [BZ #21022]
  * sysdeps/microblaze/backtrace.c (get_frame_size): Make static.
* Update i386 libm-test-ulps. (Joseph Myers, 2017-01-03; 1 file changed, -4/+4 lines)

  When testing changes to i386 libm functions (that are shadowed for i686
  builds by i686 versions) recently, I saw that the plain i386 libm-test-ulps
  (as opposed to the i686 multiarch version) needed updating for tests that
  had been added since it was last updated.  This patch updates it
  accordingly.

  * sysdeps/i386/fpu/libm-test-ulps: Update.
* Remove duplicate strcat implementations (Adhemerval Zanella, 2017-01-03; 5 files changed, -62/+3 lines)

  Since commit 6e46de42fe16, the default strcat implementation is essentially
  the same as the specialized ia64 and powerpc ones.  This patch removes the
  redundant implementations and adjusts the powerpc64 ifunc code to use the
  default one.

  Checked on powerpc32-linux-gnu (default and power4) and ia64-linux builds,
  and on powerpc64le-linux-gnu.

  * sysdeps/ia64/strcat.c: Remove file.
  * sysdeps/powerpc/strcat.c: Likewise.
  * sysdeps/powerpc/powerpc64/multiarch/strcat-power7.c: Use default C
  implementation.
  * sysdeps/powerpc/powerpc64/multiarch/strcat-power8.c: Likewise.
  * sysdeps/powerpc/powerpc64/multiarch/strcat-ppc64.c: Likewise.
* powerpc: Fix write-after-destroy in lock elision [BZ #20822] (Tulio Magno Quites Machado Filho, 2017-01-03; 3 files changed, -12/+20 lines)

  The update of *adapt_count after the release of the lock causes a race
  condition when thread A unlocks, thread B continues and destroys the mutex,
  and thread A writes to *adapt_count.
* Fix x86 strncat optimized implementation for large sizes (Adhemerval Zanella, 2017-01-03; 2 files changed, -0/+4 lines)

  Similar to BZ#19387, BZ#21014, and BZ#20971, both x86 sse2 optimized strncat
  assembly implementations do not handle the size overflow correctly.  The
  x86_64 one is in fact an issue with strcpy-sse2-unaligned, but it is also
  triggered by the optimized strncat implementation.

  This patch uses a strategy similar to the one used in 3daef2c8ee4df2, where
  saturated math is used for the overflow case.

  Checked on x86_64-linux-gnu and i686-linux-gnu.  It fixes BZ #19390.

  [BZ #19390]
  * string/test-strncat.c (test_main): Add tests with SIZE_MAX as maximum
  string size.
  * sysdeps/i386/i686/multiarch/strcat-sse2.S (STRCAT): Avoid overflow in
  pointer addition.
  * sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S (STRCPY): Likewise.
* Fix MIPS n32 lseek, lseek64 (bug 21019). (Joseph Myers, 2017-01-02; 2 files changed, -0/+40 lines)

  The lseek consolidation broke lseek64 for MIPS n32, resulting in io/test-lfs
  failing with an incorrect return from ftello64.  This configuration uses the
  lseek syscall with a 64-bit return value; as the C syscall macros return
  long, they cannot be used in this case and so an assembly implementation is
  needed; accordingly, this patch adds lseek64 back to syscalls.list for this
  configuration.

  lseek was also broken, truncating the result without checking for overflow.
  lseek however was already broken before the consolidation; it aliased
  lseek64 so would return an out-of-range value, resulting in architecturally
  undefined behavior in the caller if it tried to use a non-sign-extended
  value with a 32-bit instruction.  This patch adds a custom lseek
  implementation in C for n32, which calls __lseek64 to get the 64-bit value
  then checks for overflow (a sketch of the idea follows this entry).

  Because the prior lseek breakage did not show in test results, and the
  lseek64 breakage showed only indirectly through tests of ftello64, test
  coverage was clearly inadequate.  This patch extends io/test-lfs.c to test
  the lseek64 return value (at a point where it has already seeked over 2GB
  into a file), and then to test the lseek return value (with the latter's
  expectations depending on whether off_t is smaller than off64_t).

  Tested for mips64 n32.  Also tested test-lfs for x86_64 and x86, where as
  expected it passes.

  [BZ #21019]
  * sysdeps/unix/sysv/linux/mips/mips64/n32/syscalls.list (lseek64): New
  syscall entry.
  * sysdeps/unix/sysv/linux/mips/mips64/n32/lseek.c: New file.
  * io/test-lfs.c (do_test): Test offset returned from lseek64 and lseek.
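  A hedged, illustrative sketch of the overflow check described above; the
  real sysdeps/unix/sysv/linux/mips/mips64/n32/lseek.c differs in naming and
  error-handling details.  The idea: get the 64-bit offset, then refuse to
  return a value that does not fit in the 32-bit off_t.

      #define _GNU_SOURCE
      #include <errno.h>
      #include <sys/types.h>

      /* Internal 64-bit lseek, as referenced in the commit message.  */
      extern off64_t __lseek64 (int fd, off64_t offset, int whence);

      /* Hypothetical 32-bit wrapper showing the overflow check.  */
      off_t
      lseek_checked (int fd, off_t offset, int whence)
      {
        off64_t res = __lseek64 (fd, offset, whence);
        if (res != (off_t) res)
          {
            /* Result does not fit in off_t: report overflow instead of
               silently truncating to an out-of-range 32-bit value.  */
            errno = EOVERFLOW;
            return (off_t) -1;
          }
        return (off_t) res;
      }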
* Correct MIPS math-tests.h condition for sNaN payload preservation. (Joseph Myers, 2017-01-02; 1 file changed, -1/+1 lines)

  Testing for MIPS soft float shows that the issue with NaN payload
  preservation applies to soft float as well as hard float: the sfp-machine.h
  emulates hardware non-preservation semantics, although only for the case of
  two NaN arguments.  This patch duly changes the MIPS math-tests.h to expect
  such non-preservation for soft float as well as hard float.

  The issue in the NAN2008 case for which I posted
  <https://gcc.gnu.org/ml/gcc-patches/2017-01/msg00034.html>, of sNaN payloads
  being preserved but qNaN payloads not being preserved, is not currently an
  issue for glibc tests because we don't have any tests that check for qNaN
  payloads being preserved by arithmetic, so a simple __mips_nan2008
  conditional suffices without needing compiler version checks in the
  __mips_nan2008 case.

  Tested for mips64 soft float.

  * sysdeps/mips/math-tests.h (SNAN_TESTS_PRESERVE_PAYLOAD): Do not condition
  on [__mips_hard_float].