path: root/sysdeps
* test-errno-linux: quotactl can fail with EPERM in containers  (Florian Weimer, 2017-11-02; 1 file, -38/+38)
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

* x86: Add sysdeps/x86/sysdep.h  (H.J. Lu, 2017-11-01; 3 files, -85/+71)
Add a new header file, sysdeps/x86/sysdep.h, for common assembly code macros
between i386 and x86-64.

Tested on i686 and x86-64.  There are no differences in outputs of
"readelf -a" and "objdump -dw" on all glibc shared objects before and after
the patch.

* sysdeps/i386/sysdep.h: Include <sysdeps/x86/sysdep.h> instead of <sysdeps/generic/sysdep.h>. (ALIGNARG): Removed. (ASM_SIZE_DIRECTIVE): Likewise. (ENTRY): Likewise. (END): Likewise. (ENTRY_CHK): Likewise. (END_CHK): Likewise. (syscall_error): Likewise. (mcount): Likewise. (PSEUDO_END): Likewise. (L): Likewise. (atom_text_section): Likewise.
* sysdeps/x86/sysdep.h: New file.
* sysdeps/x86_64/sysdep.h: Include <sysdeps/x86/sysdep.h> instead of <sysdeps/generic/sysdep.h>. (ALIGNARG): Removed. (ASM_SIZE_DIRECTIVE): Likewise. (ENTRY): Likewise. (END): Likewise. (ENTRY_CHK): Likewise. (END_CHK): Likewise. (syscall_error): Likewise. (mcount): Likewise. (PSEUDO_END): Likewise. (L): Likewise. (atom_text_section): Likewise.

* Remove useless #ifdefs from Linux sig*.c syscalls  (Yury Norov, 2017-10-31; 4 files, -32/+4)
sigprocmask.c, sigtimedwait.c, sigwait.c and sigwaitinfo.c in
sysdeps/unix/sysv/linux include nptl-signals.h via nptl/pthreadP.h, so
SIGCANCEL and SIGSETXID are defined unconditionally.  Later in the code,
however, there are checks for whether those symbols are defined, which is
useless.  This patch removes the useless checks.

Checked on x86_64-linux-gnu.

* sysdeps/unix/sysv/linux/sigprocmask.c: Remove useless #ifdefs.
* sysdeps/unix/sysv/linux/sigtimedwait.c: Likewise.
* sysdeps/unix/sysv/linux/sigwait.c: Likewise.
* sysdeps/unix/sysv/linux/sigwaitinfo.c: Likewise.

Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Reviewed-by: Andreas Schwab <schwab@suse.de>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

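A minimal, hypothetical illustration of the pattern being removed (not the
actual glibc code): once SIGCANCEL and SIGSETXID are known to be defined
unconditionally, preprocessor guards around their use are dead weight.

    #include <signal.h>
    /* Assume SIGCANCEL/SIGSETXID are provided unconditionally, as
       nptl-signals.h does once nptl/pthreadP.h has been included.  */

    static void
    strip_internal_signals (sigset_t *set)
    {
    #ifdef SIGCANCEL        /* Always true here, so the guard is useless.  */
      sigdelset (set, SIGCANCEL);
    #endif
    #ifdef SIGSETXID        /* Likewise.  */
      sigdelset (set, SIGSETXID);
    #endif
    }
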
* Consolidate Linux sigpending() implementation  (Yury Norov, 2017-10-31; 4 files, -141/+0)
ia64, s390-64, sparc64 and x86_64 host their own implementations of
sigpending() in corresponding files, but apart from a few comments they are
identical to the generic Linux file.  This patch removes those files, so the
implementation of sigpending() is taken from sysdeps/unix/sysv/linux for all
ports.

Build-tested on x86_64.

* sysdeps/unix/sysv/linux/ia64/sigpending.c: Remove file.
* sysdeps/unix/sysv/linux/s390/s390-64/sigpending.c: Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc64/sigpending.c: Likewise.
* sysdeps/unix/sysv/linux/x86_64/sigpending.c: Likewise.

Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

* [PowerPC64] sysdep.h doesn't need to be included in multiarch files  (Alan Modra, 2017-10-31; 96 files, -193/+28)
When the .c/.S file neither uses nor modifies macros defined in sysdep.h
there is no point to #include it.  The same goes for math_ldbl_opt.h except
that it includes shlib-compat.h, and if compat_symbol is redefined we need to
include shlib-compat.h first.

* sysdeps/powerpc/powerpc64/fpu/multiarch/e_expf-power8.S: Don't include sysdep.h. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_ceilf-power5+.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_ceilf-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_cosf-power8.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_cosf-ppc64.c: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_finite-power7.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_finite-power8.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_floor-power5+.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_floor-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_roundf-power5+.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_roundf-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_sinf-power8.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_sinf-ppc64.c: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_truncf-power5+.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_truncf-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memchr-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memchr-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memcmp-power4.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memcmp-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memcmp-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memcpy-a2.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memcpy-cell.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memcpy-power4.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memcpy-power6.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memcpy-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memcpy-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memmove-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/mempcpy-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memrchr-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memrchr-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memset-power4.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memset-power6.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memset-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/memset-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/rawmemchr-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/stpcpy-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/stpncpy-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/stpncpy-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strcasecmp-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strcasecmp-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strcasecmp_l-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strcasestr-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strchr-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strchr-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strchr-ppc64.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strchrnul-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strchrnul-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strcmp-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strcmp-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strcmp-power9.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strcmp-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strcpy-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strcspn-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strlen-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strlen-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strlen-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strncase-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strncmp-power4.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strncmp-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strncmp-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strncmp-power9.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strncmp-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strncpy-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strncpy-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strnlen-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strnlen-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strrchr-power7.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strrchr-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strspn-power8.S: Likewise. * sysdeps/powerpc/powerpc64/multiarch/strstr-power7.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_floorf-ppc64.S: Don't include sysdep.h and math_ldbl_opt.h. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_ceil-power5+.S: Don't include sysdep.h and math_ldbl_opt.h. Include shlib-compat.h. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_ceil-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_copysign-power6.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_copysign-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_floorf-power5+.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_isinf-power7.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_isinf-power8.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan-power5.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan-power6.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan-power6x.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan-power7.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan-power8.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_llrint-power6x.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_llrint-power8.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_llrint-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_llround-power5+.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_llround-power6x.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_llround-power8.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_llround-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_llroundf-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_round-power5+.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_round-ppc64.S: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_trunc-power5+.S: Likewise. 
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_trunc-ppc64.S: Likewise.
* [PowerPC64] strncase_l-power7.c should use strncase_l.c  (Alan Modra, 2017-10-31; 1 file, -2/+4)
This is another one where we'll be wanting the base symbols for powerpc64le
rather than just a power7 variant.

* sysdeps/powerpc/powerpc64/multiarch/strncase_l-power7.c: Include string/strncase_l.c, not string/strncase.c. (USE_IN_EXTENDED_LOCALE_MODEL): Don't define. (libc_hidden_def): Redefine.

* [PowerPC64] Tidy strcasecmp_l-power7.S symbols  (Alan Modra, 2017-10-31; 1 file, -1/+3)
The routine being assembled here is strcasecmp_l, so ask for that via the
__STRCMP and STRCMP defines.  That change means tweaking the power7 override.
Needed for later powerpc64le changes where we want the base symbols, not just
a power7 variant.

* sysdeps/powerpc/powerpc64/multiarch/strcasecmp_l-power7.S: (__STRCMP, STRCMP, __strcasecmp_l): Define. (__strcasecmp): Don't define.

* [PowerPC64] Wrap str{,n}cmp-power{8,9}.S in IS_IN(libc)  (Alan Modra, 2017-10-31; 4 files, -0/+8)
These functions aren't used in ld.so at the moment since we don't have strcmp
or strncmp ifuncs for them there.  Remove the ld.so bloat.

* sysdeps/powerpc/powerpc64/multiarch/strcmp-power8.S: Wrap in IS_IN (libc).
* sysdeps/powerpc/powerpc64/multiarch/strcmp-power9.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strncmp-power8.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strncmp-power9.S: Likewise.

* [PowerPC64] Remove duplicate define in stpncpy-power8.S  (Alan Modra, 2017-10-31; 1 file, -2/+0)
USE_AS_STPNCPY is defined by sysdeps/powerpc/powerpc64/power8/stpncpy.S,
included by this file.

* sysdeps/powerpc/powerpc64/multiarch/stpncpy-power8.S: Don't define USE_AS_STPNCPY.

* [PowerPC64] Don't define __GI_ variant of isnan for static lib  (Alan Modra, 2017-10-31; 1 file, -3/+5)
It seems to me that libc.a should not contain any of the __GI_ symbols, and
certainly --enable-multi-arch ought to not add to the list.  At the end of
this patch series we have the following in both --enable-multi-arch and
--disable-multi-arch libc.a:

    0000000000000000 T __GI___readdir64
    0000000000000000 T __GI___fxstatat64
    0000000000000000 T __GI_getrlimit
    0000000000000000 T __GI___getrlimit

* sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan-ppc64.S (hidden_def): Redefine only when SHARED.

* sysdeps/x86/libc-start.c: Add /* !SHARED */  (H.J. Lu, 2017-10-30; 1 file, -1/+1)
* sysdeps/x86/libc-start.c: Add /* !SHARED */.

* Reformat sysdeps/x86/libc-start.c  (H.J. Lu, 2017-10-30; 1 file, -3/+3)
* sysdeps/x86/libc-start.c: Reformat.

* i586: Use conditional branches in strcpy.S [BZ #22353]  (H.J. Lu, 2017-10-30; 1 file, -17/+11)
i586 strcpy.S used a clever trick with LEA to implement a jump table:

    /* ECX has the last 2 bits of the address of source - 1.  */
    andl $3, %ecx
    call 2f
2:  popl %edx
    /* 0xb is the distance between 2: and 1:.  */
    leal 0xb(%edx,%ecx,8), %ecx
    jmp *%ecx
    .align 8
1:  /* ECX == 0 */
    orb (%esi), %al
    jz L(end)
    stosb
    xorl %eax, %eax
    incl %esi
    /* ECX == 1 */
    orb (%esi), %al
    jz L(end)
    stosb
    xorl %eax, %eax
    incl %esi
    /* ECX == 2 */
    orb (%esi), %al
    jz L(end)
    stosb
    xorl %eax, %eax
    incl %esi
    /* ECX == 3 */
L(1):
    movl (%esi), %ecx
    leal 4(%esi),%esi

This fails if there are instruction length changes before L(1):.  This patch
replaces it with conditional branches:

    cmpb $2, %cl
    je L(Src2)
    ja L(Src3)
    cmpb $1, %cl
    je L(Src1)
L(Src0):

which have similar performance and work with any instruction lengths.

Tested on i586 and i686 with and without --disable-multi-arch.

[BZ #22353]
* sysdeps/i386/i586/strcpy.S (STRCPY): Use conditional branches. (1): Renamed to ... (L(Src0)): This. (L(Src1)): New. (L(Src2)): Likewise. (L(1)): Renamed to ... (L(Src3)): This.

* i386: Regenerate libm-test-ulps for gcc 7  (H.J. Lu, 2017-10-27; 1 file, -6/+6)
Regenerate libm-test-ulps for gcc 7 with "-m32 -O2 -march=i586".

* sysdeps/i386/fpu/libm-test-ulps: Regenerated for GCC 7 with "-O2 -march=i586".

* powerpc: Replace lxvd2x/stxvd2x with lvx/stvx in P7's memcpy/memmove  (Rajalakshmi Srinivasaraghavan, 2017-10-25; 2 files, -96/+96)
POWER9 DD2.1 and earlier has an issue where some cache inhibited vector loads
trap to the kernel, causing a performance degradation.  To handle this in
memcpy and memmove, lvx/stvx is used for aligned addresses instead of
lxvd2x/stxvd2x.

Reference: https://patchwork.ozlabs.org/patch/814059/

* sysdeps/powerpc/powerpc64/power7/memcpy.S: Replace lxvd2x/stxvd2x with lvx/stvx.
* sysdeps/powerpc/powerpc64/power7/memmove.S: Likewise.

Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.vnet.ibm.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

* Replace "if if " with "if " in commentsH.J. Lu2017-10-255-6/+6
| | | | | | | | | | | | | * include/alloc_buffer.h: Replace "if if " with "if " in comments. * sysdeps/mips/memcpy.S: Likkewise. * sysdeps/mips/memset.S: Likewise. * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core_avx512.S: Likewise. * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core_sse4.S: Likewise. * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core_avx2.S: Likewise.
* Update x86 fix-fp-int-compare-invalid.h for GCC 8.  (Joseph Myers, 2017-10-24; 1 file, -2/+6)
The glibc implementation of iseqsig relies on ordered comparison operators
raising the "invalid" exception for quiet NaN operands, with a workaround on
platforms where a GCC bug means that exception is not raised.  For x86, that
bug has now been fixed for GCC 8, so this patch disables the workaround in
that case.  If and when the corresponding bugs for powerpc and s390 are
fixed, the headers for those platforms should of course be updated similarly.

Tested for x86_64 and x86, including with GCC mainline.  Note that other
failures appear with GCC mainline because of spurious use of ordered
comparison instructions for unordered operations
<https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82692>.

* sysdeps/x86/fpu/fix-fp-int-compare-invalid.h (FIX_COMPARE_INVALID): Define to 0 if [__GNUC_PREREQ (8, 0)].

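A minimal sketch (not part of the patch) of the property this header guards:
iseqsig must raise FE_INVALID for quiet NaN operands, while the plain ==
operator must not.  Link with -lm.

    #define _GNU_SOURCE 1
    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    int
    main (void)
    {
      feclearexcept (FE_ALL_EXCEPT);
      int eq = iseqsig (__builtin_nan (""), 1.0);   /* Signaling comparison.  */
      printf ("iseqsig = %d, FE_INVALID raised = %d\n",
              eq, fetestexcept (FE_INVALID) != 0);
      return 0;
    }
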
* posix: Do not use WNOHANG in waitpid call for Linux posix_spawn  (Adhemerval Zanella, 2017-10-23; 1 file, -5/+5)
As shown in some buildbot issues on aarch64 and powerpc, calling clone
(VFORK) and waitpid (WNOHANG) does not guarantee the child is ready to be
collected.  This patch changes the flags back to 0, as before the
fe05e1cb6d64 fix.

This change can lead to the scenario 4.3 described in that commit, where the
waitpid call can hang indefinitely.  However this is a very unlikely and also
undefined situation, where the caller tries to terminate a pid before
posix_spawn returns and the pid-reuse race is triggered.  I don't see how to
correctly handle this specific situation within posix_spawn.

Checked on x86_64-linux-gnu, aarch64-linux-gnu and powerpc64-linux-gnu.

* sysdeps/unix/sysv/linux/spawni.c (__spawnix): Use 0 instead of WNOHANG in waitpid call.

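For reference, a hedged standalone sketch (not the posix_spawn internals) of
the behavioural difference: a flags value of 0 blocks until the child can be
reaped, whereas WNOHANG may return 0 before the child is ready to be
collected.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int
    main (void)
    {
      pid_t pid = fork ();
      if (pid == 0)
        _exit (0);                      /* Child exits immediately.  */

      int status;
      /* waitpid (pid, &status, WNOHANG) could return 0 here if the child
         has not terminated yet; 0 as the flags waits until it has.  */
      pid_t r = waitpid (pid, &status, 0);
      printf ("collected pid %d\n", (int) r);
      return 0;
    }
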
* aarch64: Add missing math Makefile for recent commit  (Szabolcs Nagy, 2017-10-23; 1 file, -0/+8)
Without -fno-math-errno, the builtins just do a call instead of inlining a
single instruction.

* aarch64: Implement math acceleration via builtins  (Michael Collison, 2017-10-23; 30 files, -288/+250)
This patch converts asm statements into builtins for AArch64.  As an example,
for the file sysdeps/aarch64/fpu/s_ceil.c we convert the function from

    double
    __ceil (double x)
    {
      double result;
      asm ("frintp\t%d0, %d1" : "=w" (result) : "w" (x) );
      return result;
    }

into

    double
    __ceil (double x)
    {
      return __builtin_ceil (x);
    }

Tested on aarch64-linux-gnu with gcc-4.9.4 and gcc-6.

* sysdeps/aarch64/fpu/e_sqrt.c (ieee754_sqrt): Replace asm statements with __builtin_sqrt. * sysdeps/aarch64/fpu/e_sqrtf.c (ieee754_sqrtf): Replace asm statements with __builtin_sqrtf. * sysdeps/aarch64/fpu/s_ceil.c (__ceil): Replace asm statements with __builtin_ceil. * sysdeps/aarch64/fpu/s_ceilf.c (__ceilf): Replace asm statements with __builtin_ceilf. * sysdeps/aarch64/fpu/s_floor.c (__floor): Replace asm statements with __builtin_floor. * sysdeps/aarch64/fpu/s_floorf.c (__floorf): Replace asm statements with __builtin_floorf. * sysdeps/aarch64/fpu/s_fma.c (__fma): Replace asm statements with __builtin_fma. * sysdeps/aarch64/fpu/s_fmaf.c (__fmaf): Replace asm statements with __builtin_fmaf. * sysdeps/aarch64/fpu/s_fmax.c (__fmax): Replace asm statements with __builtin_fmax. * sysdeps/aarch64/fpu/s_fmaxf.c (__fmaxf): Replace asm statements with __builtin_fmaxf. * sysdeps/aarch64/fpu/s_fmin.c (__fmin): Replace asm statements with __builtin_fmin. * sysdeps/aarch64/fpu/s_fminf.c (__fminf): Replace asm statements with __builtin_fminf. * sysdeps/aarch64/fpu/s_frint.c: Delete file. * sysdeps/aarch64/fpu/s_frintf.c: Delete file. * sysdeps/aarch64/fpu/s_llrint.c (__llrint): Replace asm statements with builtin_rint and conversion to int. * sysdeps/aarch64/fpu/s_llrintf.c (__llrintf): Likewise. * sysdeps/aarch64/fpu/s_llround.c (__llround): Replace asm statements with builtin_llround. * sysdeps/aarch64/fpu/s_llroundf.c (__llroundf): Likewise. * sysdeps/aarch64/fpu/s_lrint.c (__lrint): Replace asm statements with builtin_rint and conversion to long int. * sysdeps/aarch64/fpu/s_lrintf.c (__lrintf): Likewise. * sysdeps/aarch64/fpu/s_lround.c (__lround): Replace asm statements with builtin_lround. * sysdeps/aarch64/fpu/s_lroundf.c (__lroundf): Replace asm statements with builtin_lroundf. * sysdeps/aarch64/fpu/s_nearbyint.c (__nearbyint): Replace asm statements with __builtin_nearbyint. * sysdeps/aarch64/fpu/s_nearbyintf.c (__nearbyintf): Replace asm statements with __builtin_nearbyintf. * sysdeps/aarch64/fpu/s_rint.c (__rint): Replace asm statements with __builtin_rint. * sysdeps/aarch64/fpu/s_rintf.c (__rintf): Replace asm statements with __builtin_rintf. * sysdeps/aarch64/fpu/s_round.c (__round): Replace asm statements with __builtin_round. * sysdeps/aarch64/fpu/s_roundf.c (__roundf): Replace asm statements with __builtin_roundf. * sysdeps/aarch64/fpu/s_trunc.c (__trunc): Replace asm statements with __builtin_trunc. * sysdeps/aarch64/fpu/s_truncf.c (__truncf): Replace asm statements with __builtin_truncf. * sysdeps/aarch64/fpu/Makefile: Build e_sqrt[f].c with -fno-math-errno.

* PowerPC64 power8 strncpy cfi fixes  (Alan Modra, 2017-10-23; 1 file, -13/+14)
cfi info for the stack adjust needs to be on the insn doing the adjust.  cfi
describing register saves can be anywhere after the save insn but before the
reg is altered.  Fewer locations with cfi result in smaller cfi programs and
possibly slightly faster exception handling.  Thus the LR cfi_offset move.

The idea behind adjusting sp after restoring regs is to break a register
dependency chain, in this case not using r1 immediately after it is modified.

The missing LR cfi_restore meant that code after the blr, unaligned_lt_16 and
other labels, would have cfi that said LR was at cfa+16, but that code is
reached without LR being saved.

* sysdeps/powerpc/powerpc64/power8/strncpy.S: Move LR cfi. Adjust stack after restoring regs. Add missing LR cfi_restore.

Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.vnet.ibm.com>

* PowerPC64 power7 strncpy stack handling and cfi  (Alan Modra, 2017-10-23; 1 file, -10/+13)
This patch moves the frame setup and teardown to immediately around the
single memset call, as has been done for power8.  I've also decreased
FRAMESIZE to that needed to save the two callee-saved registers used.  Plus
added cfi.

* sysdeps/powerpc/powerpc64/power7/strncpy.S: Decrease FRAMESIZE. Move LR save and frame setup/teardown and LR restore to immediately around memset call. Provide cfi.

Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.vnet.ibm.com>

* i386: Replace assembly versions of e_powf with generic e_powf.c  (H.J. Lu, 2017-10-22; 8 files, -401/+66)
This patch replaces i386 assembly versions of e_powf with generic e_powf.c.

For workload-spec2017.wrf, on Nehalem, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  230.855    78.3358    194%
    latency                231.685    94.1259    146%

On Skylake, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  239.858    47.4713    405%
    latency                247.57     93.8798    163%

On IvyBridge with --disable-multi-arch, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  269.078    63.3758    324%
    latency                271.473    102.091    165%

* sysdeps/i386/fpu/e_powf.S: Removed.
* sysdeps/i386/fpu/e_powf_log2_data.c: Likewise.
* sysdeps/i386/fpu/w_powf.c: Likewise.
* sysdeps/i386/fpu/libm-test-ulps: Updated for generic e_powf.c.
* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise.
* sysdeps/i386/i686/fpu/multiarch/Makefile (libm-sysdep_routines): Add e_powf-sse2. (CFLAGS-e_powf-sse2.c): New.
* sysdeps/i386/i686/fpu/multiarch/e_powf-sse2.c: New file.
* sysdeps/i386/i686/fpu/multiarch/e_powf.c: Likewise.

* i386: Replace assembly versions of e_log2f with generic e_log2f.c  (H.J. Lu, 2017-10-22; 8 files, -72/+49)
This patch replaces i386 assembly versions of e_log2f with generic e_log2f.c.

For workload-spec2017.wrf, on Nehalem, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  92.3845    30.8752    199%
    latency                112.855    54.8645    105%

On Skylake, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  98.7488    22.7507    334%
    latency                118.01     51.6083    128%

On IvyBridge with --disable-multi-arch, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  106.635    28.8596    269%
    latency                129.888    56.9187    128%

* sysdeps/i386/fpu/e_log2f.S: Removed.
* sysdeps/i386/fpu/e_log2f_data.c: Likewise.
* sysdeps/i386/fpu/w_log2f.c: Likewise.
* sysdeps/i386/fpu/libm-test-ulps: Updated for generic e_log2f.c.
* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise.
* sysdeps/i386/i686/fpu/multiarch/Makefile (libm-sysdep_routines): Add e_log2f-sse2. (CFLAGS-e_log2f-sse2.c): New.
* sysdeps/i386/i686/fpu/multiarch/e_log2f-sse2.c: New file.
* sysdeps/i386/i686/fpu/multiarch/e_log2f.c: Likewise.

* x86-64: Add powf with FMA  (H.J. Lu, 2017-10-22; 3 files, -1/+49)
For workload-spec2017.wrf, on Skylake, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  35.4713    27.3842    29%
    latency                82.4537    66.3175    24%

* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines): Add e_powf-fma. (CFLAGS-e_powf-fma.c): New.
* sysdeps/x86_64/fpu/multiarch/e_powf-fma.c: New file.
* sysdeps/x86_64/fpu/multiarch/e_powf.c: Likewise.

* x86-64: Add log2f with FMA  (H.J. Lu, 2017-10-22; 3 files, -1/+45)
For workload-spec2017.wrf, on Skylake, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  16.5937    14.0789    17%
    latency                41.7755    35.3586    18%

* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines): Add e_log2f-fma. (CFLAGS-e_log2f-fma.c): New.
* sysdeps/x86_64/fpu/multiarch/e_log2f-fma.c: New file.
* sysdeps/x86_64/fpu/multiarch/e_log2f.c: Likewise.

* x86-64: Add logf with FMA  (H.J. Lu, 2017-10-22; 3 files, -1/+45)
For workload-spec2017.wrf, on Skylake, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  16.1534    13.8874    16%
    latency                41.9642    34.3072    22%

* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines): Add e_logf-fma. (CFLAGS-e_logf-fma.c): New.
* sysdeps/x86_64/fpu/multiarch/e_logf-fma.c: New file.
* sysdeps/x86_64/fpu/multiarch/e_logf.c: Likewise.

* i386: Replace assembly versions of e_logf with generic e_logf.c  (H.J. Lu, 2017-10-22; 9 files, -143/+62)
This patch replaces i386 assembly versions of e_logf with generic e_logf.c.

For workload-spec2017.wrf, on Nehalem, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  73.3865    40.0454    83%
    latency                90.0985    54.4479    65%

On Skylake, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  75.1384    22.1452    239%
    latency                91.9441    50.7925    81%

On IvyBridge with --disable-multi-arch, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  84.5575    28.7879    193%
    latency                103.971    57.5231    80%

* sysdeps/i386/fpu/e_logf.S: Removed.
* sysdeps/i386/fpu/e_logf_data.c: Likewise.
* sysdeps/i386/fpu/w_logf.c: Likewise.
* sysdeps/i386/i686/fpu/e_logf.S: Likewise.
* sysdeps/i386/fpu/libm-test-ulps: Updated for generic e_logf.c.
* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise.
* sysdeps/i386/i686/fpu/multiarch/Makefile (libm-sysdep_routines): Add e_logf-sse2. (CFLAGS-e_logf-sse2.c): New.
* sysdeps/i386/i686/fpu/multiarch/e_logf-sse2.c: New file.
* sysdeps/i386/i686/fpu/multiarch/e_logf.c: Likewise.

* i386: Replace assembly versions of e_exp2f with generic e_exp2f.c  (H.J. Lu, 2017-10-22; 7 files, -54/+46)
This patch replaces i386 assembly versions of e_exp2f with generic e_exp2f.c.

For workload-spec2017.wrf, on Nehalem, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  112.996    40.0454    182%
    latency                126.581    54.4479    132%

On Skylake, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  113.14     39.447     186%
    latency                136.068    55.684     144%

On IvyBridge with --disable-multi-arch, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  132.521    40.3759    228%
    latency                145.791    58.4587    149%

* sysdeps/i386/fpu/e_exp2f.S: Removed.
* sysdeps/i386/fpu/w_exp2f.c: Likewise.
* sysdeps/i386/fpu/libm-test-ulps: Updated for generic e_exp2f.c.
* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise.
* sysdeps/i386/i686/fpu/multiarch/Makefile (libm-sysdep_routines): Add e_exp2f-sse2. (CFLAGS-e_exp2f-sse2.c): New.
* sysdeps/i386/i686/fpu/multiarch/e_exp2f-sse2.c: New file.
* sysdeps/i386/i686/fpu/multiarch/e_exp2f.c: Likewise.

* x86-64: Add exp2f with FMA  (H.J. Lu, 2017-10-22; 3 files, -1/+42)
For workload-spec2017.wrf, on Skylake, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  13.0291    11.2225    16%
    latency                44.5154    37.5766    18%

* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines): Add e_exp2f-fma. (CFLAGS-e_exp2f-fma.c): New.
* sysdeps/x86_64/fpu/multiarch/e_exp2f-fma.c: New file.
* sysdeps/x86_64/fpu/multiarch/e_exp2f.c: Likewise.

* i386: Replace assembly versions of e_expf with generic e_expf.c  (H.J. Lu, 2017-10-22; 12 files, -442/+35)
This patch replaces i386 assembly versions of e_expf with generic e_expf.c.

For workload-spec2017.wrf, on Nehalem, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  55.5724    40.2664    38%
    latency                80.0687    60.8517    31%

On Skylake, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  62.4056    39.4188    58%
    latency                85.5496    59.6377    43%

On IvyBridge with --disable-multi-arch, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  133.707    40.3778    231%
    latency                149.191    63.2515    135%

* sysdeps/i386/fpu/e_exp2f_data.c: Removed.
* sysdeps/i386/fpu/e_expf.S: Likewise.
* sysdeps/i386/fpu/math_errf.c: Likewise.
* sysdeps/i386/fpu/w_expf.c: Likewise.
* sysdeps/i386/i686/fpu/multiarch/e_expf-ia32.S: Likewise.
* sysdeps/i386/i686/fpu/multiarch/e_expf-sse2.S: Likewise.
* sysdeps/i386/i686/fpu/multiarch/w_expf.c: Likewise.
* sysdeps/i386/fpu/libm-test-ulps: Updated for generic e_expf.c.
* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise.
* sysdeps/i386/i686/fpu/multiarch/Makefile (libm-sysdep_routines): Remove e_expf-ia32. (CFLAGS-e_expf-sse2.c): New.
* sysdeps/i386/i686/fpu/multiarch/e_expf-sse2.c: New file.
* sysdeps/i386/i686/fpu/multiarch/e_expf.c: Rewritten.

* x86-64: Replace assembly versions of e_expf with generic e_expf.c  (H.J. Lu, 2017-10-22; 7 files, -529/+29)
This patch replaces x86-64 assembly versions of e_expf with generic e_expf.c.

For workload-spec2017.wrf, on Nehalem, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  36.039     20.7749    73%
    latency                58.8096    40.8715    43%

On Skylake, it improves performance by:

                           Before     After      Improvement
    reciprocal-throughput  18.4436    11.1693    65%
    latency                47.5162    37.5411    26%

* sysdeps/x86_64/fpu/e_expf.S: Removed.
* sysdeps/x86_64/fpu/multiarch/e_expf-fma.S: Likewise.
* sysdeps/x86_64/fpu/w_expf.c: Likewise.
* sysdeps/x86_64/fpu/libm-test-ulps: Updated for generic e_expf.c.
* sysdeps/x86_64/fpu/multiarch/Makefile (CFLAGS-e_expf-fma.c): New.
* sysdeps/x86_64/fpu/multiarch/e_expf-fma.c: New file.
* sysdeps/x86_64/fpu/multiarch/e_expf.c (__redirect_ieee754_expf): Renamed to ... (__redirect_expf): This. (SYMBOL_NAME): Changed to expf. (__ieee754_expf): Renamed to ... (__expf): This. (__GI___expf): This. (__ieee754_expf): Add strong_alias. (__expf_finite): Likewise. (__expf): New. Include <sysdeps/ieee754/flt-32/e_expf.c>.

* Add bits/floatn.h defines for more _FloatN / _FloatNx types.  (Joseph Myers, 2017-10-20; 5 files, -0/+10)
The bits/floatn.h header currently only has defines relating to _Float128.
This patch adds defines relating to other _FloatN / _FloatNx types.

The approach taken is to add defines for all _FloatN / _FloatNx types known
to GCC, and to put them in a common bits/floatn-common.h header included at
the end of all the individual bits/floatn.h headers.  If in future some
defines become different for different glibc configurations, they will move
out into the separate bits/floatn.h headers.

Some defines are expected always to be the same across glibc ports.
Corresponding defines are nevertheless put in this header.  The intent is
that where there are conditionals (in headers or in non-installed files) that
can just repeat the same or nearly the same logic for each floating-point
type, they should do so, even if in fact the cases for some types could be
unconditionally present or absent because the same conditionals are true or
false for all glibc configurations.  This should make the glibc code with
such conditionals easier to read, because the reader can just see that the
same conditionals are repeated for each type, rather than seeing different
conditionals for different types and needing to reason, at each location with
such differences, why those differences are indeed correct there.  (Cases
involving per-format rather than per-type logic are more likely still to need
differences in how they handle different types.)

Having such defines and conditionals also helps in incremental preparation
for adding _Float32 / _Float64 / _Float32x / _Float64x function aliases.  I
intend subsequent patches to add such conditionals corresponding to those
already present for _Float128, as well as making more architecture-specific
function implementations use common macros to define aliases in preparation
for adding such _FloatN / _FloatNx aliases.

Tested for x86_64.

* bits/floatn-common.h: New file.
* math/Makefile (headers): Add bits/floatn-common.h.
* bits/floatn.h: Include <bits/floatn-common.h>.
* sysdeps/ia64/bits/floatn.h: Likewise.
* sysdeps/ieee754/ldbl-128/bits/floatn.h: Likewise.
* sysdeps/mips/ieee754/bits/floatn.h: Likewise.
* sysdeps/powerpc/bits/floatn.h: Likewise.
* sysdeps/x86/bits/floatn.h: Likewise.

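A small illustrative sketch (assuming a glibc with this commit, where headers
such as <math.h> pull in <bits/floatn.h> and thus <bits/floatn-common.h>) of
the per-type conditionals the header is meant to support; the macro names
follow the existing _Float128 ones described above.

    #include <math.h>

    #if __HAVE_FLOAT64X
    /* The _Float64x type exists on this configuration.  */
    #endif

    #if __HAVE_FLOAT128
    # if __HAVE_DISTINCT_FLOAT128
    /* _Float128 is ABI-distinct from long double (e.g. x86_64).  */
    # else
    /* _Float128 has the same format and ABI as long double.  */
    # endif
    #endif
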
* posix: Fix improper assert in Linux posix_spawn (BZ#22273)  (Adhemerval Zanella, 2017-10-20; 1 file, -6/+18)
As noted by Florian Weimer, the current Linux posix_spawn implementation can
trigger an assert if the auxiliary process is terminated before actually
setting the err member:

    340   /* Child must set args.err to something non-negative - we rely on
    341      the parent and child sharing VM.  */
    342   args.err = -1;
    [...]
    362   new_pid = CLONE (__spawni_child, STACK (stack, stack_size), stack_size,
    363                    CLONE_VM | CLONE_VFORK | SIGCHLD, &args);
    364
    365   if (new_pid > 0)
    366     {
    367       ec = args.err;
    368       assert (ec >= 0);

Another possible issue is killing the child between setting err and actually
calling execve.  In this case the process will not run, but posix_spawn also
will not report any error:

    269
    270   args->err = 0;
    271   args->exec (args->file, args->argv, args->envp);

As suggested by Andreas Schwab, this patch removes the faulty assert and also
treats any signal that happens before fork and execve as if the spawn was
successful (thus leaving it to the caller to figure this out).  Unlike
Florian, I cannot see why using atomics to set err would help here;
essentially the code runs sequentially (due to CLONE_VFORK), and I think it
would not be legal for the compiler to evaluate ec without checking the
new_pid result (thus there is no need for a compiler barrier).

Summarizing the possible scenarios of posix_spawn execution, we have:

1. For the default case with a successful execution, args.err will be 0, the
   pid will not be collected, and it will be reported to the caller.

2. For the default failure case, args.err will be positive and the pid will
   be collected by the waitpid.  An error will be reported to the caller.

3. For the unlikely case where the process was terminated and not collected
   by a caller signal handler, it will be reported as a successful execution
   and not be collected by posix_spawn (since args.err will be 0).  The
   caller will need to actually handle this case.

4. For the unlikely case where the process was terminated and collected by
   the caller, we have 3 other possible scenarios:

   4.1. The auxiliary process was terminated with args.err equal to 0: it
        will be handled as in 1. (so it does not matter if we hit the
        pid-reuse race, since we won't possibly collect an unexpected
        process).

   4.2. The auxiliary process was terminated after execve (due to a failure
        in calling it) and before setting args.err to -1: it will also be
        handled as in 1., but with the issue of not being able to report
        possible execve failures to the caller.

   4.3. The auxiliary process was terminated after args.err was set to -1:
        this is the case where it is possible to hit the pid-reuse race,
        where we need to collect the auxiliary pid but cannot be sure it is
        the expected one.  I think for this case we need to actually change
        waitpid to use WNOHANG to avoid hanging indefinitely on the call, and
        report an error to the caller, since we can't differentiate between a
        default failure as in 2. and a possible pid-reuse race issue.

Checked on x86_64-linux-gnu.

* sysdeps/unix/sysv/linux/spawni.c (__spawnix): Handle the case where the auxiliary process is terminated by a signal before calling _exit or execve.

* x86-64: Use fxsave/xsave/xsavec in _dl_runtime_resolve [BZ #21265]  (H.J. Lu, 2017-10-20; 8 files, -306/+230)
In _dl_runtime_resolve, use fxsave/xsave/xsavec to preserve all vector, mask
and bound registers.  It simplifies _dl_runtime_resolve and supports
different calling conventions.  ld.so code size is reduced by more than 1 KB.
However, using fxsave/xsave/xsavec takes a little bit more cycles than saving
and restoring vector and bound registers individually.

Latency for _dl_runtime_resolve to look up the function, foo, from one shared
library plus libc.so:

                                   Before   After   Change
    Westmere (SSE)/fxsave             345     866    151%
    IvyBridge (AVX)/xsave             420     643     53%
    Haswell (AVX)/xsave               713    1252     75%
    Skylake (AVX+MPX)/xsavec          559     719     28%
    Skylake (AVX512+MPX)/xsavec       145     272     87%
    Ryzen (AVX)/xsavec                280     553     97%

This is the worst case, where the portion of time spent saving and restoring
registers is bigger than in the majority of cases.  With the smaller
_dl_runtime_resolve code size, the overall performance impact is negligible.

On IvyBridge, differences in build and test time of binutils with lazy
binding GCC and binutils are noise.  On Westmere, differences in bootstrap
and "make check" time of GCC 7 with lazy binding GCC and binutils are also
noise.

[BZ #21265] * sysdeps/x86/cpu-features-offsets.sym (XSAVE_STATE_SIZE_OFFSET): New. * sysdeps/x86/cpu-features.c: Include <libc-pointer-arith.h>. (get_common_indeces): Set xsave_state_size, xsave_state_full_size and bit_arch_XSAVEC_Usable if needed. (init_cpu_features): Remove bit_arch_Use_dl_runtime_resolve_slow and bit_arch_Use_dl_runtime_resolve_opt. * sysdeps/x86/cpu-features.h (bit_arch_Use_dl_runtime_resolve_opt): Removed. (bit_arch_Use_dl_runtime_resolve_slow): Likewise. (bit_arch_Prefer_No_AVX512): Updated. (bit_arch_MathVec_Prefer_No_AVX512): Likewise. (bit_arch_XSAVEC_Usable): New. (STATE_SAVE_OFFSET): Likewise. (STATE_SAVE_MASK): Likewise. [__ASSEMBLER__]: Include <cpu-features-offsets.h>. (cpu_features): Add xsave_state_size and xsave_state_full_size. (index_arch_Use_dl_runtime_resolve_opt): Removed. (index_arch_Use_dl_runtime_resolve_slow): Likewise. (index_arch_XSAVEC_Usable): New. * sysdeps/x86/cpu-tunables.c (TUNABLE_CALLBACK (set_hwcaps)): Support XSAVEC_Usable. Remove Use_dl_runtime_resolve_slow. * sysdeps/x86_64/Makefile (tst-x86_64-1-ENV): New if tunables is enabled. * sysdeps/x86_64/dl-machine.h (elf_machine_runtime_setup): Replace _dl_runtime_resolve_sse, _dl_runtime_resolve_avx, _dl_runtime_resolve_avx_slow, _dl_runtime_resolve_avx_opt, _dl_runtime_resolve_avx512 and _dl_runtime_resolve_avx512_opt with _dl_runtime_resolve_fxsave, _dl_runtime_resolve_xsave and _dl_runtime_resolve_xsavec. * sysdeps/x86_64/dl-trampoline.S (DL_RUNTIME_UNALIGNED_VEC_SIZE): Removed. (DL_RUNTIME_RESOLVE_REALIGN_STACK): Check STATE_SAVE_ALIGNMENT instead of VEC_SIZE. (REGISTER_SAVE_BND0): Removed. (REGISTER_SAVE_BND1): Likewise. (REGISTER_SAVE_BND3): Likewise. (REGISTER_SAVE_RAX): Always defined to 0. (VMOV): Removed. (_dl_runtime_resolve_avx): Likewise. (_dl_runtime_resolve_avx_slow): Likewise. (_dl_runtime_resolve_avx_opt): Likewise. (_dl_runtime_resolve_avx512): Likewise. (_dl_runtime_resolve_avx512_opt): Likewise. (_dl_runtime_resolve_sse): Likewise. (_dl_runtime_resolve_sse_vex): Likewise. (USE_FXSAVE): New. (_dl_runtime_resolve_fxsave): Likewise. (USE_XSAVE): Likewise. (_dl_runtime_resolve_xsave): Likewise. (USE_XSAVEC): Likewise. (_dl_runtime_resolve_xsavec): Likewise.
* sysdeps/x86_64/dl-trampoline.h (_dl_runtime_resolve_avx512): Removed. (_dl_runtime_resolve_avx512_opt): Likewise. (_dl_runtime_resolve_avx): Likewise. (_dl_runtime_resolve_avx_opt): Likewise. (_dl_runtime_resolve_sse): Likewise. (_dl_runtime_resolve_sse_vex): Likewise. (_dl_runtime_resolve_fxsave): New. (_dl_runtime_resolve_xsave): Likewise. (_dl_runtime_resolve_xsavec): Likewise.
* m68k: Update elf_machine_load_address for static PIE  (H.J. Lu, 2017-10-20; 1 file, -0/+6)
When --enable-static-pie is used to configure glibc, we need to use
_dl_relocate_static_pie to compute the load address in static PIE.

* sysdeps/m68k/dl-machine.h (elf_machine_load_address): Use _dl_relocate_static_pie instead of _dl_start to compute load address in static PIE.

* m68k: Check PIC instead of SHARED in start.S  (H.J. Lu, 2017-10-20; 1 file, -1/+1)
Since start.o may be compiled as PIC, we should check PIC instead of SHARED.

* sysdeps/m68k/start.S (_start): Check PIC instead of SHARED.

* sysconf: Fix missing definition of UIO_MAXIOV on Linux [BZ #22321]  (Florian Weimer, 2017-10-20; 4 files, -1/+72)
After commit 37f802f86400684c8d13403958b2c598721d6360 (Remove __need_IOV_MAX
and __need_FOPEN_MAX), UIO_MAXIOV is no longer supplied (indirectly) through
<bits/stdio_lim.h>, so sysdeps/posix/sysconf.c no longer sees the definition.

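A quick hypothetical check (not part of the patch) of the value that
sysdeps/posix/sysconf.c derives from UIO_MAXIOV; with the bug, the sysconf
call below reported -1 instead of the Linux limit.

    #include <stdio.h>
    #include <sys/uio.h>       /* UIO_MAXIOV on Linux.  */
    #include <unistd.h>

    int
    main (void)
    {
      printf ("sysconf(_SC_UIO_MAXIOV) = %ld\n", sysconf (_SC_UIO_MAXIOV));
    #ifdef UIO_MAXIOV
      printf ("UIO_MAXIOV = %d\n", UIO_MAXIOV);
    #endif
      return 0;
    }
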
* i386: Regenerate libm-test-ulps  (H.J. Lu, 2017-10-19; 1 file, -2/+2)
Regenerate libm-test-ulps for --disable-multi-arch.

* sysdeps/i386/fpu/libm-test-ulps: Regenerated.

* Add MIPS bits/floatn.h.  (Joseph Myers, 2017-10-19; 1 file, -0/+80)
This patch adds a MIPS-specific bits/floatn.h header.  This header is
identical to the ldbl-128 version except for the comment at the top; the
purpose is to ensure that a 32-bit MIPS build installs a header that is the
same as in a 64-bit MIPS build and so properly shows _Float128 support to be
available for 64-bit compilations, on the general principle of an
installation for one multilib providing headers also suitable for other
multilibs.

Tested with build-many-glibcs.py.

* sysdeps/mips/ieee754/bits/floatn.h: New file.

* Install correct bits/long-double.h for MIPS64 (bug 22322).  (Joseph Myers, 2017-10-19; 1 file, -0/+0)
Similar to bug 21987 for SPARC, MIPS64 wrongly installs the ldbl-128 version
of bits/long-double.h, meaning incorrect results when using headers installed
from a 64-bit installation for a 32-bit build.  (I haven't actually seen this
cause build failures before its interaction with bits/floatn.h did so -
installed headers wrongly expecting _Float128 to be available in a 32-bit
configuration.)

This patch fixes the bug by moving the MIPS header to sysdeps/mips/ieee754,
which comes before sysdeps/ieee754/ldbl-128 in the sysdeps directory
ordering.  (bits/floatn.h will need a similar fix - duplicating the ldbl-128
version for MIPS will suffice - for headers from a 32-bit installation to be
correct for 64-bit builds.)

Tested with build-many-glibcs.py (compilers build for mips64-linux-gnu, where
there was previously a libstdc++ build failure as at
<https://sourceware.org/ml/libc-testresults/2017-q4/msg00130.html>).

[BZ #22322]
* sysdeps/mips/bits/long-double.h: Move to ....
* sysdeps/mips/ieee754/bits/long-double.h: ... here.

* x86-64: Don't set GLRO(dl_platform) to NULL [BZ #22299]  (H.J. Lu, 2017-10-19; 5 files, -4/+103)
Since ld.so expands $PLATFORM with GLRO(dl_platform), don't set
GLRO(dl_platform) to NULL.

[BZ #22299]
* sysdeps/x86/cpu-features.c (init_cpu_features): Don't set GLRO(dl_platform) to NULL.
* sysdeps/x86_64/Makefile (tests): Add tst-platform-1. (modules-names): Add tst-platformmod-1 and x86_64/tst-platformmod-2. (CFLAGS-tst-platform-1.c): New. (CFLAGS-tst-platformmod-1.c): Likewise. (CFLAGS-tst-platformmod-2.c): Likewise. (LDFLAGS-tst-platformmod-2.so): Likewise. ($(objpfx)tst-platform-1): Likewise. ($(objpfx)tst-platform-1.out): Likewise. (tst-platform-1-ENV): Likewise. ($(objpfx)x86_64/tst-platformmod-2.os): Likewise.
* sysdeps/x86_64/tst-platform-1.c: New file.
* sysdeps/x86_64/tst-platformmod-1.c: Likewise.
* sysdeps/x86_64/tst-platformmod-2.c: Likewise.

* Add _Float128 function aliases.  (Joseph Myers, 2017-10-18; 20 files, -2/+868)
This patch adds support for *f128 function aliases on platforms where long
double has the binary128 format (and thus GCC 7 provides the _Float128 type
with the same ABI as long double but as a distinct type in terms of C type
compatibility).  This is the same API as provided in glibc 2.26 for
powerpc64le / x86_64 / x86 / ia64, where _Float128 has a different format
from long double, with the bulk of the API coming from TS 18661-3.

All the functions alias the corresponding long double functions, and __*
function names are not provided since those are only needed once for each
floating-point format, not more than once for different types with the same
format (so for example, -ffinite-math-only maps foof128 to __fool_finite,
while type-generic macros end up calling e.g. __issignalingl for _Float128
arguments on such platforms).

The preparation for this feature was done in previous patches, so this one
just needs to add the relevant makefile and header definitions, and update
macro definitions of libm_alias_ldouble_other_r, to turn on the feature, and
update documentation and ABI baselines.

Tested (a) for x86_64, (b) for aarch64, (c) with build-many-glibcs.py with
both GCC 6 and GCC 7.

* sysdeps/ieee754/ldbl-128/Makeconfig: New file. * sysdeps/ieee754/ldbl-128/bits/floatn.h: Likewise. * sysdeps/ieee754/ldbl-128/float128-abi.h: Likewise. * sysdeps/generic/libm-alias-ldouble.h: Include <bits/floatn.h>. [__HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128] (libm_alias_ldouble_other_r): Also create _Float128 alias. * sysdeps/ieee754/ldbl-opt/libm-alias-ldouble.h: Include <bits/floatn.h>. [__HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128] (libm_alias_ldouble_other_r): Also create _Float128 alias. * manual/math.texi (Mathematics): Document additional architecture support for _Float128. * sysdeps/unix/sysv/linux/aarch64/libc.abilist: Update. * sysdeps/unix/sysv/linux/aarch64/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/alpha/libc.abilist: Likewise. * sysdeps/unix/sysv/linux/alpha/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/mips/mips64/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/mips/mips64/n32/libc.abilist: Likewise. * sysdeps/unix/sysv/linux/mips/mips64/n64/libc.abilist: Likewise. * sysdeps/unix/sysv/linux/s390/s390-32/libc.abilist: Likewise. * sysdeps/unix/sysv/linux/s390/s390-32/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/s390/s390-64/libc.abilist: Likewise. * sysdeps/unix/sysv/linux/s390/s390-64/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/sparc/sparc32/libc.abilist: Likewise. * sysdeps/unix/sysv/linux/sparc/sparc32/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/sparc/sparc64/libc.abilist: Likewise. * sysdeps/unix/sysv/linux/sparc/sparc64/libm.abilist: Likewise.

* [AARCH64] Rewrite elf_machine_load_address using _DYNAMIC symbol  (Szabolcs Nagy, 2017-10-18; 1 file, -34/+5)
This patch rewrites the aarch64 elf_machine_load_address to use the special
_DYNAMIC symbol instead of _dl_start.  The static address of the _DYNAMIC
symbol is stored in the first GOT entry.  Here is the change which makes this
solution work (part of binutils 2.24):
https://sourceware.org/ml/binutils/2013-06/msg00248.html

The i386 and x86_64 targets use the same method to do this as well.

The original implementation relies on a trick in which the R_AARCH64_ABS32
relocation is resolved at link time and the static address fits in 32 bits.
However, in LP64 the address is normally defined to be 64 bit.  Here is a C
version which should be portable in all cases.

* sysdeps/aarch64/dl-machine.h (elf_machine_load_address): Use _DYNAMIC symbol to calculate load address.

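A hedged sketch of the general technique (not the verbatim glibc code): the
run-time address of _DYNAMIC minus its link-time address, which the linker
stores in the first GOT entry, gives the load offset.

    /* Sketch, assuming ElfW()/attribute_hidden as used elsewhere in glibc's
       dl-machine.h headers.  */
    static inline ElfW(Addr)
    elf_machine_dynamic (void)
    {
      extern ElfW(Addr) _GLOBAL_OFFSET_TABLE_[] attribute_hidden;
      return _GLOBAL_OFFSET_TABLE_[0];   /* Link-time address of _DYNAMIC.  */
    }

    static inline ElfW(Addr)
    elf_machine_load_address (void)
    {
      extern ElfW(Dyn) _DYNAMIC[] attribute_hidden;
      /* Run-time minus link-time address of _DYNAMIC == load offset.  */
      return (ElfW(Addr)) &_DYNAMIC - elf_machine_dynamic ();
    }
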
* powerpc: fix check-before-set in SET_RESTORE_ROUND  (Paul Clarke, 2017-10-18; 1 file, -7/+6)
A performance regression was introduced by commit
84d74e427a771906830800e574a72f8d25a954b8 "powerpc: Cleanup fenv_private.h".

In the powerpc implementation of SET_RESTORE_ROUND, there is the following
code in the "SET" function (slightly simplified):

    old.fenv = fegetenv_register ();
    new.l = (old.l & _FPU_MASK_TRAPS_RN) | r;        (1)
    if (new.l != old.l)                              (2)
      {
        if ((old.l & _FPU_ALL_TRAPS) != 0)
          (void) __fe_mask_env ();
        fesetenv_register (new.fenv);                (3)

Line (1) sets the value of "new" to the current value of FPSCR, but masks off
summary bits, exceptions, non-IEEE mode, and rounding mode, then ORs in the
new rounding mode.

Line (2) compares this new value to the current value in order to avoid
setting a new value in the FPSCR (line (3)) unless something significant has
changed (exception enables or rounding mode).

The summary bits are not germane to the comparison, but are cleared in "new"
and preserved in "old", resulting in false negative comparisons, and
unnecessarily setting the FPSCR in those cases with associated negative
performance impacts.

The solution is to treat the summaries identically for "new" and "old":
- save them in SET
- leave them alone otherwise
- restore the saved values in RESTORE

Also minor changes:
- expand _FPU_MASK_RN to 64bit hex, to match other MASKs
- treat bit 52 (left-to-right) as reserved (since it is)

* sysdeps/powerpc/fpu/fenv_private.h (_FPU_MASK_TRAPS_RN): (_FPU_MASK_FRAC_INEX_RET_CC): Fix masks to more properly handle summary bits. (_FPU_MASK_RN): Expand _FPU_MASK_RN to 64bit hex. (_FPU_MASK_NOT_RN_NI): Treat bit 52 (left-to-right) as reserved.

Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.vnet.ibm.com>

* posix: Add p{readv,writev}2 flags to generic uio-ext.h  (Adhemerval Zanella, 2017-10-17; 1 file, -2/+1)
* bits/uio-ext.h (RWF_HIPRI, RWF_DSYNC, RWF_SYNC, RWF_NOWAIT): New defines.

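A small usage sketch for one of the new flags (a hypothetical example, not
part of the patch; requires a kernel and glibc with preadv2 support):

    #define _GNU_SOURCE 1
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/uio.h>

    int
    main (void)
    {
      char buf[64];
      struct iovec iov = { .iov_base = buf, .iov_len = sizeof buf };
      int fd = open ("/etc/hostname", O_RDONLY);
      if (fd < 0)
        return 1;
      /* RWF_NOWAIT: fail with EAGAIN instead of blocking if the data is not
         already available (e.g. not in the page cache).  */
      ssize_t n = preadv2 (fd, &iov, 1, 0, RWF_NOWAIT);
      printf ("preadv2 returned %zd\n", n);
      return 0;
    }
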
* Add common ifunc-init.h header  (Adhemerval Zanella, 2017-10-17; 2 files, -40/+55)
This patch moves the generic definition from the x86_64 init-arch to a common
header, ifunc-init.h.  No functional changes are expected.

Checked on an x86_64-linux-gnu build.

* sysdeps/generic/ifunc-init.h: New file.
* sysdeps/x86/init-arch.h: Use generic ifunc-init.h.

* Move some float128 symbol version definitions.  (Joseph Myers, 2017-10-16; 2 files, -109/+1)
With support for _Float128 functions on platforms where that type has the
same ABI as long double, as well as on platforms where it is ABI-distinct,
those functions will need to be exported from glibc's shared libraries at
appropriate symbol versions in each case.

This patch avoids duplication of lists of symbols to export by moving the
symbols other than __* to math/Versions and stdlib/Versions.  There, they are
conditional on <float128-abi.h> defining FLOAT128_VERSION, and a default
version of that header is added that does not define that macro.  Enabling
the float128 function aliases will then include adding a
sysdeps/ieee754/ldbl-128/float128-abi.h that defines FLOAT128_VERSION to
GLIBC_2.27.  Symbols __* remain in sysdeps/ieee754/float128/Versions; those
symbols should be present only once per floating-point format, not once per
type.

Note that if any platforms currently lacking support for a type with
binary128 format get glibc support for such a type in future (whether only as
_Float128, or also as a new long double format), and new libm functions
(present for all types) have been added by then, additional macros will be
needed to allow such functions to get a version of the form "GLIBC_2.28 if
the platform had _Float128 support by then, or the later version at which
that platform had _Float128 support added".  This is not however a
preexisting condition, but would have applied equally to the existing support
for _Float128 as an ABI-distinct type.  New all-type libm functions should
just be added to the appropriate symbol version (currently GLIBC_2.27) for
all types, with such special-case handling for _Float128 versions (and
_Float64x as well in future) waiting until someone actually wants to add
support for _Float128 to an existing platform after a release in which that
platform and a post-2.26 libm function had support but that platform lacked
_Float128 support.

Tested with build-many-glibcs.py that installed stripped shared libraries are
unchanged by this patch.  Also tested in conjunction with the remaining
changes to enable float128 aliases.

* sysdeps/generic/float128-abi.h: New file.
* sysdeps/ieee754/float128/Versions (FLOAT128_VERSION): Move non-__prefixed symbols to ....
* math/Versions: ... here. Include <float128-abi.h>.
* stdlib/Versions: ... and here. Include <float128-abi.h>.

* Support strtof128 etc. aliases.  (Joseph Myers, 2017-10-16; 2 files, -0/+32)
This patch adds support for building strtof128, wcstof128, strtof128_l and
wcstof128_l as aliases, in the case of
__HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128.

Tested with build-many-glibcs.py that installed stripped shared libraries are
unchanged by this patch.  Also tested together with changes to enable
float128 aliases.

* stdlib/strtold.c: Include <bits/floatn.h>. [__HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128] (strtof128): Define and later undefine as macro. Define as weak alias if [!USE_WIDE_CHAR]. [__HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128] (wcstof128): Define and later undefine as macro. Define as weak alias if [USE_WIDE_CHAR].
* sysdeps/ieee754/ldbl-128/strtold_l.c [__HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128] (strtof128_l): Define and later undefine as macro. Define as weak alias if [!USE_WIDE_CHAR]. [__HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128] (wcstof128_l): Define and later undefine as macro. Define as weak alias if [USE_WIDE_CHAR].
* sysdeps/ieee754/ldbl-64-128/strtold_l.c: Include <bits/floatn.h>. [__HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128] (strtof128_l): Define and later undefine as macro. Define as weak alias if [!USE_WIDE_CHAR]. [__HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128] (wcstof128_l): Define and later undefine as macro. Define as weak alias if [USE_WIDE_CHAR].

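A minimal user-level sketch of what the alias provides (assumes a glibc built
with these aliases on a target where _Float128 shares the long double
format, and a compiler providing _Float128):

    #define _GNU_SOURCE 1
    #include <stdio.h>
    #include <stdlib.h>

    int
    main (void)
    {
      /* strtof128 is an alias of the long double parser on such targets.  */
      _Float128 v = strtof128 ("1.25", NULL);
      printf ("%Lf\n", (long double) v);
      return 0;
    }
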
* Use libm_alias_ldouble_other in ldbl-64-128/s_nextafterl.c.  (Joseph Myers, 2017-10-13; 1 file, -0/+3)
This patch makes ldbl-64-128/s_nextafterl.c restore the default weak_alias
definition and use libm_alias_ldouble_other (having undefined and redefined
weak_alias for the include of ldbl-128/s_nextafterl.c, so the
libm_alias_ldouble use in the latter file is ineffective).

Tested with build-many-glibcs.py that installed stripped shared libraries are
unchanged by this patch.  Also tested together with changes to enable
float128 aliases.

* sysdeps/ieee754/ldbl-64-128/s_nextafterl.c (weak_alias): Undefine and restore default definition. Use libm_alias_ldouble_other.