* Remove unused function _dl_tls_setup (Florian Weimer, 2016-12-21, 5 files changed, -59/+9)
Commit 7a5e3d9d633c828d84a9535f26b202a6179978e7 (elf: Assume TLS is initialized in _dl_map_object_from_fd) removed the last call of _dl_tls_setup, but did not remove the function itself.
* x86_64: tst-quad1pie, tst-quad2pie: compile with -fPIE [BZ #7065] (Nick Alcock, 2016-12-21, 2 files changed, -0/+9)
With stack protection enabled, these files have external symbol references for the first time, so the fact that they are not compiled with -fPIE and are then linked into a -pie binary starts to hurt.
* Move all tests out of the csu subdirectory (Nick Alcock, 2016-12-21, 6 files changed, -7/+25)
Stack-protection on .o files in csu/ must be suppressed for the sake of library startup code. This also suppresses stack-protection in tests (which are also covered by CFLAGS-.o), though this is neither necessary nor desirable. So impose the rule that .o files in csu/ are necessarily C startup code, and move the few tests in there into misc/ instead.
* manual: Convert @tables of variables to @vtables. (Rical Jasan, 2016-12-21, 17 files changed, -172/+90)
Texinfo @vindex commands add entries to the Variable and Constant Macro Index. Similarly, @items in @vtables are automatically indexed. A number of @tables exist where all @items are @vindexed or all @items are variables, but not indexed, suggesting an optimization by converting such @tables to @vtables and dropping the @vindex. Using a @vtable provides a context for processing @items whereby it can be known the @items should have header and standards annotations. This commit converts @tables of such @items to @vtables in order to establish a framework for automated processing. A pleasant consequence of these changes is that @items previously lacking a @vindex are present in the Variable and Constant Macro Index now. @vindex entries previously detected by summary.awk will still be detected as @items with appropriate annotations. The @vtable of the NSS databases is converted to a @table because 1) those @items are not variables (and will no longer appear in the Variable and Constant Macro Index) and 2) they do not need header and standards annotations, so the incorrect context is fixed.
  * manual/nss.texi: Change incorrect @vtable to @table.
  * manual/arith.texi: Convert @tables of variables to @vtables and remove unnecessary indexing.
  * manual/filesys.texi: Likewise.
  * manual/llio.texi: Likewise.
  * manual/memory.texi: Likewise.
  * manual/process.texi: Likewise.
  * manual/resource.texi: Likewise.
  * manual/search.texi: Likewise.
  * manual/signal.texi: Likewise.
  * manual/socket.texi: Likewise.
  * manual/stdio.texi: Likewise.
  * manual/sysinfo.texi: Likewise.
  * manual/syslog.texi: Likewise.
  * manual/terminal.texi: Likewise.
  * manual/time.texi: Likewise.
  * manual/users.texi: Likewise.
* Add roundeven, roundevenf, roundevenl. (Joseph Myers, 2016-12-21, 46 files changed, -7/+1360)
TS 18661-1 defines roundeven functions that round a floating-point number to the nearest integer, in that floating-point type, with ties rounding to even (whereas the round functions round ties away from zero). As with other such functions, they raise no exceptions apart from "invalid" for signaling NaNs. There was a previous user request for this functionality in glibc in <https://sourceware.org/ml/libc-help/2015-02/msg00005.html>. This patch implements these functions for glibc. The implementations use integer bit-manipulation (or roundeven on the high and low parts, in the IBM long double case). It's possible that there may be faster approaches on some architectures (in particular, on AArch64 the frintn instruction should do exactly what's required); I'll leave it to architecture maintainers or others interested to implement such architecture-specific versions if desired. (Where architectures have instructions to round to nearest integer in the current rounding mode, implementations saving and restoring the rounding mode - and dealing with exceptions if those instructions generate "inexact" - are also possible, though their performance depends on the cost of manipulating exceptions / rounding mode state.) Tested for x86_64, x86, mips64 and powerpc.
  * math/bits/mathcalls.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (roundeven): New declaration.
  * math/tgmath.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (roundeven): New macro.
  * math/Versions (roundeven): New libm symbol at version GLIBC_2.25.
  (roundevenf): Likewise.
  (roundevenl): Likewise.
  * math/Makefile (libm-calls): Add s_roundevenF.
  * math/libm-test.inc (roundeven_test_data): New array.
  (roundeven_test): New function.
  (main): Call roundeven_test.
  * math/test-tgmath.c (NCALLS): Increase to 134.
  (F(compile_test)): Call roundeven.
  (F(roundeven)): New function.
  * manual/arith.texi (Rounding Functions): Document roundeven, roundevenf and roundevenl.
  * manual/libm-err-tab.pl (@all_functions): Add roundeven.
  * include/math.h (roundeven): Use libm_hidden_proto.
  * sysdeps/ieee754/dbl-64/s_roundeven.c: New file.
  * sysdeps/ieee754/dbl-64/wordsize-64/s_roundeven.c: Likewise.
  * sysdeps/ieee754/flt-32/s_roundevenf.c: Likewise.
  * sysdeps/ieee754/ldbl-128/s_roundevenl.c: Likewise.
  * sysdeps/ieee754/ldbl-128ibm/s_roundevenl.c: Likewise.
  * sysdeps/ieee754/ldbl-96/s_roundevenl.c: Likewise.
  * sysdeps/ieee754/ldbl-opt/Makefile (libnldbl-calls): Add roundeven.
  (CFLAGS-nldbl-roundeven.c): New variable.
  * sysdeps/ieee754/ldbl-opt/nldbl-roundeven.c: New file.
  * sysdeps/nacl/libm.abilist: Update.
  * sysdeps/unix/sysv/linux/aarch64/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/alpha/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/arm/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/hppa/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/i386/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/ia64/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/m68k/coldfire/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/m68k/m680x0/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/microblaze/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/mips/mips32/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/mips/mips64/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/nios2/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/powerpc/powerpc32/fpu/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/powerpc/powerpc32/nofpu/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/powerpc/powerpc64/libm-le.abilist: Likewise.
  * sysdeps/unix/sysv/linux/powerpc/powerpc64/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/s390/s390-32/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/s390/s390-64/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/sh/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/sparc/sparc32/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/sparc/sparc64/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/tile/tilegx/tilegx32/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/tile/tilegx/tilegx64/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/tile/tilepro/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/x86_64/64/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/x86_64/x32/libm.abilist: Likewise.
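The tie-to-even behaviour is easiest to see next to round. A minimal usage sketch, assuming a glibc 2.25+ toolchain where _GNU_SOURCE exposes the TS 18661-1 declarations (not part of the commit itself):

```c
#define _GNU_SOURCE
#include <math.h>
#include <stdio.h>

int main (void)
{
  /* round () breaks ties away from zero; roundeven () breaks them to even.  */
  printf ("round (2.5)      = %.1f\n", round (2.5));       /* 3.0 */
  printf ("roundeven (2.5)  = %.1f\n", roundeven (2.5));   /* 2.0 */
  printf ("roundeven (3.5)  = %.1f\n", roundeven (3.5));   /* 4.0 */
  printf ("roundeven (-2.5) = %.1f\n", roundeven (-2.5));  /* -2.0 */
  return 0;
}
```

Link with -lm.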
* Add preprocessor indentation for llogb macro in tgmath.h. (Joseph Myers, 2016-12-20, 2 files changed, -1/+6)
  * math/tgmath.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (llogb): Add preprocessor indentation inside #if.
* Replace use of snprintf with strfrom in libm tests (Gabriel F. T. Gomes, 2016-12-20, 5 files changed, -42/+72)
In order to support float128 tests, the calls to snprintf, which does not support the type __float128, are replaced with calls to strfrom{f,d,l}. Tested for powerpc64le, s390, and x86_64.
* S390: Optimize lock-elision by decrementing adapt_count at unlock. (Stefan Liebler, 2016-12-20, 5 files changed, -54/+90)
This patch decrements the adapt_count while unlocking the futex instead of before acquiring the futex, as is done on power, too. Furthermore, a transaction is only started if the futex is currently free. This check is done after starting the transaction, too. If the futex is not free and the transaction nesting depth is one, we can simply end the started transaction instead of aborting it. The implementation of this check was faulty as it always ended the started transaction. By using the fallback path, the outermost transaction was aborted. Now the outermost transaction is aborted directly. This patch also adds some commentary and aligns the code in elision-trylock.c to the code in elision-lock.c as far as possible.
ChangeLog:
  * sysdeps/unix/sysv/linux/s390/lowlevellock.h (__lll_unlock_elision, lll_unlock_elision): Add adapt_count argument.
  * sysdeps/unix/sysv/linux/s390/elision-lock.c (__lll_lock_elision): Decrement adapt_count while unlocking instead of before locking.
  * sysdeps/unix/sysv/linux/s390/elision-trylock.c (__lll_trylock_elision): Likewise.
  * sysdeps/unix/sysv/linux/s390/elision-unlock.c (__lll_unlock_elision): Likewise.
* S390: Use new __libc_tbegin_retry macro in elision-lock.c. (Stefan Liebler, 2016-12-20, 3 files changed, -28/+64)
This patch implements the __libc_tbegin_retry macro, which is equivalent to the gcc builtin __builtin_tbegin_retry, except for the changes which were applied to __libc_tbegin in the previous patch. If tbegin aborts with _HTM_TBEGIN_TRANSIENT, then this macro restores the fpc and fprs and automatically retries up to retry_cnt tbegins. Further saving of the state is omitted as it is already saved in the first round. Before retrying a further transaction, the transaction-abort-assist instruction is used to support the cpu. This macro is now used in function __lll_lock_elision.
ChangeLog:
  * sysdeps/unix/sysv/linux/s390/htm.h (__libc_tbegin_retry): New macro.
  * sysdeps/unix/sysv/linux/s390/elision-lock.c (__lll_lock_elision): Use __libc_tbegin_retry macro.
* S390: Use own tbegin macro instead of __builtin_tbegin. (Stefan Liebler, 2016-12-20, 6 files changed, -25/+174)
This patch defines __libc_tbegin, __libc_tend, __libc_tabort and __libc_tx_nesting_depth in htm.h, which replaces the direct usage of the equivalent gcc builtins. We have to use our own inline assembly instead of __builtin_tbegin, as tbegin has to filter program interruptions, which can't be done with the builtin. Before this change, e.g. a segmentation fault within a transaction led to a coredump where the instruction pointer points behind the tbegin instruction instead of the real failing one. Now the transaction aborts and the code should be re-executed by the fallback path without transactions. The segmentation fault will produce a coredump with the real failing instruction. The fpc is not saved before starting the transaction. If e.g. the rounding mode is changed and the transaction aborts afterwards, the builtin will not restore the fpc. This is now done with the __libc_tbegin macro. Now the call-saved fprs have to be saved / restored in the __libc_tbegin macro. Using the gcc builtin had forced the saving / restoring of fprs at begin / end of e.g. the __lll_lock_elision function. The new macro saves these fprs before the tbegin instruction and only restores them on a transaction abort. Restoring is not needed on a successfully started transaction. The used inline assembly does not clobber the fprs / vrs! Clobbering the latter ones would force the compiler to save / restore the call saved fprs as those overlap with the vrs, but they only need to be restored if the transaction fails. Thus the user of the tbegin macros has to compile the file / function with -msoft-float. It prevents gcc from using fprs / vrs.
ChangeLog:
  * sysdeps/unix/sysv/linux/s390/Makefile (elision-CFLAGS): Add -msoft-float.
  * sysdeps/unix/sysv/linux/s390/htm.h: New File.
  * sysdeps/unix/sysv/linux/s390/elision-lock.c: Use __libc_t* transaction macros instead of __builtin_t*.
  * sysdeps/unix/sysv/linux/s390/elision-trylock.c: Likewise.
  * sysdeps/unix/sysv/linux/s390/elision-unlock.c: Likewise.
* S390: Use C11-like atomics instead of plain memory accesses in lock elision code. (Stefan Liebler, 2016-12-20, 3 files changed, -12/+34)
This uses atomic operations to access lock elision metadata that is accessed concurrently (i.e., adapt_count fields). The size of the data is less than a word but it is accessed only with atomic loads and stores. See also x86 commit ca6e601a9d4a72b3699cca15bad12ac1716bf49a: "Use C11-like atomics instead of plain memory accesses in x86 lock elision."
ChangeLog:
  * sysdeps/unix/sysv/linux/s390/elision-lock.c (__lll_lock_elision): Use atomics to load / store adapt_count.
  * sysdeps/unix/sysv/linux/s390/elision-trylock.c (__lll_trylock_elision): Likewise.
* Do not require memset elimination in explicit_bzero test (Florian Weimer, 2016-12-20, 3 files changed, -9/+29)
Some targets fail to apply dead store elimination to the memset call in setup_ordinary_clear. Before this commit, this caused the test case to fail. Instead, the test case now logs the lack of memset elimination as an informational message.
* Add fmaxmag, fminmag functions. (Joseph Myers, 2016-12-20, 44 files changed, -6/+677)
TS 18661-1 defines fmaxmag and fminmag functions that return the argument with maximum / minimum magnitude (acting like fmax / fmin if the arguments have the same magnitude or either argument is a NaN). These correspond to the IEEE 754-2008 operations maxNumMag and minNumMag. This patch implements these functions for glibc. They are implemented with type-generic templates. Tests are based on those for fmax and fmin. Tested for x86_64, x86, mips64 and powerpc.
  * math/bits/mathcalls.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (fmaxmag): New declaration.
  (fminmag): Likewise.
  * math/tgmath.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (fmaxmag): New macro.
  [__GLIBC_USE (IEC_60559_BFP_EXT)] (fminmag): Likewise.
  * math/Versions (fmaxmag): New libm symbol at version GLIBC_2.25.
  (fmaxmagf): Likewise.
  (fmaxmagl): Likewise.
  (fminmag): Likewise.
  (fminmagf): Likewise.
  (fminmagl): Likewise.
  * math/Makefile (gen-libm-calls): Add s_fmaxmagF and s_fminmagF.
  * math/s_fmaxmag_template.c: New file.
  * math/s_fminmag_template.c: Likewise.
  * math/libm-test.inc (fmaxmag_test_data): New array.
  (fmaxmag_test): New function.
  (fminmag_test_data): New array.
  (fminmag_test): New function.
  (main): Call fmaxmag_test and fminmag_test.
  * math/test-tgmath.c (NCALLS): Increase to 132.
  (F(compile_test)): Call fmaxmag and fminmag.
  (F(fminmag)): New function.
  (F(fmaxmag)): Likewise.
  * manual/arith.texi (Misc FP Arithmetic): Document fminmag, fminmagf, fminmagl, fmaxmag, fmaxmagf and fmaxmagl.
  * manual/libm-err-tab.pl (@all_functions): Add fmaxmag and fminmag.
  * sysdeps/ieee754/ldbl-opt/nldbl-fmaxmag.c: New file.
  * sysdeps/ieee754/ldbl-opt/nldbl-fminmag.c: Likewise.
  * sysdeps/ieee754/ldbl-opt/s_fmaxmagl.c: Likewise.
  * sysdeps/ieee754/ldbl-opt/s_fminmagl.c: Likewise.
  * sysdeps/ieee754/ldbl-opt/Makefile (libnldbl-calls): Add fmaxmag and fminmag.
  (CFLAGS-nldbl-fmaxmag.c): New variable.
  (CFLAGS-nldbl-fminmag.c): Likewise.
  * sysdeps/nacl/libm.abilist: Update.
  * sysdeps/unix/sysv/linux/aarch64/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/alpha/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/arm/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/hppa/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/i386/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/ia64/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/m68k/coldfire/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/m68k/m680x0/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/microblaze/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/mips/mips32/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/mips/mips64/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/nios2/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/powerpc/powerpc32/fpu/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/powerpc/powerpc32/nofpu/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/powerpc/powerpc64/libm-le.abilist: Likewise.
  * sysdeps/unix/sysv/linux/powerpc/powerpc64/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/s390/s390-32/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/s390/s390-64/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/sh/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/sparc/sparc32/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/sparc/sparc64/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/tile/tilegx/tilegx32/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/tile/tilegx/tilegx64/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/tile/tilepro/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/x86_64/64/libm.abilist: Likewise.
  * sysdeps/unix/sysv/linux/x86_64/x32/libm.abilist: Likewise.
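A minimal C sketch of the documented semantics (my_fmaxmag is a hypothetical name, not the glibc template): pick the argument with the larger magnitude and fall back to fmax when the magnitudes are equal or a NaN is involved.

```c
#include <math.h>

static double my_fmaxmag (double x, double y)
{
  double ax = fabs (x), ay = fabs (y);
  if (isgreater (ax, ay))   /* false if either magnitude is a NaN */
    return x;
  if (isgreater (ay, ax))
    return y;
  return fmax (x, y);       /* equal magnitudes or NaN: fmax rules apply */
}
```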
* Robust mutexes: Fix lost wake-up. (Torvald Riegel, 2016-12-19, 3 files changed, -4/+32)
Assume that Thread 1 waits to acquire a robust mutex using futexes to block (and thus sets the FUTEX_WAITERS flag), and is unblocked when this mutex is released. If Thread 2 concurrently acquires the lock and is killed, Thread 1 can recover from the died owner but fail to restore the FUTEX_WAITERS flag. This can cause a Thread 3, which also blocked using futexes at the same time as Thread 1, to never be woken up because FUTEX_WAITERS is not set anymore. The fix for this is to ensure that we continue to preserve the FUTEX_WAITERS flag whenever we may have set it or shared it with another thread. This is the same requirement as in the algorithm for normal mutexes, only that the robust mutexes need additional handling for died owners and thus preserving the FUTEX_WAITERS flag cannot be done just in the futex slowpath code.
  [BZ #20973]
  * nptl/pthread_mutex_lock.c (__pthread_mutex_lock_full): Fix lost wake-up in robust mutexes.
  * nptl/pthread_mutex_timedlock.c (pthread_mutex_timedlock): Likewise.
* benchtests: Add fmaxf/fminf benchmarks (Adhemerval Zanella, 2016-12-19, 4 files changed, -1/+56)
This patch adds fmaxf and fminf benchtests. It is based on the math/s_fmax_template.c implementation, which checks for basically four different classes: 1. if x is greater than or equal to y; 2. if x is less than y; 3. if x or y is signaling; 4. if y is a NaN. Cases 1 and 2 are used for the default input set (by mixing normal double numbers and infinity), while cases 3 and 4 are each used for a separate benchmark class. Checked on x86_64-linux-gnu and powerpc64-linux-gnu.
  * benchtests/Makefile (bench-math): Add fminf and fmaxf.
  (CFLAGS-bench-fmaxf.c): New rule.
  (CFLAGS-bench-fminf.c): Likewise.
  * benchtests/fmaxf-inputs: New file.
  * benchtests/fminf-inputs: Likewise.
* benchtests: Add fmax/fmin benchmarks (Adhemerval Zanella, 2016-12-19, 4 files changed, -1/+55)
This patch adds fmax and fmin benchtests. It is based on the math/s_fmax_template.c implementation, which checks for basically four different classes: 1. if x is greater than or equal to y; 2. if x is less than y; 3. if x or y is signaling; 4. if y is a NaN. Cases 1 and 2 are used for the default input set (by mixing normal double numbers and infinity), while cases 3 and 4 are each used for a separate benchmark class. Checked on x86_64-linux-gnu and powerpc64-linux-gnu.
  * benchtests/Makefile (bench-math): Add fmin and fmax.
  (CFLAGS-bench-fmax.c): New rule.
  (CFLAGS-bench-fmin.c): New rule.
  * benchtests/fmax-inputs: New file.
  * benchtests/fmin-inputs: Likewise.
* Adjust benchtests to new support library. (Adhemerval Zanella, 2016-12-19, 31 files changed, -50/+96)
This patch basically replaces the test-skeleton.c inclusion by support/test-driver.c and also makes minor adjustments in bench-string.h. Checked on x86_64-linux-gnu and powerpc64le-linux-gnu.
  * benchtests/bench-string.h (TEST_FUNCTION): Use name without parenthesis.
  (CMDLINE_PROCESS): Define using function instead of macro.
  * benchtests/bench-memccpy.c: Include <support/test-driver.c> instead of test-skeleton.
  * benchtests/bench-memchr.c: Likewise.
  * benchtests/bench-memcmp.c: Likewise.
  * benchtests/bench-memcpy-large.c: Likewise.
  * benchtests/bench-memcpy.c: Likewise.
  * benchtests/bench-memmem.c: Likewise.
  * benchtests/bench-memmove-large.c: Likewise.
  * benchtests/bench-memmove.c: Likewise.
  * benchtests/bench-memset-large.c: Likewise.
  * benchtests/bench-memset.c: Likewise.
  * benchtests/bench-rawmemchr.c: Likewise.
  * benchtests/bench-strcasecmp.c: Likewise.
  * benchtests/bench-strcasestr.c: Likewise.
  * benchtests/bench-strcat.c: Likewise.
  * benchtests/bench-strchr.c: Likewise.
  * benchtests/bench-strcmp.c: Likewise.
  * benchtests/bench-strcpy.c: Likewise.
  * benchtests/bench-strcpy_chk.c: Likewise.
  * benchtests/bench-strlen.c: Likewise.
  * benchtests/bench-strncasecmp.c: Likewise.
  * benchtests/bench-strncmp.c: Likewise.
  * benchtests/bench-strncpy.c: Likewise.
  * benchtests/bench-strnlen.c: Likewise.
  * benchtests/bench-strpbrk.c: Likewise.
  * benchtests/bench-strrchr.c: Likewise.
  * benchtests/bench-strsep.c: Likewise.
  * benchtests/bench-strspn.c: Likewise.
  * benchtests/bench-strstr.c: Likewise.
  * benchtests/bench-strtok.c: Likewise.
* Disable TSX on some Haswell processors. (Andrew Senkevich, 2016-12-19, 2 files changed, -6/+29)
This patch disables Intel TSX on some Haswell processors, to avoid using TSX on kernels that weren't updated with the latest microcode package (which disables the broken feature by default).
  * sysdeps/x86/cpu-features.c (get_common_indeces): Add stepping identification.
  (init_cpu_features): Add handling of Haswell.
* Add missing bug number to ChangeLog (Florian Weimer, 2016-12-18, 1 file changed, -0/+1)
* assert.h: allow gcc to detect assert(a = 1) errors (Jim Meyering, 2016-12-18, 2 files changed, -4/+25)
  * assert/assert.h (assert): Rewrite assert's definition so that a s/==/=/ typo, e.g., assert(errno = ENOENT), is not hidden from gcc's -Wparentheses by assert-added parentheses. The new definition uses "if (expr) /* empty */; else __assert_fail...", so gcc -Wall will now detect that type of error in an assert, too. The __STRICT_ANSI__ disjunct is to make this work also with both -ansi and -pedantic, which would reject the use of ({...}). I would have preferred to use __extension__ to mark that, but doing so would mistakenly suppress warnings about any extension in the user-supplied "expr". E.g., "assert ( ({1;}) )" must continue to evoke a warning.
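A rough sketch of the shape being described, assuming GNU statement expressions are available (this is not the exact glibc macro, which also handles NDEBUG, message formatting and the strict-ANSI fallback):

```c
#include <assert.h>   /* declares __assert_fail */

/* Because the expression feeds an `if' directly, gcc -Wall/-Wparentheses can
   still warn about a suspicious assignment such as my_assert (errno = ENOENT).  */
#define my_assert(expr)                                             \
  ({                                                                \
    if (expr)                                                       \
      ; /* empty */                                                 \
    else                                                            \
      __assert_fail (#expr, __FILE__, __LINE__, __func__);          \
  })
```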
* Link benchset tests against libsupport (Siddhesh Poyarekar, 2016-12-18, 2 files changed, -0/+6)
Benchsets in benchtests use test-skeleton, so they too need to be linked against the new libsupport DSO.
  * benchtests/Makefile (binaries-benchset): Depend on libsupport DSO.
* Add ChangeLog for previous commit (Siddhesh Poyarekar, 2016-12-18, 1 file changed, -0/+5)
Oops.
* Add -B to python invocation to avoid generating pyc files (Martin Galvan, 2016-12-18, 1 file changed, -1/+7)
Without -B, python invocations may result in generation of pyc files for modules within the source tree, which does not work well when the source tree is read-only.
2016-12-17  Martin Galvan  <martingalvan@sourceware.org>
  * Rules (python-flags, python-invoke): New.
  ($(test-printers-out)): Use $(python-flags).
* Document sNaN argument error handling. (Joseph Myers, 2016-12-16, 2 files changed, -0/+10)
TS 18661-1 says that "Whether a signaling NaN input causes a domain error is implementation-defined.". Considering it a domain error would (given glibc's math_errhandling definition) mean setting errno to EDOM. glibc consistently does not set errno for sNaN inputs (unless it does so for qNaN as well, i.e. iseqsig), so this patch adds documentation of the implementation-defined choice not to treat this case as a domain error.
  * manual/arith.texi (Math Error Reporting): Document that sNaN arguments are not considered domain errors.
* New string function explicit_bzero (from OpenBSD). (Zack Weinberg, 2016-12-16, 49 files changed, -22/+711)
explicit_bzero(s, n) is the same as memset(s, 0, n), except that the compiler is not allowed to delete a call to explicit_bzero even if the memory pointed to by 's' is dead after the call. Right now, this effect is achieved externally by having explicit_bzero be a function whose semantics are unknown to the compiler, and internally, with a no-op asm statement that clobbers memory. This does mean that small explicit_bzero operations cannot be expanded inline as small memset operations can, but on the other hand, small memset operations do get deleted by the compiler. Hopefully full compiler support for explicit_bzero will happen relatively soon. There are two new tests: test-explicit_bzero.c verifies the visible semantics in the same way as the existing test-bzero.c, and tst-xbzero-opt.c verifies the not-being-optimized-out property. The latter is conceptually based on a test written by Matthew Dempsky for the OpenBSD regression suite. The crypt() implementation has an immediate use for this new feature. We avoid having to add a GLIBC_PRIVATE alias for explicit_bzero by running all of libcrypt's calls through the fortified variant, __explicit_bzero_chk, which is in the impl namespace anyway. Currently I'm not aware of anything in libc proper that needs this, but the glue is all in place if it does become necessary. The legacy DES implementation wasn't bothering to clear its buffers, so I added that, mostly for consistency's sake.
  * string/explicit_bzero.c: New routine.
  * string/test-explicit_bzero.c, string/tst-xbzero-opt.c: New tests.
  * string/Makefile (routines, strop-tests, tests): Add them.
  * string/test-memset.c: Add ifdeffage for testing explicit_bzero.
  * string/string.h [__USE_MISC]: Declare explicit_bzero.
  * debug/explicit_bzero_chk.c: New routine.
  * debug/Makefile (routines): Add it.
  * debug/tst-chk1.c: Test fortification of explicit_bzero.
  * string/bits/string3.h: Fortify explicit_bzero.
  * manual/string.texi: Document explicit_bzero.
  * NEWS: Mention addition of explicit_bzero.
  * crypt/crypt-entry.c (__crypt_r): Clear key-dependent intermediate data before returning, using explicit_bzero.
  * crypt/md5-crypt.c (__md5_crypt_r): Likewise.
  * crypt/sha256-crypt.c (__sha256_crypt_r): Likewise.
  * crypt/sha512-crypt.c (__sha512_crypt_r): Likewise.
  * include/string.h: Redirect internal uses of explicit_bzero to __explicit_bzero_chk[_internal].
  * string/Versions [GLIBC_2.25]: Add explicit_bzero.
  * debug/Versions [GLIBC_2.25]: Add __explicit_bzero_chk.
  * sysdeps/arm/nacl/libc.abilist
  * sysdeps/unix/sysv/linux/aarch64/libc.abilist
  * sysdeps/unix/sysv/linux/alpha/libc.abilist
  * sysdeps/unix/sysv/linux/arm/libc.abilist
  * sysdeps/unix/sysv/linux/hppa/libc.abilist
  * sysdeps/unix/sysv/linux/i386/libc.abilist
  * sysdeps/unix/sysv/linux/ia64/libc.abilist
  * sysdeps/unix/sysv/linux/m68k/coldfire/libc.abilist
  * sysdeps/unix/sysv/linux/m68k/m680x0/libc.abilist
  * sysdeps/unix/sysv/linux/microblaze/libc.abilist
  * sysdeps/unix/sysv/linux/mips/mips32/fpu/libc.abilist
  * sysdeps/unix/sysv/linux/mips/mips32/nofpu/libc.abilist
  * sysdeps/unix/sysv/linux/mips/mips64/n32/libc.abilist
  * sysdeps/unix/sysv/linux/mips/mips64/n64/libc.abilist
  * sysdeps/unix/sysv/linux/nios2/libc.abilist
  * sysdeps/unix/sysv/linux/powerpc/powerpc32/fpu/libc.abilist
  * sysdeps/unix/sysv/linux/powerpc/powerpc32/nofpu/libc.abilist
  * sysdeps/unix/sysv/linux/powerpc/powerpc64/libc-le.abilist
  * sysdeps/unix/sysv/linux/powerpc/powerpc64/libc.abilist
  * sysdeps/unix/sysv/linux/s390/s390-32/libc.abilist
  * sysdeps/unix/sysv/linux/s390/s390-64/libc.abilist
  * sysdeps/unix/sysv/linux/sh/libc.abilist
  * sysdeps/unix/sysv/linux/sparc/sparc32/libc.abilist
  * sysdeps/unix/sysv/linux/sparc/sparc64/libc.abilist
  * sysdeps/unix/sysv/linux/tile/tilegx/tilegx32/libc.abilist
  * sysdeps/unix/sysv/linux/tile/tilegx/tilegx64/libc.abilist
  * sysdeps/unix/sysv/linux/tile/tilepro/libc.abilist
  * sysdeps/unix/sysv/linux/x86_64/64/libc.abilist
  * sysdeps/unix/sysv/linux/x86_64/x32/libc.abilist: Add entries for explicit_bzero and __explicit_bzero_chk.
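A minimal sketch of the mechanism described above, assuming a GCC-style asm barrier is sufficient (the real routine is string/explicit_bzero.c; my_explicit_bzero is just an illustrative name):

```c
#include <string.h>

void my_explicit_bzero (void *s, size_t n)
{
  memset (s, 0, n);
  /* Pretend the zeroed memory is observed so the compiler cannot treat the
     memset above as a dead store.  */
  __asm__ __volatile__ ("" : : "r" (s) : "memory");
}
```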
* Define FE_SNANS_ALWAYS_SIGNAL. (Joseph Myers, 2016-12-16, 6 files changed, -3/+77)
TS 18661-1 defines a macro FE_SNANS_ALWAYS_SIGNAL in <fenv.h>, to indicate that the recommended practice regarding sNaNs (that operations always produce a qNaN output with "invalid" exception, even in the fmax / fmin / hypot / pow cases where a qNaN input would not result in qNaN output) is followed. Now that those functions with C99 special cases for NaNs have been fixed not to apply those special cases to sNaN, only to qNaN, glibc follows that recommended practice. This patch makes it define the corresponding macro. Since compiler optimizations may affect whether sNaNs behave as expected and the macro relates to both language and library features, it is only defined if __SUPPORT_SNAN__ is defined (which GCC defines for -fsignaling-nans). It is also not defined if FE_INVALID is undefined, since the recommended practice specifically refers to raising the "invalid" exception, so it seems inappropriate to define the macro for soft-float cases without support for exceptions. (Further refinement would be possible in cases where bits/fenv.h is shared by configurations both with and without exceptions support.) Tested for x86_64 and x86, and also did compile-only testing for nios2 to cover the no-exceptions case.
  * math/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT) && FE_INVALID && __SUPPORT_SNAN__] (FE_SNANS_ALWAYS_SIGNAL): New macro.
  * math/test-fe-snans-always-signal.c: New file.
  * math/Makefile (tests): Add test-fe-snans-always-signal.
  (CFLAGS-test-fe-snans-always-signal.c): New variable.
  * manual/arith.texi (Infinity and NaN): Document FE_SNANS_ALWAYS_SIGNAL.
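A small illustrative check, assuming the program is built with -fsignaling-nans so that __SUPPORT_SNAN__ is defined (otherwise the macro stays undefined by design):

```c
#include <fenv.h>
#include <stdio.h>

int main (void)
{
#ifdef FE_SNANS_ALWAYS_SIGNAL
  puts ("sNaN operands always raise the invalid exception here");
#else
  puts ("no TS 18661-1 guarantee about sNaN operands");
#endif
  return 0;
}
```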
* Fix typos and missing closing bracket in test-memchr.c (Adhemerval Zanella, 2016-12-16, 2 files changed, -2/+7)
  * string/test-memchr.c (do_test): Typo on ‘byte’ and missing closing bracket.
* Make build-many-glibcs.py flush stdout before execv. (Joseph Myers, 2016-12-16, 2 files changed, -0/+6)
When build-many-glibcs.py re-execs itself with execv, any buffered output on stdout may be lost (in particular, messages intended to go to a bot's log about the re-exec taking place). This patch makes it flush stdout before execv, similar to the flush before running a subprocess from the bot that is done to ensure output appears in the right order.
  * scripts/build-many-glibcs.py (Context.exec_self): Flush stdout before calling execv.
* Fix powerpc64/power7 memchr for large input sizes (Adhemerval Zanella, 2016-12-16, 3 files changed, -10/+49)
The current optimized powerpc64/power7 memchr uses a strategy that checks p against align(p+n) (where 'p' is the input char pointer and n the maximum size to check for the byte) without taking care of possible overflow in the pointer addition for a large 'n'. It was triggered by 3038145ca23, where the default rawmemchr (used to create the ppc64 rawmemchr in ifunc selection) now uses memchr (p, c, (size_t)-1) in its implementation. This patch fixes it by implementing a saturated addition where overflow sets the maximum pointer size to UINTPTR_MAX. Checked on powerpc64le-linux-gnu.
  [BZ #20971]
  * sysdeps/powerpc/powerpc64/power7/memchr.S (__memchr): Avoid overflow in pointer addition.
  * string/test-memchr.c (do_test): Add an argument to pass as the size on memchr.
  (test_main): Add check for SIZE_MAX.
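The idea of the saturated addition, sketched in C (the actual fix is in the power7 assembly; saturated_add is a hypothetical helper):

```c
#include <stdint.h>
#include <stddef.h>

static inline uintptr_t saturated_add (uintptr_t p, size_t n)
{
  uintptr_t end = p + n;
  /* If the addition wrapped around, clamp to the top of the address space
     so align (p + n) can never compare below p.  */
  return end < p ? UINTPTR_MAX : end;
}
```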
* Make w_scalbln type-generic (Gabriel F. T. Gomes, 2016-12-16, 7 files changed, -110/+27)
This patch converts the wrapper scalbln (which set errno directly rather than doing anything with __kernel_standard) to use the type-generic template machinery, in the same way that has been done for ldexp. Tested for powerpc64le, s390, and x86_64.
* Fix x86, x86_64 fmax, fmin sNaN handling, add tests (bug 20947). (Joseph Myers, 2016-12-15, 12 files changed, -27/+322)
Various fmax and fmin function implementations mishandle sNaN arguments: (a) When both arguments are NaNs, the return value should be a qNaN, but sometimes it is an sNaN if at least one argument is an sNaN. (b) Under TS 18661-1 semantics, if either argument is an sNaN then the result should be a qNaN (whereas if one argument is a qNaN and the other is not a NaN, the result should be the non-NaN argument). Various implementations treat sNaNs like qNaNs here. This patch fixes the x86 and x86_64 versions (ignoring float and double for 32-bit x86 given the inability to reliably avoid the sNaN turning into a qNaN before it gets to the called function). Tests of sNaN inputs to these functions are added. Note on architecture versions I haven't changed for this issue: AArch64 already gets this right (it uses a hardware instruction with the correct semantics for both quiet and signaling NaNs) and does not need changes. It's possible Alpha, IA64, SPARC might need changes (this would be shown by the testsuite if so). Tested for x86_64 and x86 (both i686 and i586 builds, to cover the different x86 implementations).
  [BZ #20947]
  * sysdeps/i386/fpu/s_fmaxl.S (__fmaxl): Add the arguments when either is a signaling NaN.
  * sysdeps/i386/fpu/s_fminl.S (__fminl): Likewise. Make code follow fmaxl more closely.
  * sysdeps/i386/i686/fpu/s_fmaxl.S (__fmaxl): Add the arguments when either is a signaling NaN.
  * sysdeps/i386/i686/fpu/s_fminl.S (__fminl): Likewise.
  * sysdeps/x86_64/fpu/s_fmax.S (__fmax): Likewise.
  * sysdeps/x86_64/fpu/s_fmaxf.S (__fmaxf): Likewise.
  * sysdeps/x86_64/fpu/s_fmaxl.S (__fmaxl): Likewise.
  * sysdeps/x86_64/fpu/s_fmin.S (__fmin): Likewise.
  * sysdeps/x86_64/fpu/s_fminf.S (__fminf): Likewise.
  * sysdeps/x86_64/fpu/s_fminl.S (__fminl): Likewise.
  * math/libm-test.inc (fmax_test_data): Add tests of sNaN inputs.
  (fmin_test_data): Likewise.
* Fix assertion failure on test timeout (Andreas Schwab, 2016-12-15, 2 files changed, -1/+6)
* Fix powerpc fmax, fmin sNaN handling (bug 20947). (Joseph Myers, 2016-12-15, 3 files changed, -2/+79)
Various fmax and fmin function implementations mishandle sNaN arguments: (a) When both arguments are NaNs, the return value should be a qNaN, but sometimes it is an sNaN if at least one argument is an sNaN. (b) Under TS 18661-1 semantics, if either argument is an sNaN then the result should be a qNaN (whereas if one argument is a qNaN and the other is not a NaN, the result should be the non-NaN argument). Various implementations treat sNaNs like qNaNs here. This patch fixes the powerpc versions of these functions (shared by float and double, 32-bit and 64-bit). The structure of those versions is that all ordered cases are already handled before anything dealing with the case where the arguments are unordered; thus, this patch causes no change to the code executed in the common case (neither argument a NaN). Tested for powerpc (32-bit and 64-bit), together with tests to be added along with the x86_64 / x86 fixes.
  [BZ #20947]
  * sysdeps/powerpc/fpu/s_fmax.S (__fmax): Add the arguments when either is a signaling NaN.
  * sysdeps/powerpc/fpu/s_fmin.S (__fmin): Likewise.
* Fix generic fmax, fmin sNaN handling (bug 20947). (Joseph Myers, 2016-12-14, 3 files changed, -2/+21)
Various fmax and fmin function implementations mishandle sNaN arguments: (a) When both arguments are NaNs, the return value should be a qNaN, but sometimes it is an sNaN if at least one argument is an sNaN. (b) Under TS 18661-1 semantics, if either argument is an sNaN then the result should be a qNaN (whereas if one argument is a qNaN and the other is not a NaN, the result should be the non-NaN argument). Various implementations treat sNaNs like qNaNs here. This patch fixes the generic implementations used in the absence of architecture-specific versions. Tested for mips64 and powerpc (together with testcases that I'll add along with the x86_64 / x86 fixes).
  [BZ #20947]
  * math/s_fmax_template.c (M_DECL_FUNC (__fmax)): Add the arguments when either is a signaling NaN.
  * math/s_fmin_template.c (M_DECL_FUNC (__fmin)): Likewise.
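The TS 18661-1 rule this series implements, sketched in portable C rather than the actual template or assembly (fmax_ts18661 is an illustrative name; issignaling is a GNU extension):

```c
#define _GNU_SOURCE
#include <math.h>

static double fmax_ts18661 (double x, double y)
{
  if (issignaling (x) || issignaling (y))
    return x + y;        /* raises "invalid" and yields a qNaN */
  return fmax (x, y);    /* C99 rule: a lone qNaN loses to the other argument */
}
```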
* Refactor long double information into bits/long-double.h. (Joseph Myers, 2016-12-14, 21 files changed, -166/+121)
Information about whether the ABI of long double is the same as that of double is split between bits/mathdef.h and bits/wordsize.h. When the ABIs are the same, bits/mathdef.h defines __NO_LONG_DOUBLE_MATH. In addition, in the case where the same glibc binary supports both -mlong-double-64 and -mlong-double-128, bits/wordsize.h defines __LONG_DOUBLE_MATH_OPTIONAL, along with __NO_LONG_DOUBLE_MATH if this particular compilation is with -mlong-double-64. As part of the refactoring I proposed in <https://sourceware.org/ml/libc-alpha/2016-11/msg00745.html>, this patch puts all that information in a single header, bits/long-double.h. It is included from sys/cdefs.h alongside the include of bits/wordsize.h, so other headers generally do not need to include bits/long-double.h directly. Previously, various bits/mathdef.h headers and bits/wordsize.h headers had this long double information (including implicitly in some bits/mathdef.h headers through not having the defines present in the default version). After the patch, it's all in six bits/long-double.h headers. Furthermore, most of those new headers are not architecture-specific. Architectures with optional long double all use the ldbl-opt sysdeps directory, either in the order (ldbl-64-128, ldbl-opt, ldbl-128) or (ldbl-128ibm, ldbl-opt). Thus a generic header for the case where long double = double, and headers in ldbl-128, ldbl-96 and ldbl-opt, suffices to cover every architecture except for cases where long double properties vary between different ABIs sharing a set of installed headers; fortunately all the ldbl-opt cases share a single compiler-predefined macro __LONG_DOUBLE_128__ that can be used to tell whether this compilation is -mlong-double-64 or -mlong-double-128. The two cases where a set of headers is shared between ABIs with different long double properties, MIPS (o32 has long double = double, other ABIs use ldbl-128) and SPARC (32-bit has optional long double, 64-bit has required long double), need their own bits/long-double.h headers. As with bits/wordsize.h, multiple-include protection for this header is generally implicit through the include guards on sys/cdefs.h, and multiple inclusion is harmless in any case. There is one subtlety: the header must not define __LONG_DOUBLE_MATH_OPTIONAL if __NO_LONG_DOUBLE_MATH was defined before its inclusion, because doing so breaks how sysdeps/ieee754/ldbl-opt/nldbl-compat.h defines __NO_LONG_DOUBLE_MATH itself before including system headers. Subject to keeping that working, it would be reasonable to move these macros from defined/undefined #ifdef to always-defined 1/0 #if semantics, but this patch does not attempt to do so, just rearranges where the macros are defined. After this patch, the only use of bits/mathdef.h is the alpha one for modifying complex function ABIs for old GCC. Thus, all versions of the header other than the default and alpha versions are removed, as is the include from math.h. Tested for x86_64 and x86. Also did compilation-only testing with build-many-glibcs.py.
  * bits/long-double.h: New file.
  * sysdeps/ieee754/ldbl-128/bits/long-double.h: Likewise.
  * sysdeps/ieee754/ldbl-96/bits/long-double.h: Likewise.
  * sysdeps/ieee754/ldbl-opt/bits/long-double.h: Likewise.
  * sysdeps/mips/bits/long-double.h: Likewise.
  * sysdeps/unix/sysv/linux/sparc/bits/long-double.h: Likewise.
  * math/Makefile (headers): Add bits/long-double.h.
  * misc/sys/cdefs.h: Include <bits/long-double.h>.
  * stdlib/strtold.c: Include <bits/long-double.h> instead of <bits/wordsize.h>.
  * bits/mathdef.h [!_COMPLEX_H]: Do not allow inclusion.
  [!__NO_LONG_DOUBLE_MATH]: Remove conditional code.
  * math/math.h: Do not include <bits/mathdef.h>.
  * sysdeps/aarch64/bits/mathdef.h: Remove file.
  * sysdeps/alpha/bits/mathdef.h [!_COMPLEX_H]: Do not allow inclusion.
  * sysdeps/ia64/bits/mathdef.h: Remove file.
  * sysdeps/m68k/m680x0/bits/mathdef.h: Likewise.
  * sysdeps/mips/bits/mathdef.h: Likewise.
  * sysdeps/powerpc/bits/mathdef.h: Likewise.
  * sysdeps/s390/bits/mathdef.h: Likewise.
  * sysdeps/sparc/bits/mathdef.h: Likewise.
  * sysdeps/x86/bits/mathdef.h: Likewise.
  * sysdeps/s390/s390-32/bits/wordsize.h [!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]: Remove conditional code.
  * sysdeps/s390/s390-64/bits/wordsize.h [!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]: Likewise.
  * sysdeps/unix/sysv/linux/alpha/bits/wordsize.h [!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]: Likewise.
  * sysdeps/unix/sysv/linux/powerpc/bits/wordsize.h [!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]: Likewise.
  * sysdeps/unix/sysv/linux/sparc/bits/wordsize.h [!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]: Likewise.
* Include <linux/falloc.h> in bits/fcntl-linux.h. (Joseph Myers, 2016-12-14, 2 files changed, -9/+10)
This patch makes bits/fcntl-linux.h include <linux/falloc.h> to define the FALLOC_* flags under __USE_GNU (linux/falloc.h defines only those bits, nothing else). Tested for x86_64 and x86.
  * sysdeps/unix/sysv/linux/bits/fcntl-linux.h [__USE_GNU]: Include <linux/falloc.h>.
  (FALLOC_FL_KEEP_SIZE): Remove.
  (FALLOC_FL_PUNCH_HOLE): Likewise.
  (FALLOC_FL_COLLAPSE_RANGE): Likewise.
  (FALLOC_FL_ZERO_RANGE): Likewise.
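A usage sketch for the flags now provided via <linux/falloc.h>, assuming _GNU_SOURCE and a Linux filesystem that supports hole punching (punch_hole is a hypothetical helper):

```c
#define _GNU_SOURCE
#include <fcntl.h>

/* Deallocate len bytes at offset without changing the apparent file size.  */
int punch_hole (int fd, off_t offset, off_t len)
{
  return fallocate (fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                    offset, len);
}
```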
* Fix arg used as literal suffix in tst-strfrom.h (Gabriel F. T. Gomes, 2016-12-14, 2 files changed, -1/+5)
The macro ENTRY in tst-strfrom.h is used to generate the input values for each floating-point type (float, double, long double). It should append the parameter LSUF (literal suffix) to the floating-point number, but it is using CSUF (C function suffix). This patch fixes that. Tested for powerpc64le and x86_64.
* Consolidate renameat Linux implementation (Adhemerval Zanella, 2016-12-14, 3 files changed, -1/+35)
This patch consolidates the Linux renameat implementation in sysdeps/unix/sysv/linux/renameat.c. The renameat syscall was deprecated in b0da6d44 for newer architectures, so using the auto-generation list may generate wrappers that return ENOSYS. The current code tries to use __NR_renameat and, if it is not defined, it uses __NR_renameat2. Checked on x86_64 and aarch64.
  * sysdeps/unix/sysv/linux/renameat.c: New file.
  * sysdeps/unix/sysv/linux/syscalls.list: Remove renameat.
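A rough illustration of the fallback logic described above, written against the public syscall () wrapper rather than glibc's internal macros (my_renameat is a hypothetical name, not the new file's contents):

```c
#include <sys/syscall.h>
#include <unistd.h>

int my_renameat (int oldfd, const char *old, int newfd, const char *new)
{
#ifdef __NR_renameat
  return syscall (__NR_renameat, oldfd, old, newfd, new);
#else
  /* Newer ports only provide renameat2; flags == 0 gives renameat semantics.  */
  return syscall (__NR_renameat2, oldfd, old, newfd, new, 0);
#endif
}
```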
* Consolidate rename Linux implementation (Adhemerval Zanella, 2016-12-14, 2 files changed, -4/+15)
This patch consolidates the Linux rename implementation in sysdeps/unix/sysv/linux/rename.c. The current code tries to use __NR_rename if it is defined and applies the same strategy for __NR_renameat and __NR_renameat2. Checked on x86_64 and aarch64.
  * sysdeps/unix/sysv/linux/rename.c: New file.
  * sysdeps/unix/sysv/linux/generic/rename.c: Remove file.
* Add [BZ #19398] marker to ChangeLog entry. (Joseph Myers, 2016-12-14, 1 file changed, -0/+1)
* Improve strtok and strtok_r performance (Wilco Dijkstra, 2016-12-14, 6 files changed, -92/+88)
Instead of calling strpbrk, which calls strcspn, call strcspn directly so we get the end of the token without an extra call to rawmemchr. Also avoid an unnecessary call to strcspn after the last token by adding an early exit for an empty string. Change strtok to tail-call strtok_r to avoid unnecessary code duplication. Remove the special header optimization for strtok_r of a 1-character constant string - both strspn and strcspn contain optimizations for this case. Benchmarking this showed similar performance in the worst case, but up to 5.5x better performance in the "found" case for large inputs.
  * benchtests/bench-strtok.c (oldstrtok): Add old implementation.
  * string/strtok.c (strtok): Change to tailcall __strtok_r.
  * string/strtok_r.c (__strtok_r): Optimize for performance.
  * string/string-inlines.c (__old_strtok_r_1c): New function.
  * string/bits/string2.h (__strtok_r): Move to string-inlines.c.
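A simplified sketch of the new strategy (not the exact string/strtok_r.c code): strspn skips leading delimiters, a single strcspn finds the end of the token, and an empty remainder exits early.

```c
#include <string.h>

char *my_strtok_r (char *s, const char *delim, char **save_ptr)
{
  if (s == NULL)
    s = *save_ptr;

  if (*s == '\0')                  /* early exit: nothing left to scan */
    {
      *save_ptr = s;
      return NULL;
    }

  s += strspn (s, delim);          /* skip leading delimiters */
  if (*s == '\0')
    {
      *save_ptr = s;
      return NULL;
    }

  char *end = s + strcspn (s, delim);  /* one call finds the token end */
  if (*end == '\0')
    {
      *save_ptr = end;
      return s;
    }

  *end = '\0';
  *save_ptr = end + 1;
  return s;
}
```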
* Make w_log1p type-generic (Gabriel F. T. Gomes, 2016-12-14, 7 files changed, -131/+21)
This patch converts the wrapper log1p (which set errno directly rather than doing anything with __kernel_standard) to use the type-generic template machinery, in the same way that has been done for ilogb. Tested for powerpc64le, s390, and x86_64.
* Improve generic rawmemchr for targets that don't have an assembler version (Wilco Dijkstra, 2016-12-14, 2 files changed, -149/+10)
Do this by tail-calling memchr with the maximum size. If a target has an optimized memchr this is significantly faster; if not, then this makes little difference. Also optimize the special case of zero to use strlen, as this is typically faster than memchr.
  * string/rawmemchr.c (RAWMEMCHR): Use faster memchr/strlen.
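The generic fallback reduces to a couple of lines; a sketch under the same assumptions as the commit (my_rawmemchr is an illustrative name):

```c
#include <string.h>

void *my_rawmemchr (const void *s, int c)
{
  if (c != '\0')
    return memchr (s, c, (size_t) -1);              /* unbounded search */
  return (char *) s + strlen ((const char *) s);    /* '\0': strlen is faster */
}
```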
* Use Linux 4.9 (headers) in build-many-glibcs.py. (Joseph Myers, 2016-12-14, 2 files changed, -1/+6)
This patch updates build-many-glibcs.py to use Linux 4.9 for kernel headers unless another version is explicitly specified. Note that when a version changes like this you'll need to use --replace-sources when updating an existing checkout to tell build-many-glibcs.py it's OK to delete and replace the sources of a component for which the version used has changed.
  * scripts/build-many-glibcs.py (Context.checkout): Default Linux kernel version to 4.9.
* Better design of libm.a installation rule. (Andrew Senkevich, 2016-12-13, 2 files changed, -4/+12)
  * math/Makefile ($(inst_libdir)/libm-$(version).a): New target.
  ($(inst_libdir)/libm.a): Fix rule to create the target only.
* powerpc: remove _dl_platform_string and _dl_powerpc_platforms (Andreas Schwab, 2016-12-13, 3 files changed, -62/+19)
* nptl/tst-cancel7: Add missing case label (Florian Weimer, 2016-12-13, 2 files changed, -0/+5)
The label was lost during the conversion to the new test framework in commit c23de0aacbeaa7a091609b35764bed931475a16d, and the --command option is currently unused.
* Expose linking against libsupport as make dependency (Florian Weimer, 2016-12-13, 3 files changed, -6/+15)
This ensures that tests are rebuilt when libsupport changes.
* powerpc: strncmp optimization for power9 (Rajalakshmi Srinivasaraghavan, 2016-12-13, 6 files changed, -1/+433)
Compared to the power8 optimization, vectorized loops are used for strings longer than 32 bytes. Tested on the power9 ppc64le simulator.
* Add getentropy, getrandom, <sys/random.h> [BZ #17252] (Florian Weimer, 2016-12-12, 41 files changed, -4/+693)
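A hedged usage sketch for the new interfaces (assumes glibc 2.25's <sys/random.h>; error handling kept minimal):

```c
#include <sys/random.h>
#include <sys/types.h>
#include <stdio.h>

int main (void)
{
  unsigned char key[16];

  if (getentropy (key, sizeof key) != 0)   /* limited to 256 bytes per call */
    {
      perror ("getentropy");
      return 1;
    }

  ssize_t n = getrandom (key, sizeof key, 0);   /* flags == 0: default pool */
  printf ("getrandom returned %zd bytes\n", n);
  return 0;
}
```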