path: root/sysdeps/x86_64
* x86: Add thresholds for "rep movsb/stosb" to tunables (H.J. Lu, 2020-07-06; 2 files, -26/+2)

  Add x86_rep_movsb_threshold and x86_rep_stosb_threshold to tunables so
  that the thresholds for "rep movsb" and "rep stosb" can be updated at
  run time.  Note that a user-specified "rep movsb" threshold smaller
  than the minimum threshold will be ignored.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* i386: Use builtin sqrtl (Adhemerval Zanella, 2020-06-22; 1 file, -27/+0)

  Checked on i686-linux-gnu.
* x86_64: Use builtin sqrt{f,l} (Adhemerval Zanella, 2020-06-22; 4 files, -65/+31)

  Checked on x86_64-linux-gnu.
* Fix avx2 strncmp offset compare condition check [BZ #25933] (Sunil K Pandey, 2020-06-17; 1 file, -0/+15)

  strcmp-avx2.S: In the AVX2 strncmp function, strings are compared in
  chunks of four vector lengths (i.e. 32x4 = 128 bytes for AVX2).  After
  the first four-vector comparison, the code must check whether it has
  already passed the given offset.  This patch implements the AVX2
  offset check condition for strncmp when both strings compare equal
  over the first four vectors.
* x86_64: Use %xmmN with vpxor to clear a vector register (H.J. Lu, 2020-06-17; 2 files, -3/+3)

  Since "vpxor %xmmN, %xmmN, %xmmN" clears the whole vector register,
  use %xmmN, instead of %ymmN, with vpxor to clear a vector register.
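  A minimal sketch of the point (my illustration using GCC inline
  assembly, not the patched glibc code): zeroing via the %xmm form still
  clears the full vector register, so nothing is lost by avoiding %ymm.

    /* sketch.c: build with gcc -mavx sketch.c.  The VPXOR below zeroes
       every bit of vector register 0; using the %xmm operand form is
       what the patch standardizes on.  */
    int
    main (void)
    {
      __asm__ volatile ("vpxor %%xmm0, %%xmm0, %%xmm0" ::: "xmm0");
      return 0;
    }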
* dl-runtime: make reloc_{offset,index} arch-overridable functions (Vineet Gupta, 2020-06-05; 2 files, -9/+35)

  The existing macros are fragile and expect local variables with
  specific names.  Fix this by defining them as functions, with default
  implementations in a new header dl-runtime.h that arches can override
  if need be.

  This came up during the ARC port review, hence the need for the pltgot
  argument in reloc_index(), which is not needed by existing ports.

  This patch potentially only affects the hppa/x86 ports; build tested
  for both of those configs and a few more.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
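  A rough sketch of what such overridable defaults can look like (the
  parameter names and exact signatures here are illustrative, not
  necessarily glibc's):

    #include <stdint.h>
    #include <stddef.h>

    /* Default helpers: the PLT trampoline hands the resolver a
       relocation argument, interpreted by default as a byte offset
       into the relocation table.  An arch header can redefine these.  */
    static inline uintptr_t
    reloc_offset (uintptr_t plt0, uintptr_t pltn)
    {
      return pltn;
    }

    static inline uintptr_t
    reloc_index (uintptr_t plt0, uintptr_t pltn, size_t size)
    {
      return pltn / size;   /* offset divided by per-entry size */
    }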
* Linux: Remove remnants of the getcpu cache (Florian Weimer, 2020-05-16; 2 files, -2/+1)

  The getcpu cache was removed from the kernel in Linux 2.6.24.  Support
  for it in the glibc sched_getcpu implementation was removed in commit
  dd26c44403582fdf10d663170f947dfe4b3207a0 ("Consolidate sched_getcpu").
* x86-64: Use RDX_LP on __x86_shared_non_temporal_threshold [BZ #25966] (H.J. Lu, 2020-05-09; 1 file, -3/+3)

  Since __x86_shared_non_temporal_threshold is defined as

    long int __x86_shared_non_temporal_threshold;

  and long int is 4 bytes for x32, use RDX_LP to compare against
  __x86_shared_non_temporal_threshold in assembly code.
* Remove unused floating-point configuration from gmp-impl.h. (Joseph Myers, 2020-04-28; 1 file, -2/+0)

  This patch removes the IEEE_DOUBLE_BIG_ENDIAN and
  IEEE_DOUBLE_MIXED_ENDIAN macros from gmp-impl.h and gmp-mparam.h, and
  the ieee_double_extract union from gmp-impl.h.  The macros were used
  only in defining the union, which was used nowhere in glibc.  As GMP's
  gmp-impl.h is over 5000 lines, the file in glibc is so far from the
  GMP version that it doesn't seem to make sense to keep things there
  that are not relevant in glibc.  (I expect there is plenty more in the
  header after this patch that is also not relevant in glibc and can be
  cleaned up later.)

  Tested with build-many-glibcs.py that installed stripped shared
  libraries are unchanged by this patch.
* Revert "x86_64: Add SSE sfp-exceptions"Adhemerval Zanella2020-04-201-3/+1
| | | | | | | | | | | The __sfp_handle_exceptions is not fully correct regarding raising exceptions, since there is no direct way to raise only FP_EX_OVERFLOW nor FP_EX_UNDERFLOW for SSE mode. Both libgcc and feraiseexcept rely on x87 mode to accomplish it. This reverts commit 460ee50de054396cc9791ff4cfdc2f5029fb923d. Checked on x86_64.
* x86_64: Add SSE sfp-exceptions (Adhemerval Zanella, 2020-04-17; 1 file, -1/+3)

  The exported x86_64 fenv.h functions operate on both i387 and SSE
  (since they must work for float, double, and long double), while the
  internal libc_fe* functions set either SSE state (float, double, and
  float128) or i387 state (long double).  The libgcc
  __sfp_handle_exceptions (used in the float128 implementation),
  however, will set either SSE or i387 exceptions depending on the
  exception to raise.  This broke the internal assumption of float128
  that only SSE operations will be used.

  This patch reimplements the libgcc __sfp_handle_exceptions to use only
  SSE operations and sets glibc to use it instead of libgcc's own
  implementation.

  I think we should fix libgcc in a similar manner: checking
  config/i386/64/sfp-machine.h, it already only supports the SSE
  rounding mode, and the x86_64 ABI also expects float128 to use SSE
  registers [1] (although it is not clear how a future implementation
  might handle it).

  Checked on x86_64-linux-gnu.

  [1] https://github.com/hjl-tools/x86-psABI/wiki/X86-psABI
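  A minimal sketch of the SSE-only technique (my illustration, not the
  patch itself): raise exceptions by performing double arithmetic, which
  on x86-64 goes through SSE and therefore sets flags in MXCSR rather
  than in the x87 status word.

    #include <float.h>

    /* Each helper performs an SSE double operation whose side effect
       is the desired exception flag in MXCSR; volatile stops the
       compiler from folding the arithmetic away.  */
    static void
    sse_raise_overflow (void)
    {
      volatile double huge = DBL_MAX;
      huge *= huge;                 /* overflow (and inexact) */
    }

    static void
    sse_raise_divbyzero (void)
    {
      volatile double zero = 0.0, r;
      r = 1.0 / zero;               /* divide-by-zero */
      (void) r;
    }

    static void
    sse_raise_invalid (void)
    {
      volatile double zero = 0.0, r;
      r = zero / zero;              /* invalid: 0/0 yields a NaN */
      (void) r;
    }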
* nptl: Remove x86_64 cancellation assembly implementations [BZ #25765] (Adhemerval Zanella, 2020-04-03; 1 file, -7/+0)

  All cancellable syscalls are done by C implementations, so there is no
  need to use a specialized implementation to optimize register usage.
  It fixes BZ #25765.

  Checked on x86_64-linux-gnu.
* math: Add inputs that yield larger errors for float type (x86_64) (Paul Zimmermann, 2020-03-31; 1 file, -42/+46)

  The corner cases included were generated using an exhaustive search
  over all float/binary32 values on x86_64 (comparing to MPFR for
  correct rounding to nearest).  For the j0/j1/y0 functions, only cases
  with ulp error <= 9 were included.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* math: Remove inline math tests (Adhemerval Zanella, 2020-03-19; 1 file, -1116/+0)

  With the mathinline removal there is no need to keep building and
  testing inline math tests.  The gen-libm-tests.py support to generate
  ULP_I_* is removed and all libm-test-ulps files are updated to no
  longer have the i{float,double,ldouble} entries.  The support for
  no-test-inline is also removed from gen-auto-libm-tests, and the
  auto-libm-test-out-* files were regenerated.

  Checked on x86_64-linux-gnu and i686-linux-gnu.
* x86: Avoid single-argument _Static_assert in <tls.h> (Florian Weimer, 2020-02-17; 1 file, -8/+12)

  Older GCC versions do not support this extension.  Fixes commit
  f1bdee61797 ("x86 tls: Use _Static_assert for TLS access size
  assertion").
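  For illustration (my example, not the patched code): the two-argument
  form is standard C11 and works with older GCC, while the
  single-argument form is a newer extension.

    #include <stddef.h>

    /* Two-argument form: portable C11, accepted by older GCC.  */
    _Static_assert (sizeof (void *) == 8, "64-bit pointers expected");

    /* Single-argument form: an extension that older GCC rejects,
       which is why <tls.h> avoids it:
       _Static_assert (sizeof (void *) == 8);  */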
* x86 tls: Use _Static_assert for TLS access size assertion (Samuel Thibault, 2020-02-17; 1 file, -26/+20)
* ld.so: Do not export free/calloc/malloc/realloc functions [BZ #25486] (Florian Weimer, 2020-02-15; 1 file, -6/+0)

  Exporting functions and relying on symbol interposition from libc.so
  makes the choice of implementation dependent on DT_NEEDED order, which
  is not what some compiler drivers expect.

  This commit replaces one magic mechanism (symbol interposition) with
  another one (preprocessor-/compiler-based redirection).  This makes
  the hand-over from the minimal malloc to the full malloc more
  explicit.

  Removing the ABI symbols is backwards-compatible because libc.so is
  always in scope, and the dynamic loader will find the malloc-related
  symbols there since commit f0b2132b35248c1f4a80f62a2c38cddcc802aa8c
  ("ld.so: Support moving versioned symbols between sonames
  [BZ #24741]").

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
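  A hypothetical sketch of this kind of preprocessor-based redirection
  (the identifiers below are made up; the commit's actual names differ):

    #include <stddef.h>

    /* Private minimal-allocator entry points inside ld.so.  */
    void *__minimal_malloc (size_t size);
    void __minimal_free (void *ptr);

    /* Redirect allocator calls at compile time, instead of exporting
       malloc/free and relying on symbol interposition at run time.  */
    #define malloc __minimal_malloc
    #define free __minimal_free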
* linux: Consolidate INLINE_SYSCALL (Adhemerval Zanella, 2020-02-14; 1 file, -1/+1)

  With all Linux ABIs using the expected Linux kABI to indicate syscall
  errors, there is no need to replicate INLINE_SYSCALL.  The generic
  Linux sysdep.h includes errno.h even for !__ASSEMBLER__, which is ok
  now and allows cleaning up some archaic code that assumed otherwise.

  Checked with a build against all affected ABIs.
* Add libm_alias_finite for _finite symbols (Wilco Dijkstra, 2020-01-03; 21 files, -31/+53)

  This patch adds a new macro, libm_alias_finite, to define all _finite
  symbols.  It sets each _finite symbol as a compat symbol based on its
  first version (obtained from the definition in the build-generated
  first-versions.h).

  The <fn>f128_finite symbols were introduced in glibc 2.26 and so need
  special treatment in code that is shared between long double and
  float128.  This is done by adding a list, similar to the internal
  symbol redefinitions, in
  sysdeps/ieee754/float128/float128_private.h.

  Alpha also needs some tricky changes to ensure we still emit two
  compat symbols for sqrt(f).

  Passes build-many-glibcs.

  Co-authored-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* Update copyright dates with scripts/update-copyrights. (Joseph Myers, 2020-01-01; 515 files, -515/+515)
* Always use wordsize-64 version of s_trunc.c. (Stefan Liebler, 2019-12-11; 1 file, -1/+1)

  This patch replaces s_trunc.c in sysdeps/dbl-64 with the one in
  sysdeps/dbl-64/wordsize-64 and removes the latter one.  The code is
  not changed except for changes in code style.  Also adjusted the
  include path in the x86_64 and sparc64 files.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Always use wordsize-64 version of s_ceil.c. (Stefan Liebler, 2019-12-11; 1 file, -1/+1)

  This patch replaces s_ceil.c in sysdeps/dbl-64 with the one in
  sysdeps/dbl-64/wordsize-64 and removes the latter one.  The code is
  not changed except for changes in code style.  Also adjusted the
  include path in the x86_64 and sparc64 files.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Always use wordsize-64 version of s_floor.c. (Stefan Liebler, 2019-12-11; 1 file, -1/+1)

  This patch replaces s_floor.c in sysdeps/dbl-64 with the one in
  sysdeps/dbl-64/wordsize-64 and removes the latter one.  The code is
  not changed except for changes in code style.  Also adjusted the
  include path in the x86_64 and sparc64 files.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Always use wordsize-64 version of s_rint.c. (Stefan Liebler, 2019-12-11; 1 file, -1/+1)

  This patch replaces s_rint.c in sysdeps/dbl-64 with the one in
  sysdeps/dbl-64/wordsize-64 and removes the latter one.  The code is
  not changed except for changes in code style.  Also adjusted the
  include path in the x86_64 file.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Always use wordsize-64 version of s_nearbyint.c. (Stefan Liebler, 2019-12-11; 1 file, -1/+1)

  This patch replaces s_nearbyint.c in sysdeps/dbl-64 with the one in
  sysdeps/dbl-64/wordsize-64 and removes the latter one.  The code is
  not changed except for changes in code style.  Also adjusted the
  include path in the x86_64 file.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* elf: Do not run IFUNC resolvers for LD_DEBUG=unused [BZ #24214] (Florian Weimer, 2019-12-02; 1 file, -1/+2)

  This commit adds missing skip_ifunc checks to aarch64, arm, i386,
  sparc, and x86_64.  A new test case ensures that IRELATIVE IFUNC
  resolvers do not run in various diagnostic modes of the dynamic
  loader.

  Reviewed-By: Szabolcs Nagy <szabolcs.nagy@arm.com>
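  For context, a minimal IFUNC definition (my example using the GCC
  ifunc attribute, not the commit's test case): the resolver below is
  the kind of code the dynamic loader normally runs at relocation time
  and must now skip under LD_DEBUG=unused.

    /* fn resolves at relocation time via an IRELATIVE relocation.  */
    static int
    impl (void)
    {
      return 42;
    }

    static __typeof__ (impl) *
    resolve_fn (void)
    {
      /* A real resolver would pick among implementations based on
         CPU features.  */
      return impl;
    }

    int fn (void) __attribute__ ((ifunc ("resolve_fn")));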
* nptl: Add tests for internal pthread_rwlock_t offsets (Adhemerval Zanella, 2019-11-26; 1 file, -0/+6)

  This patch adds new build tests to check the internal field offsets of
  the internal pthread_rwlock_t definition.  Although the
  '__data.__flags' field layout should be preserved due to static
  initializers, the patch also adds tests for the futexes that may be
  used in shared memory (although using different libc versions in such
  a scenario is not really supported).

  Checked with a build against all affected ABIs.

  Change-Id: Iccc103d557de13d17e4a3f59a0cad2f4a640c148
* nptl: Cleanup mutex internal offset tests (Adhemerval Zanella, 2019-11-26; 1 file, -4/+0)

  The offsets of pthread_mutex_t __data.__nusers, __data.__spins,
  __data.__elision, and __data.__list are not required to be constant
  across releases.  Only __data.__kind is used for static initializers.
  This patch also adds an additional size check for __data.__kind.

  Checked with a build against affected ABIs.

  Change-Id: I7a4e48cc91b4c4ada57e9a5d1b151fb702bfaa9f
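  A sketch of this kind of build-time layout check (the expected-offset
  constant below is hypothetical; the real tests carry per-ABI values):

    #include <stddef.h>
    #include <pthread.h>

    /* Hypothetical expected value; the real offset is ABI-specific.  */
    #define MUTEX_KIND_OFFSET 16

    /* Static initializers bake in the position of __data.__kind, so a
       release that moved it would silently break old binaries; fail
       the build instead.  */
    _Static_assert (offsetof (pthread_mutex_t, __data.__kind)
                    == MUTEX_KIND_OFFSET,
                    "pthread_mutex_t __data.__kind offset is ABI");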
* Remove x64 _finite tests and references (Wilco Dijkstra, 2019-10-21; 51 files, -428/+57)

  Remove _finite tests and references from x86_64.  Rather than calling
  __exp_finite, use exp directly (since it's the same entry point).
  x86_64 builds and passes the testsuite.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Prefer https to http for gnu.org and fsf.org URLs (Paul Eggert, 2019-09-07; 521 files, -521/+521)

  Also, change sources.redhat.com to sourceware.org.  This patch was
  automatically generated by running the following shell script, which
  uses GNU sed, and which avoids modifying files imported from upstream:

    sed -ri '
      s,(http|ftp)(://(.*\.)?(gnu|fsf|sourceware)\.org($|[^.]|\.[^a-z])),https\2,g
      s,(http|ftp)(://(.*\.)?)sources\.redhat\.com($|[^.]|\.[^a-z]),https\2sourceware.org\4,g
    ' \
      $(find $(git ls-files) -prune -type f \
          ! -name '*.po' \
          ! -name 'ChangeLog*' \
          ! -path COPYING ! -path COPYING.LIB \
          ! -path manual/fdl-1.3.texi ! -path manual/lgpl-2.1.texi \
          ! -path manual/texinfo.tex ! -path scripts/config.guess \
          ! -path scripts/config.sub ! -path scripts/install-sh \
          ! -path scripts/mkinstalldirs ! -path scripts/move-if-change \
          ! -path INSTALL ! -path locale/programs/charmap-kw.h \
          ! -path po/libc.pot ! -path sysdeps/gnu/errlist.c \
          ! '(' -name configure \
                -execdir test -f configure.ac -o -f configure.in ';' ')' \
          ! '(' -name preconfigure \
                -execdir test -f preconfigure.ac ';' ')' \
          -print)

  and then by running 'make dist-prepare' to regenerate files built from
  the altered files, and then executing the following to clean up:

    chmod a+x sysdeps/unix/sysv/linux/riscv/configure
    # Omit irrelevant whitespace and comment-only changes,
    # perhaps from a slightly-different Autoconf version.
    git checkout -f \
      sysdeps/csky/configure \
      sysdeps/hppa/configure \
      sysdeps/riscv/configure \
      sysdeps/unix/sysv/linux/csky/configure
    # Omit changes that caused a pre-commit check to fail like this:
    # remote: *** error: sysdeps/powerpc/powerpc64/ppc-mcount.S: trailing lines
    git checkout -f \
      sysdeps/powerpc/powerpc64/ppc-mcount.S \
      sysdeps/unix/sysv/linux/s390/s390-64/syscall.S
    # Omit change that caused a pre-commit check to fail like this:
    # remote: *** error: sysdeps/sparc/sparc64/multiarch/memcpy-ultra3.S: last line does not end in newline
    git checkout -f sysdeps/sparc/sparc64/multiarch/memcpy-ultra3.S
* Use generic memset/memcpy/memmove in benchtests (Wilco Dijkstra, 2019-08-30; 1 file, -1/+0)

  Use the generic C memset/memcpy/memmove in benchtests, since comparing
  against a slow byte-oriented implementation makes no sense.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

  2019-08-29  Wilco Dijkstra  <wdijkstr@arm.com>

  * benchtests/bench-memcpy.c (simple_memcpy): Remove.
    (generic_memcpy): Include generic C memcpy.
  * benchtests/bench-memmove.c (simple_memmove): Remove.
    (generic_memmove): Include generic C memmove.
  * benchtests/bench-memset.c (simple_memset): Remove.
    (generic_memset): Include generic C memset.
  * benchtests/bench-memset-large.c (simple_memset): Remove.
    (generic_memset): Include generic C memset.
  * benchtests/bench-memset-walk.c (simple_memset): Remove.
    (generic_memset): Include generic C memset.
  * string/memcpy.c (MEMCPY): Add defines to enable redirection.
  * string/memset.c (MEMSET): Likewise.
  * sysdeps/x86_64/memcopy.h: Remove empty file.
* x86-64: Compile branred.c with -mprefer-vector-width=128 [BZ #24603] (H.J. Lu, 2019-07-24; 3 files, -0/+43)

  When compiled with -O3 and AVX, GCC 8 and 9 optimize some loops in
  sysdeps/ieee754/dbl-64/branred.c with 256-bit vector instructions,
  which leads to a store-forwarding stall:

  https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90579

  There is no easy fix in the compiler.  This patch limits the vector
  width to 128 bits to work around this issue.  It improves the
  performance of sin and cos by more than 40% on Skylake when compiled
  with -O3 -march=skylake.

  Tested with GCC 7/8/9 on x86-64.

  [BZ #24603]
  * sysdeps/x86_64/configure.ac: Check if -mprefer-vector-width=128
    works.
  * sysdeps/x86_64/configure: Regenerated.
  * sysdeps/x86_64/fpu/Makefile (CFLAGS-branred.c): New.  Set to
    -mprefer-vector-width=128 if supported.
* x86: Add sysdeps/x86/dl-lookupcfg.h (H.J. Lu, 2019-06-26; 1 file, -31/+0)

  Since sysdeps/i386/dl-lookupcfg.h and sysdeps/x86_64/dl-lookupcfg.h
  are identical, we can replace them with sysdeps/x86/dl-lookupcfg.h.

  * sysdeps/i386/dl-lookupcfg.h: Moved to ...
  * sysdeps/x86/dl-lookupcfg.h: Here.
  * sysdeps/x86_64/dl-lookupcfg.h: Removed.
* wcsmbs: optimize wcscat (Adhemerval Zanella, 2019-02-27; 2 files, -5/+10)

  This patch rewrites wcscat using wcslen and wcscpy.  This is similar
  to the optimization done on strcat by 6e46de42fe.  The wcscpy changes
  are mainly to add the internal alias to avoid PLT calls.

  Checked on x86_64-linux-gnu and with a build against the affected
  architectures.

  * include/wchar.h (__wcscpy): New prototype.
  * sysdeps/powerpc/powerpc32/power4/multiarch/wcscpy-ppc32.c
    (__wcscpy): Route internal symbol to generic implementation.
  * sysdeps/powerpc/powerpc32/power4/multiarch/wcscpy.c (wcscpy):
    Add internal __wcscpy alias.
  * sysdeps/powerpc/powerpc64/multiarch/wcscpy.c (wcscpy): Likewise.
  * sysdeps/s390/wcscpy.c (wcscpy): Likewise.
  * sysdeps/x86_64/multiarch/wcscpy.c (wcscpy): Likewise.
  * wcsmbs/wcscpy.c (wcscpy): Add internal __wcscpy alias.
  * sysdeps/x86_64/multiarch/wcscpy-c.c (WCSCPY): Adjust macro to
    use generic implementation.
  * wcsmbs/wcscat.c (wcscat): Rewrite using wcslen and wcscpy.
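  The rewritten wcscat reduces to a couple of library calls; a sketch of
  the pattern (simplified, without glibc's internal-alias plumbing):

    #include <wchar.h>

    /* Append SRC to DEST: find the end with wcslen, then let the
       optimized wcscpy do the copying.  */
    wchar_t *
    wcscat_sketch (wchar_t *dest, const wchar_t *src)
    {
      wcscpy (dest + wcslen (dest), src);
      return dest;
    }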
* Add fall-through comments. (Joseph Myers, 2019-02-12; 1 file, -0/+2)

  This patch adds fall-through comments in some cases where -Wextra
  produces implicit-fallthrough warnings.

  The patch is non-exhaustive.  Apart from architecture-specific code
  for non-x86_64 architectures, it does not change sunrpc/xdr.c (legacy
  code, which probably should have such changes, but that is left to be
  dealt with separately), or places that already had comments about the
  fall-through but not matching the form expected by
  -Wimplicit-fallthrough=3 (the default level with -Wextra; my
  inclination is to adjust those comments to match rather than
  downgrading to -Wimplicit-fallthrough=1 to allow any comment), or one
  place where I thought the implicit fallthrough was not correct and so
  should be handled separately as a bug fix.

  I think the key thing to consider in review of this patch is whether
  the fall-through is indeed intended and correct in each place where
  such a comment is added.

  Tested for x86_64.

  * elf/dl-exception.c (_dl_exception_create_format): Add
    fall-through comments.
  * elf/ldconfig.c (parse_conf_include): Likewise.
  * elf/rtld.c (print_statistics): Likewise.
  * locale/programs/charmap.c (parse_charmap): Likewise.
  * misc/mntent_r.c (__getmntent_r): Likewise.
  * posix/wordexp.c (parse_arith): Likewise.
    (parse_backtick): Likewise.
  * resolv/ns_ttl.c (ns_parse_ttl): Likewise.
  * sysdeps/x86/cpu-features.c (init_cpu_features): Likewise.
  * sysdeps/x86_64/dl-machine.h (elf_machine_rela): Likewise.
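  For reference, the comment form that -Wimplicit-fallthrough=3
  recognizes (my example, not code from the patch):

    void
    classify (int c, int *alpha, int *alnum)
    {
      switch (c)
        {
        case 'a':
          ++*alpha;
          /* Fall through.  */
        case '0':
          ++*alnum;
          break;
        default:
          break;
        }
    }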
* Fix handling of collating elements in fnmatch (bug 17396, bug 16976) (Andreas Schwab, 2019-02-04; 1 file, -1/+2)

  This fixes the same bug in fnmatch that was fixed by commit 7e2f0d2d77
  for regexp matching.  As a side effect it also removes the use of an
  unbound VLA.
* x86-64 memcmp: Use unsigned Jcc instructions on size [BZ #24155] (H.J. Lu, 2019-02-04; 3 files, -9/+93)

  Since the size argument is unsigned, we should use unsigned Jcc
  instructions, instead of signed, to check size.

  Tested on x86-64 and x32, with and without --disable-multi-arch.

  [BZ #24155] CVE-2019-7309
  * NEWS: Updated for CVE-2019-7309.
  * sysdeps/x86_64/memcmp.S: Use RDX_LP for size.  Clear the upper
    32 bits of RDX register for x32.  Use unsigned Jcc instructions,
    instead of signed.
  * sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memcmp-2.
  * sysdeps/x86_64/x32/tst-size_t-memcmp-2.c: New test.
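  A C sketch of the bug class (my illustration; the actual fix swaps
  signed conditional jumps for unsigned ones in assembly): interpreting
  an unsigned length through a signed comparison lets a huge value slip
  past a small bound.

    #include <stddef.h>
    #include <stdio.h>
    #include <sys/types.h>

    int
    main (void)
    {
      size_t n = (size_t) -1;   /* an enormous unsigned length */

      if ((ssize_t) n < 16)     /* signed view: -1 < 16, wrongly true */
        puts ("signed check takes the small-size path");
      if (n < 16)               /* unsigned view: correctly false */
        puts ("never printed");
      return 0;
    }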
* x86-64 strnlen/wcsnlen: Properly handle the length parameter [BZ #24097] (H.J. Lu, 2019-01-21; 5 files, -11/+106)

  On x32, the size_t parameter may be passed in the lower 32 bits of a
  64-bit register with non-zero upper 32 bits.  The string/memory
  functions written in assembly can only use the lower 32 bits of a
  64-bit register as the length, or must clear the upper 32 bits before
  using the full 64-bit register for the length.  This patch fixes
  strnlen/wcsnlen for x32.  Tested on x86-64 and x32; on x86-64, libc.so
  is the same with and without the fix.

  [BZ #24097] CVE-2019-6488
  * sysdeps/x86_64/multiarch/strlen-avx2.S: Use RSI_LP for length.
    Clear the upper 32 bits of RSI register.
  * sysdeps/x86_64/strlen.S: Use RSI_LP for length.
  * sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-strnlen
    and tst-size_t-wcsnlen.
  * sysdeps/x86_64/x32/tst-size_t-strnlen.c: New file.
  * sysdeps/x86_64/x32/tst-size_t-wcsnlen.c: Likewise.
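  A small C illustration of the hazard behind this whole series (my
  sketch; the real regression tests live in sysdeps/x86_64/x32/): on
  x32, size_t is 32 bits, so truncating a dirty 64-bit register is
  correct in C, but hand-written assembly that reads the full 64-bit
  register sees a huge length.

    #include <inttypes.h>
    #include <stdio.h>

    int
    main (void)
    {
      /* Pretend this is the 64-bit register carrying a size_t argument
         on x32: only the low 32 bits are meaningful.  */
      uint64_t reg = 0xdeadbeef00000010ULL;

      uint32_t len = (uint32_t) reg;      /* what the callee must use */
      printf ("correct length: %" PRIu32 "\n", len);
      printf ("full register:  %" PRIu64 " (garbage as a length)\n", reg);
      return 0;
    }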
* x86-64 strncpy: Properly handle the length parameter [BZ #24097] (H.J. Lu, 2019-01-21; 5 files, -8/+66)

  On x32, the size_t parameter may be passed in the lower 32 bits of a
  64-bit register with non-zero upper 32 bits.  The string/memory
  functions written in assembly can only use the lower 32 bits of a
  64-bit register as the length, or must clear the upper 32 bits before
  using the full 64-bit register for the length.  This patch fixes
  strncpy for x32.  Tested on x86-64 and x32; on x86-64, libc.so is the
  same with and without the fix.

  [BZ #24097] CVE-2019-6488
  * sysdeps/x86_64/multiarch/strcpy-avx2.S: Use RDX_LP for length.
  * sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S: Likewise.
  * sysdeps/x86_64/multiarch/strcpy-ssse3.S: Likewise.
  * sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-strncpy.
  * sysdeps/x86_64/x32/tst-size_t-strncpy.c: New file.
* x86-64 strncmp family: Properly handle the length parameter [BZ #24097] (H.J. Lu, 2019-01-21; 7 files, -11/+170)

  On x32, the size_t parameter may be passed in the lower 32 bits of a
  64-bit register with non-zero upper 32 bits.  The string/memory
  functions written in assembly can only use the lower 32 bits of a
  64-bit register as the length, or must clear the upper 32 bits before
  using the full 64-bit register for the length.  This patch fixes the
  strncmp family for x32.  Tested on x86-64 and x32; on x86-64, libc.so
  is the same with and without the fix.

  [BZ #24097] CVE-2019-6488
  * sysdeps/x86_64/multiarch/strcmp-avx2.S: Use RDX_LP for length.
  * sysdeps/x86_64/multiarch/strcmp-sse42.S: Likewise.
  * sysdeps/x86_64/strcmp.S: Likewise.
  * sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-strncasecmp,
    tst-size_t-strncmp and tst-size_t-wcsncmp.
  * sysdeps/x86_64/x32/tst-size_t-strncasecmp.c: New file.
  * sysdeps/x86_64/x32/tst-size_t-strncmp.c: Likewise.
  * sysdeps/x86_64/x32/tst-size_t-wcsncmp.c: Likewise.
* x86-64 memset/wmemset: Properly handle the length parameter [BZ #24097] (H.J. Lu, 2019-01-21; 5 files, -16/+121)

  On x32, the size_t parameter may be passed in the lower 32 bits of a
  64-bit register with non-zero upper 32 bits.  The string/memory
  functions written in assembly can only use the lower 32 bits of a
  64-bit register as the length, or must clear the upper 32 bits before
  using the full 64-bit register for the length.  This patch fixes
  memset/wmemset for x32.  Tested on x86-64 and x32; on x86-64, libc.so
  is the same with and without the fix.

  [BZ #24097] CVE-2019-6488
  * sysdeps/x86_64/multiarch/memset-avx512-no-vzeroupper.S: Use
    RDX_LP for length.  Clear the upper 32 bits of RDX register.
  * sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S: Likewise.
  * sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-wmemset.
  * sysdeps/x86_64/x32/tst-size_t-memset.c: New file.
  * sysdeps/x86_64/x32/tst-size_t-wmemset.c: Likewise.
* x86-64 memrchr: Properly handle the length parameter [BZ #24097] (H.J. Lu, 2019-01-21; 4 files, -5/+63)

  On x32, the size_t parameter may be passed in the lower 32 bits of a
  64-bit register with non-zero upper 32 bits.  The string/memory
  functions written in assembly can only use the lower 32 bits of a
  64-bit register as the length, or must clear the upper 32 bits before
  using the full 64-bit register for the length.  This patch fixes
  memrchr for x32.  Tested on x86-64 and x32; on x86-64, libc.so is the
  same with and without the fix.

  [BZ #24097] CVE-2019-6488
  * sysdeps/x86_64/memrchr.S: Use RDX_LP for length.
  * sysdeps/x86_64/multiarch/memrchr-avx2.S: Likewise.
  * sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memrchr.
  * sysdeps/x86_64/x32/tst-size_t-memrchr.c: New file.
* x86-64 memcpy: Properly handle the length parameter [BZ #24097] (H.J. Lu, 2019-01-21; 6 files, -42/+122)

  On x32, the size_t parameter may be passed in the lower 32 bits of a
  64-bit register with non-zero upper 32 bits.  The string/memory
  functions written in assembly can only use the lower 32 bits of a
  64-bit register as the length, or must clear the upper 32 bits before
  using the full 64-bit register for the length.  This patch fixes
  memcpy for x32.  Tested on x86-64 and x32; on x86-64, libc.so is the
  same with and without the fix.

  [BZ #24097] CVE-2019-6488
  * sysdeps/x86_64/multiarch/memcpy-ssse3-back.S: Use RDX_LP for
    length.  Clear the upper 32 bits of RDX register.
  * sysdeps/x86_64/multiarch/memcpy-ssse3.S: Likewise.
  * sysdeps/x86_64/multiarch/memmove-avx512-no-vzeroupper.S: Likewise.
  * sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S: Likewise.
  * sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memcpy.
    tst-size_t-wmemchr.
  * sysdeps/x86_64/x32/tst-size_t-memcpy.c: New file.
* x86-64 memcmp/wmemcmp: Properly handle the length parameter [BZ #24097] (H.J. Lu, 2019-01-21; 6 files, -9/+114)

  On x32, the size_t parameter may be passed in the lower 32 bits of a
  64-bit register with non-zero upper 32 bits.  The string/memory
  functions written in assembly can only use the lower 32 bits of a
  64-bit register as the length, or must clear the upper 32 bits before
  using the full 64-bit register for the length.  This patch fixes
  memcmp/wmemcmp for x32.  Tested on x86-64 and x32; on x86-64, libc.so
  is the same with and without the fix.

  [BZ #24097] CVE-2019-6488
  * sysdeps/x86_64/multiarch/memcmp-avx2-movbe.S: Use RDX_LP for
    length.  Clear the upper 32 bits of RDX register.
  * sysdeps/x86_64/multiarch/memcmp-sse4.S: Likewise.
  * sysdeps/x86_64/multiarch/memcmp-ssse3.S: Likewise.
  * sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memcmp and
    tst-size_t-wmemcmp.
  * sysdeps/x86_64/x32/tst-size_t-memcmp.c: New file.
  * sysdeps/x86_64/x32/tst-size_t-wmemcmp.c: Likewise.
* x86-64 memchr/wmemchr: Properly handle the length parameter [BZ #24097] (H.J. Lu, 2019-01-21; 6 files, -5/+148)

  On x32, the size_t parameter may be passed in the lower 32 bits of a
  64-bit register with non-zero upper 32 bits.  The string/memory
  functions written in assembly can only use the lower 32 bits of a
  64-bit register as the length, or must clear the upper 32 bits before
  using the full 64-bit register for the length.  This patch fixes
  memchr/wmemchr for x32.  Tested on x86-64 and x32; on x86-64, libc.so
  is the same with and without the fix.

  [BZ #24097] CVE-2019-6488
  * sysdeps/x86_64/memchr.S: Use RDX_LP for length.  Clear the
    upper 32 bits of RDX register.
  * sysdeps/x86_64/multiarch/memchr-avx2.S: Likewise.
  * sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memchr and
    tst-size_t-wmemchr.
  * sysdeps/x86_64/x32/test-size_t.h: New file.
  * sysdeps/x86_64/x32/tst-size_t-memchr.c: Likewise.
  * sysdeps/x86_64/x32/tst-size_t-wmemchr.c: Likewise.
* x86-64: Optimize strcat/strncat, strcpy/strncpy and stpcpy/stpncpy with AVX2 (Leonardo Sandoval, 2019-01-14; 15 files, -6/+1337)

  Optimize x86-64 strcat/strncat, strcpy/strncpy and stpcpy/stpncpy with
  AVX2.  The implementations use vector comparison as much as possible.
  In general, the larger the source string, the greater the performance
  gain observed, reaching speedups of 1.6x compared to the SSE2
  unaligned routines.  Select the AVX2 strcat/strncat, strcpy/strncpy
  and stpcpy/stpncpy implementations on AVX2 machines where vzeroupper
  is preferred and AVX unaligned load is fast.

  * sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
    strcat-avx2, strncat-avx2, strcpy-avx2, strncpy-avx2, stpcpy-avx2
    and stpncpy-avx2.
  * sysdeps/x86_64/multiarch/ifunc-impl-list.c
    (__libc_ifunc_impl_list): Add tests for __strcat_avx2,
    __strncat_avx2, __strcpy_avx2, __strncpy_avx2, __stpcpy_avx2 and
    __stpncpy_avx2.
  * sysdeps/x86_64/multiarch/{ifunc-unaligned-ssse3.h =>
    ifunc-strcpy.h}: rename header for a more generic name.
  * sysdeps/x86_64/multiarch/ifunc-strcpy.h (IFUNC_SELECTOR): Return
    OPTIMIZE (avx2) on AVX2 machines if AVX unaligned load is fast and
    vzeroupper is preferred.
  * sysdeps/x86_64/multiarch/stpcpy-avx2.S: New file.
  * sysdeps/x86_64/multiarch/stpncpy-avx2.S: Likewise.
  * sysdeps/x86_64/multiarch/strcat-avx2.S: Likewise.
  * sysdeps/x86_64/multiarch/strcpy-avx2.S: Likewise.
  * sysdeps/x86_64/multiarch/strncat-avx2.S: Likewise.
  * sysdeps/x86_64/multiarch/strncpy-avx2.S: Likewise.
* x86_64: Remove wrong THREAD_ATOMIC_* macros (Adhemerval Zanella, 2019-01-03; 1 file, -37/+0)

  The x86 port defines optimized THREAD_ATOMIC_* macros that always
  reference the current thread instead of the one indicated by the input
  'descr' argument.  They work as long as the input is the self thread
  pointer, but they generate wrong code if the semantic is to atomically
  set a bit in another thread.

  This is not an issue for current glibc usage, but the new cancellation
  code expects some synchronization code to atomically set bits from
  different threads.

  The generic code generates an additional load to reference the TLS
  segment.  For instance, the code

    THREAD_ATOMIC_BIT_SET (THREAD_SELF, cancelhandling, CANCELED_BIT);

  compiles to

    lock;orl $4, %fs:776

  whereas with this patch it now compiles to

    mov %fs:16,%rax
    lock;orl $4, 776(%rax)

  If some usage indeed proves to be a hotspot, we can add an extra macro
  with a more descriptive name (THREAD_ATOMIC_BIT_SET_SELF, for
  instance) that x86_64 might optimize.

  Checked on x86_64-linux-gnu.

  * sysdeps/x86_64/nptl/tls.h (THREAD_ATOMIC_CMPXCHG_VAL,
    THREAD_ATOMIC_AND, THREAD_ATOMIC_BIT_SET): Remove macros.
* Update copyright dates with scripts/update-copyrights. (Joseph Myers, 2019-01-01; 504 files, -504/+504)

  * All files with FSF copyright notices: Update copyright dates
    using scripts/update-copyrights.
  * locale/programs/charmap-kw.h: Regenerated.
  * locale/programs/locfile-kw.h: Likewise.
* x86-64: Remove s_sincosf-sse2.S (H.J. Lu, 2018-12-26; 3 files, -568/+2)

  The current s_sincosf.c is faster than s_sincosf-sse2.S.  On Broadwell
  with FMA disabled, bench-sincosf shows:

            Before      After     Improvement
    max    154.032     114.517        34%
    min      6.25        5.609        11%
    mean    14.8728     12.8589       15%

  * sysdeps/x86_64/fpu/s_sincosf.S: Removed.
  * sysdeps/x86_64/fpu/multiarch/s_sincosf-sse2.S: Likewise.
  * sysdeps/x86_64/fpu/multiarch/s_sincosf-sse2.c: New file.
* Regenerate sysdeps/x86_64/fpu/libm-test-ulps (H.J. Lu, 2018-12-26; 1 file, -0/+6)

  * sysdeps/x86_64/fpu/libm-test-ulps: Regenerated.