path: root/sysdeps
Commit message (author, date, files changed, lines -/+)
* i386: Avoid relying on linker optimization to avoid relocation (Adhemerval Zanella Netto, 2022-11-21, 1 file, -4/+9)
  lld does not implement all of the linker optimizations that binutils uses to avoid the GOT relocation (bfd/elf32-i386.c:elf_i386_convert_load_reloc). The current 'movl main@GOT(%ebx), %eax' therefore creates a GOT relocation when building with lld, which prevents static-pie startup from being able to start the provided main function. The change uses a __wrap_main local symbol, which in turn calls main (similar to what aarch64 and s390x use). Checked on i686-linux-gnu with binutils and lld.
  Reviewed-by: Fangrui Song <maskray@google.com>
* elf: Fix rtld-audit trampoline for aarch64 (Vladislav Khmelevsky, 2022-11-21, 1 file, -3/+1)
  This patch fixes two problems with audit:
  1. The DL_OFFSET_RV_VPCS offset was mixed up with DL_OFFSET_RG_VPCS, resulting in the x2 register value being nulled in the RG structure.
  2. We need to preserve the x8 register before the function call, but we do not have to save its new value and restore it before returning. In any case, the final restore was using the OFFSET_RV value instead of OFFSET_RG, which is wrong (although it does not affect anything).
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Define in_int32_t_range to check if the 64 bit time_t syscall should be used (YunQiang Su, 2022-11-17, 16 files, -20/+20)
  Currently glibc uses in_time_t_range to detect time_t overflow and, if it occurs, falls back to the 64 bit syscall version. The function name is confusing because internally time_t might be either 32 bits or 64 bits (depending on __TIMESIZE).
  This patch refactors in_time_t_range by replacing it with in_int32_t_range for the case of checking whether the 64 bit time_t syscall should be used; in_time_t_range remains in use to detect overflow of the syscall return value.
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
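  The check itself is conceptually a round trip through int32_t. A minimal sketch, assuming a simplified signature (the real glibc helper lives in the Linux sysdep headers and uses the internal __time64_t type):

      #include <stdbool.h>
      #include <stdint.h>

      /* Sketch: a 64-bit timestamp may use the legacy 32-bit time_t
         syscall only if it survives truncation to int32_t.  */
      static inline bool
      in_int32_t_range (int64_t t)
      {
        int32_t s = t;        /* truncate to 32 bits */
        return s == t;        /* true iff no information was lost */
      }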
* elf/tst-tlsopt-powerpc fails when compiled with -mcpu=power10 (BZ# 29776) (Alan Modra, 2022-11-14, 1 file, -1/+5)
  Support pcrel addressing of the TLS GOT entry. Also tweak the non-pcrel asm constraint to better reflect how the register is used.
* LoongArch: Hard Float Support for fmaximum_mag_num{f/ }, fminimum_mag_num{f/ }. (Xiaolin Tang, 2022-11-14, 4 files, -0/+192)
  Use the hardware floating-point instructions f{maxa/mina}.{s/d} and fclass.{s/d} to implement fmaximum_mag_num{f/ } and fminimum_mag_num{f/ }.
  * sysdeps/loongarch/fpu/s_fmaximum_mag_num.c: New file.
  * sysdeps/loongarch/fpu/s_fmaximum_mag_numf.c: Likewise.
  * sysdeps/loongarch/fpu/s_fminimum_mag_num.c: Likewise.
  * sysdeps/loongarch/fpu/s_fminimum_mag_numf.c: Likewise.
* LoongArch: Hard Float Support for fmaximum_mag{f/ }, fminimum_mag{f/ }. (Xiaolin Tang, 2022-11-14, 4 files, -0/+160)
  Use the hardware floating-point instructions f{maxa/mina}.{s/d} and fclass.{s/d} to implement fmaximum_mag{f/ } and fminimum_mag{f/ }.
  * sysdeps/loongarch/fpu/s_fmaximum_mag.c: New file.
  * sysdeps/loongarch/fpu/s_fmaximum_magf.c: Likewise.
  * sysdeps/loongarch/fpu/s_fminimum_mag.c: Likewise.
  * sysdeps/loongarch/fpu/s_fminimum_magf.c: Likewise.
* LoongArch: Hard Float Support for fmaxmag{f/ }, fminmag{f/ }. (Xiaolin Tang, 2022-11-14, 4 files, -0/+116)
  Use the hardware floating-point instructions f{maxa/mina}.{s/d} to implement fmaxmag{f/ } and fminmag{f/ }.
  * sysdeps/loongarch/fpu/s_fmaxmag.c: New file.
  * sysdeps/loongarch/fpu/s_fmaxmagf.c: Likewise.
  * sysdeps/loongarch/fpu/s_fminmag.c: Likewise.
  * sysdeps/loongarch/fpu/s_fminmagf.c: Likewise.
* LoongArch: Hard Float Support for fmaximum_num{f/ }, fminimum_num{f/ }. (Xiaolin Tang, 2022-11-14, 4 files, -0/+193)
  Use the hardware floating-point instructions f{max/min}.{s/d} and fclass.{s/d} to implement fmaximum_num{f/ } and fminimum_num{f/ }.
  * sysdeps/loongarch/fpu/s_fmaximum_num.c: New file.
  * sysdeps/loongarch/fpu/s_fmaximum_numf.c: Likewise.
  * sysdeps/loongarch/fpu/s_fminimum_num.c: Likewise.
  * sysdeps/loongarch/fpu/s_fminimum_numf.c: Likewise.
* LoongArch: Hard Float Support for fmaximum{f/ }, fminimum{f/ }. (Xiaolin Tang, 2022-11-14, 4 files, -0/+160)
  Use the hardware floating-point instructions f{max/min}.{s/d} and fclass.{s/d} to implement fmaximum{f/ } and fminimum{f/ }.
  * sysdeps/loongarch/fpu/s_fmaximum.c: New file.
  * sysdeps/loongarch/fpu/s_fmaximumf.c: Likewise.
  * sysdeps/loongarch/fpu/s_fminimum.c: Likewise.
  * sysdeps/loongarch/fpu/s_fminimumf.c: Likewise.
* LoongArch: Hard Float Support for floating-point classification functions. (Xiaolin Tang, 2022-11-14, 11 files, -0/+333)
  Use the hardware floating-point instruction fclass.{s/d} to implement the classification functions, i.e. finite{f/ }, fpclassify{f/ }, isnan{f/ }, isinf{f/ } and issignaling{f/ }.
  * sysdeps/loongarch/fpu/s_finite.c: New file.
  * sysdeps/loongarch/fpu/s_finitef.c: Likewise.
  * sysdeps/loongarch/fpu/s_fpclassify.c: Likewise.
  * sysdeps/loongarch/fpu/s_fpclassifyf.c: Likewise.
  * sysdeps/loongarch/fpu/s_isinf.c: Likewise.
  * sysdeps/loongarch/fpu/s_isinff.c: Likewise.
  * sysdeps/loongarch/fpu/s_isnan.c: Likewise.
  * sysdeps/loongarch/fpu/s_isnanf.c: Likewise.
  * sysdeps/loongarch/fpu/s_issignaling.c: Likewise.
  * sysdeps/loongarch/fpu/s_issignalingf.c: Likewise.
  * sysdeps/loongarch/fpu_control.h: Add _FCLASS_* macros.
* LoongArch: Use __builtin_{fma, fmaf} to implement function {fma, fmaf}. (Xiaolin Tang, 2022-11-14, 1 file, -0/+4)
  Use __builtin_{fma, fmaf} to implement the functions {fma, fmaf} instead of the generic implementation.
  * sysdeps/loongarch/fpu/math-use-builtins-fma.h: New file.
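  The new header only flips configuration macros that the generic fma code consults. A plausible sketch of its contents, assuming the usual USE_*_BUILTIN convention of glibc's math-use-builtins headers (not verified against the actual file):

      /* math-use-builtins-fma.h (sketch, assumed contents).  When a macro
         is 1, the generic implementation expands to the compiler builtin
         instead of the software fallback.  */
      #define USE_FMA_BUILTIN 1
      #define USE_FMAF_BUILTIN 1
      #define USE_FMAL_BUILTIN 0
      #define USE_FMAF128_BUILTIN 0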
* Linux: Support __IPC_64 in sysvipc *ctl command arguments (bug 29771) (Florian Weimer, 2022-11-10, 4 files, -26/+63)
  Old applications pass __IPC_64 as part of the command argument because old glibc did not check for unknown commands and passed the arguments through directly to the kernel, without adding __IPC_64. Applications need to continue doing that for old glibc compatibility, so this commit enables this approach in current glibc.
  For msgctl and shmctl, if no translation is required, make direct system calls, as we did before the time64 changes. If translation is required, mask __IPC_64 from the command argument. For semctl, the union-in-vararg argument handling means that translation is needed on all architectures.
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
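  The compatibility handling amounts to tolerating and stripping the legacy flag before dispatching. A minimal sketch, with the flag value assumed from the common Linux UAPI definition (names simplified, not the exact glibc code):

      /* IPC_64 is 0x100 in the Linux UAPI headers on most architectures.  */
      #define __IPC_64 0x100

      /* Accept commands from old binaries that OR in __IPC_64, then
         continue with the plain command value.  */
      static inline int
      strip_ipc64 (int cmd)
      {
        return cmd & ~__IPC_64;
      }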
* riscv: Get level 3 cache information (Zong Li, 2022-11-09, 1 file, -0/+6)
  The RISC-V architecture extends the cache information in the AUX vector with the level 3 cache in Linux v6.1-rc1. This patch lets sysconf report the level 3 cache information.
  Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>
  Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
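  For reference, this is how the values surface to user code; a small sketch using the existing AT_L3_* auxiliary-vector constants and sysconf names (the in-glibc plumbing is more involved than this):

      #include <stdio.h>
      #include <sys/auxv.h>
      #include <unistd.h>

      int
      main (void)
      {
        /* Raw value the kernel puts in the auxiliary vector.  */
        unsigned long auxv_size = getauxval (AT_L3_CACHESIZE);
        /* Value reported through the sysconf interface this patch wires up.  */
        long sc_size = sysconf (_SC_LEVEL3_CACHE_SIZE);
        printf ("L3 cache: %lu bytes (auxv), %ld bytes (sysconf)\n",
                auxv_size, sc_size);
        return 0;
      }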
* x86: Add avx2 optimized functions for the wchar_t strcpy family (Noah Goldstein, 2022-11-08, 27 files, -18/+115)
  Implemented:
      wcscat-avx2  (+744 bytes)
      wcscpy-avx2  (+539 bytes)
      wcpcpy-avx2  (+577 bytes)
      wcsncpy-avx2 (+1108 bytes)
      wcpncpy-avx2 (+1214 bytes)
      wcsncat-avx2 (+1085 bytes)
  Performance changes: times are from N = 10 runs of the benchmark suite and are reported as the geometric mean of all ratios of New Implementation / Best Old Implementation. The best old implementation was determined as the highest ISA implementation.
      wcscat-avx2  -> 0.975
      wcscpy-avx2  -> 0.591
      wcpcpy-avx2  -> 0.698
      wcsncpy-avx2 -> 0.730
      wcpncpy-avx2 -> 0.711
      wcsncat-avx2 -> 0.954
  Code size changes: this change increases the size of libc.so by ~5.5 KB. For reference, the patch optimizing the normal strcpy family functions decreases libc.so by ~5.2 KB.
  Full check passes on x86-64 and the build succeeds for all ISA levels with and without multiarch.
* x86: Add evex optimized functions for the wchar_t strcpy family (Noah Goldstein, 2022-11-08, 33 files, -7/+858)
  Implemented:
      wcscat-evex  (+905 bytes)
      wcscpy-evex  (+674 bytes)
      wcpcpy-evex  (+709 bytes)
      wcsncpy-evex (+1358 bytes)
      wcpncpy-evex (+1467 bytes)
      wcsncat-evex (+1213 bytes)
  Performance changes: times are from N = 10 runs of the benchmark suite and are reported as the geometric mean of all ratios of New Implementation / Best Old Implementation. The best old implementation was determined as the highest ISA implementation.
      wcscat-evex  -> 0.991
      wcscpy-evex  -> 0.587
      wcpcpy-evex  -> 0.695
      wcsncpy-evex -> 0.719
      wcpncpy-evex -> 0.694
      wcsncat-evex -> 0.979
  Code size changes: this change increases the size of libc.so by ~6.3 KB. For reference, the patch optimizing the normal strcpy family functions decreases libc.so by ~5.7 KB.
  Full check passes on x86-64 and the build succeeds for all ISA levels with and without multiarch.
* x86: Optimize and shrink st{r|p}{n}{cat|cpy}-avx2 functions (Noah Goldstein, 2022-11-08, 13 files, -1234/+1594)
  Optimizations are:
  1. Use more overlapping stores to avoid branches (illustrated in the sketch after this entry).
  2. Reduce how unrolled the aligning copies are (this is more of a code-size saving; it is a negative for some sizes in terms of performance).
  3. For st{r|p}n{cat|cpy}, re-order the branches to minimize the number that are taken.
  Performance changes: times are from N = 10 runs of the benchmark suite and are reported as the geometric mean of all ratios of New Implementation / Old Implementation.
      strcat-avx2  -> 0.998
      strcpy-avx2  -> 0.937
      stpcpy-avx2  -> 0.971
      strncpy-avx2 -> 0.793
      stpncpy-avx2 -> 0.775
      strncat-avx2 -> 0.962
  Code size changes (function -> bytes new / bytes old -> ratio):
      strcat-avx2  ->  685 / 1639 -> 0.418
      strcpy-avx2  ->  560 /  903 -> 0.620
      stpcpy-avx2  ->  592 /  939 -> 0.630
      strncpy-avx2 -> 1176 / 2390 -> 0.492
      stpncpy-avx2 -> 1268 / 2438 -> 0.520
      strncat-avx2 -> 1042 / 2563 -> 0.407
  Notes:
  1. Because of the significant difference between the implementations, they are split into three files:
      strcpy-avx2.S  -> strcpy, stpcpy, strcat
      strncpy-avx2.S -> strncpy
      strncat-avx2.S -> strncat
     I couldn't find a way to merge them without making the ifdefs incredibly difficult to follow.
  Full check passes on x86-64 and the build succeeds for all ISA levels with and without multiarch.
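  A hedged illustration of optimization (1), in C rather than the actual assembly: a copy of a known small size range is done with two fixed-width stores that may overlap, so no further size branches are needed.

      #include <string.h>

      /* Copy n bytes where 16 <= n <= 32 and src/dst do not overlap.
         The two 16-byte copies compile to single vector load/store
         pairs; they overlap each other when n < 32, which is harmless
         because the overlapping bytes are identical.  */
      static void
      copy_16_to_32 (char *dst, const char *src, size_t n)
      {
        memcpy (dst, src, 16);
        memcpy (dst + n - 16, src + n - 16, 16);
      }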
* x86: Optimize and shrink st{r|p}{n}{cat|cpy}-evex functions (Noah Goldstein, 2022-11-08, 7 files, -1173/+2115)
  Optimizations are:
  1. Use more overlapping stores to avoid branches.
  2. Reduce how unrolled the aligning copies are (this is more of a code-size saving; it is a negative for some sizes in terms of performance).
  3. Improve the loop a bit (similar to what we do in strlen with 2x vpminu + kortest instead of 3x vpminu + kmov + test).
  4. For st{r|p}n{cat|cpy}, re-order the branches to minimize the number that are taken.
  Performance changes: times are from N = 10 runs of the benchmark suite and are reported as the geometric mean of all ratios of New Implementation / Old Implementation.
      stpcpy-evex  -> 0.922
      strcat-evex  -> 0.985
      strcpy-evex  -> 0.880
      strncpy-evex -> 0.831
      stpncpy-evex -> 0.780
      strncat-evex -> 0.958
  Code size changes (function -> bytes new / bytes old -> ratio):
      strcat-evex  ->  819 / 1874 -> 0.437
      strcpy-evex  ->  700 / 1074 -> 0.652
      stpcpy-evex  ->  735 / 1094 -> 0.672
      strncpy-evex -> 1397 / 2611 -> 0.535
      stpncpy-evex -> 1489 / 2691 -> 0.553
      strncat-evex -> 1184 / 2832 -> 0.418
  Notes:
  1. Because of the significant difference between the implementations, they are split into three files:
      strcpy-evex.S  -> strcpy, stpcpy, strcat
      strncpy-evex.S -> strncpy
      strncat-evex.S -> strncat
     I couldn't find a way to merge them without making the ifdefs incredibly difficult to follow.
  2. All implementations can be made evex512 by including "x86-evex512-vecs.h" at the top.
  3. All implementations have an optional define `USE_EVEX_MASKED_STORE`. Setting it to one uses evex masked stores for handling short strings, which saves code size and branches. It is disabled for all implementations at the moment, as there are some serious drawbacks to masked stores in certain cases, but that may be fixed on future architectures.
  Full check passes on x86-64 and the build succeeds for all ISA levels with and without multiarch.
* x86: Use VMM API in memcmpeq-evex.S and minor changes (Noah Goldstein, 2022-11-08, 1 file, -100/+155)
  Changes to generated code are:
  1. In a few places use `vpcmpeqb` instead of `vpcmpneq` to save a byte of code size.
  2. Add a branch for length <= (VEC_SIZE * 6) as opposed to doing the entire block of [VEC_SIZE * 4 + 1, VEC_SIZE * 8] in a single basic-block (the space to add the extra branch without changing code size is bought with the above change).
  Change (2) has roughly a 20-25% speedup for sizes in [VEC_SIZE * 4 + 1, VEC_SIZE * 6] and negligible to no-cost for [VEC_SIZE * 6 + 1, VEC_SIZE * 8]. From N = 10 runs on Tigerlake:
      align1, align2, length, result              , New Time, Cur Time, New Time / Old Time
      0     , 0     , 129   , 0                   , 5.404   , 6.887   , 0.785
      0     , 0     , 129   , 1                   , 5.308   , 6.826   , 0.778
      0     , 0     , 129   , 18446744073709551615, 5.359   , 6.823   , 0.785
      0     , 0     , 161   , 0                   , 5.284   , 6.827   , 0.774
      0     , 0     , 161   , 1                   , 5.317   , 6.745   , 0.788
      0     , 0     , 161   , 18446744073709551615, 5.406   , 6.778   , 0.798
      0     , 0     , 193   , 0                   , 6.804   , 6.802   , 1.000
      0     , 0     , 193   , 1                   , 6.950   , 6.754   , 1.029
      0     , 0     , 193   , 18446744073709551615, 6.792   , 6.719   , 1.011
      0     , 0     , 225   , 0                   , 6.625   , 6.699   , 0.989
      0     , 0     , 225   , 1                   , 6.776   , 6.735   , 1.003
      0     , 0     , 225   , 18446744073709551615, 6.758   , 6.738   , 0.992
      0     , 0     , 256   , 0                   , 5.402   , 5.462   , 0.989
      0     , 0     , 256   , 1                   , 5.364   , 5.483   , 0.978
      0     , 0     , 256   , 18446744073709551615, 5.341   , 5.539   , 0.964
  Rewriting with the VMM API allows memcmpeq-evex to be used with evex512 by including "x86-evex512-vecs.h" at the top.
  Complete check passes on x86-64.
* x86: Use VMM API in memcmp-evex-movbe.S and minor changes (Noah Goldstein, 2022-11-08, 1 file, -133/+175)
  The only change to the existing generated code is `tzcnt` -> `bsf` to save a byte of code size here and there.
  Rewriting with the VMM API allows memcmp-evex-movbe to be used with evex512 by including "x86-evex512-vecs.h" at the top.
  Complete check passes on x86-64.
* Linux: Add ppoll fortify symbol for 64 bit time_t (BZ# 29746) (Adhemerval Zanella, 2022-11-08, 22 files, -2/+67)
  Similar to ppoll, the poll.h header needs to redirect the poll call to a proper fortified ppoll with 64 bit time_t support. The implementation is straightforward: just add a check similar to __poll_chk and call the 64 bit time_t ppoll version. The debug fortify tests are also extended to cover 64 bit time_t for the affected ABIs.
  Unfortunately it requires an additional symbol, which makes backporting tricky. One possibility is to add a static inline version, if the compiler supports it, and call abort instead of __chk_fail, so the fortified version would call __poll64 in the end. Another possibility is to just remove the fortify support for _TIME_BITS=64.
  Checked on i686-linux-gnu.
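  For orientation, the general shape of such a fortified entry point looks roughly like the following; the function name and the final dispatch are simplified assumptions, not the exact symbols this commit adds.

      #define _GNU_SOURCE
      #include <poll.h>
      #include <signal.h>
      #include <stdlib.h>
      #include <time.h>

      /* fdslen is what the compiler derives from
         __builtin_object_size (fds, ...) at the fortified call site.  */
      int
      ppoll_chk_sketch (struct pollfd *fds, nfds_t nfds,
                        const struct timespec *timeout,
                        const sigset_t *sigmask, size_t fdslen)
      {
        if (fdslen / sizeof (*fds) < nfds)
          abort ();   /* glibc would call __chk_fail here */
        return ppoll (fds, nfds, timeout, sigmask);
      }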
* hurd: Add sigtimedwait and sigwaitinfo support (Samuel Thibault, 2022-11-07, 3 files, -112/+207)
  This simply needed to add the timeout parameter to mach_msg, and copy information from struct hurd_signal_detail.
* x86_64: Implement evex512 version of strrchr and wcsrchr (Sunil K Pandey, 2022-11-03, 5 files, -0/+297)
  Changes from v1:
  - Use the vec API for registers.
  - Replace VPCMP with VPCMPEQ.
  - Restructure and remove one unconditional jump.
  - Change the page-cross logic to use sall.
  This patch implements the following evex512 versions of string functions; the evex512 version takes up to 30% fewer cycles than evex, depending on length and alignment.
  - strrchr function using 512 bit vectors.
  - wcsrchr function using 512 bit vectors.
  Code size data:
      strrchr-evex.o     879 bytes
      strrchr-evex512.o  601 bytes (-32%)
      wcsrchr-evex.o     882 bytes
      wcsrchr-evex512.o  572 bytes (-35%)
  Placeholder functions, not used by any processor at the moment.
  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* elf: Introduce <dl-call_tls_init_tp.h> and call_tls_init_tp (bug 29249) (Florian Weimer, 2022-11-03, 23 files, -31/+60)
  This makes it more likely that the compiler can compute the strlen argument in _startup_fatal at compile time, which is required to avoid a dependency on strlen this early during process startup.
  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
* elf: Rework exception handling in the dynamic loader [BZ #25486] (Florian Weimer, 2022-11-03, 27 files, -133/+7)
  The old exception handling implementation used function interposition to replace the dynamic loader implementation (no TLS support) with the libc implementation (TLS support). This results in problems if the link order between the dynamic loader and libc is reversed (bug 25486).
  The new implementation moves the entire implementation of the exception handling functions back into the dynamic loader, using THREAD_GETMEM and THREAD_SETMEM for thread-local data support. This depends on Hurd support for these macros, added in commit b65a82e4e757c1e6cb7073916 ("hurd: Add THREAD_GET/SETMEM/_NC").
  One small obstacle is that the exception handling facilities are used before the TCB has been set up, so a check is needed whether the TCB is available. If not, a regular global variable is used to store the exception handling information.
  Also rename dl-error.c to dl-catch.c, to avoid confusion with the dlerror function.
  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* linux: Drop useless include from fstatat.c (Aurelien Jarno, 2022-11-02, 1 file, -2/+0)
  It is a left-over from previous refactorings.
  Reviewed by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* linux: Fix fstatat on MIPSn64 (BZ #29730) (Aurelien Jarno, 2022-11-02, 1 file, -0/+51)
  Commit 6e8a0aac2f883 ("time: Fix overflow itimer tests on 32-bit systems") changed in_time_t_range to assume a 32-bit time_t. This broke fstatat on MIPSn64, which was using it with a 64-bit time_t due to the difference between stat and stat64. This commit fixes that by adding a MIPSn64-specific version, which bypasses the EOVERFLOW tests.
  Resolves: BZ #29730
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* configure: Use -Wno-ignored-attributes if compiler warns about multiple aliases (Adhemerval Zanella, 2022-11-01, 3 files, -0/+10)
  clang emits a warning when a double alias redirection is used, to warn that the original symbol will be used even when the weak definition is overridden. However, this is a common pattern for weak_alias, where multiple aliases are set to the same symbol.
  Reviewed-by: Fangrui Song <maskray@google.com>
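  The pattern in question looks roughly like this (weak_alias shown in a simplified expansion; the function names are made-up examples):

      /* Simplified expansion of glibc's weak_alias macro.  */
      #define weak_alias(name, aliasname) \
        extern __typeof (name) aliasname __attribute__ ((weak, alias (#name)));

      int
      __example_impl (int x)
      {
        return x + 1;
      }
      /* Several weak aliases of the same implementation; clang flags this
         style of alias redirection, hence -Wno-ignored-attributes.  */
      weak_alias (__example_impl, example)
      weak_alias (__example_impl, example_alt)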
* Disable use of -fsignaling-nans if compiler does not support it (Adhemerval Zanella, 2022-11-01, 4 files, -18/+18)
  Reviewed-by: Fangrui Song <maskray@google.com>
* Apply asm redirection in not-cancel before first use (Adhemerval Zanella, 2022-11-01, 1 file, -15/+15)
  This avoids a build failure with clang, which cannot redeclare a function attribute after the function's first use.
  Reviewed-by: Fangrui Song <maskray@google.com>
* Fix build with GCC 13 _FloatN, _FloatNx built-in functions (Joseph Myers, 2022-10-31, 7 files, -5/+513)
  GCC 13 has added more _FloatN and _FloatNx versions of existing <math.h> and <complex.h> built-in functions, for use in libstdc++-v3. This breaks the glibc build because of how those functions are defined as aliases to functions with the same ABI but different types. Add appropriate -fno-builtin-* options for compiling the relevant files, as already done for the case of long double functions aliasing double ones, and based on the list of files used there.
  I fixed some mistakes in that list of double files that I noticed while implementing this fix, but there may well be more such (harmless) cases, in this list or the new one (files that don't actually exist, or don't define the named functions as aliases and so don't need the options). I did try to exclude from the new uses of -fno-builtin-* options the cases where glibc doesn't define certain functions for _FloatN or _FloatNx types at all.
  As with the options for double files (see the commit message for commit 49348beafe9ba150c9bd48595b3f372299bddbb0, "Fix build with GCC 10 when long double = double."), it is deliberate that the options are used even if GCC currently doesn't have a built-in version of a given function, providing some level of future-proofing against more such built-in functions being added in future.
  Tested with build-many-glibcs.py for aarch64-linux-gnu powerpc-linux-gnu powerpc64le-linux-gnu x86_64-linux-gnu (compilers and glibcs builds) with GCC mainline.
* x86-64: Improve evex512 version of strlen functions (Sunil K Pandey, 2022-10-30, 1 file, -34/+57)
  This patch improves the following functionality:
  - Replace VPCMP with VPCMPEQ.
  - Replace the page cross check logic with sall.
  - Remove an extra lea from align_more.
  - Remove an unconditional loop jump.
  - Use bsf to check the max length in the first vector.
  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* stdlib/strfrom: Add copysign to fix NAN issue on riscv (BZ #29501) (Letu Ren, 2022-10-28, 2 files, -0/+68)
  According to the specification of ISO/IEC TS 18661-1:2014, the strfromd, strfromf, and strfroml functions are equivalent to snprintf(s, n, format, fp) (7.21.6.5), except that the format string contains only the character %, an optional precision that does not contain an asterisk *, and one of the conversion specifiers a, A, e, E, f, F, g, or G, which applies to the type (double, float, or long double) indicated by the function suffix (rather than by a length modifier). Use of these functions with any other format string results in undefined behavior.
  strfromf converts the argument of type float to double first. According to the latest version of IEEE 754, published in 2019: conversion of a quiet NaN from a narrower format to a wider format in the same radix, and then back to the same narrower format, should not change the quiet NaN payload in any way except to make it canonical. When either an input or result is a NaN, the standard does not interpret the sign of a NaN. However, operations on bit strings (copy, negate, abs, copySign) specify the sign bit of a NaN result, sometimes based upon the sign bit of a NaN operand. The logical predicates totalOrder and isSignMinus are also affected by the sign bit of a NaN operand. For all other operations, the standard does not specify the sign bit of a NaN result, even when there is only one input NaN, or when the NaN is produced from an invalid operation. So converting NAN or -NAN of type float to double does not need to keep the sign bit, and this test case isn't mandatory.
  The problem is that, according to the RISC-V ISA manual (chapter 11.3 of riscv-isa-20191213): except when otherwise stated, if the result of a floating-point operation is NaN, it is the canonical NaN. The canonical NaN has a positive sign and all significand bits clear except the MSB, a.k.a. the quiet bit. For single-precision floating-point, this corresponds to the pattern 0x7fc00000. This means that converting -NAN from float to double will not keep the sign bit.
  Since glibc ought to be consistent here between types and architectures, this patch adds copysign to fix the problem when the string is NAN. It adds two different functions under the sysdeps directory to work around the issue. This patch has been tested on x86_64 and riscv64.
  Resolves: BZ #29501
  v2: Change from macros to different inline functions.
  v3: Add unlikely check to isnan.
  v4: Fix wrong commit message header.
  v5: Fix style: add space before parentheses.
  v6: Add copyright.
  Signed-off-by: Letu Ren <fantasquex@gmail.com>
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
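  The essence of the fix can be sketched as follows; this illustrates the idea only and is not the actual helper added under sysdeps.

      #include <math.h>

      /* Widen a float to double for strfromf-style printing while
         preserving the sign of a NaN, which the hardware conversion on
         RISC-V would otherwise canonicalize away.  */
      static double
      widen_keeping_nan_sign (float f)
      {
        if (__builtin_expect (isnan (f) && signbit (f), 0))
          return -__builtin_nan ("");   /* rebuild a negative quiet NaN */
        return (double) f;
      }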
* aarch64: Fix the extension header write in getcontext and swapcontext (Szabolcs Nagy, 2022-10-28, 2 files, -4/+4)
  The extension header is two 32-bit words, and in the last header both should be 0. There is plenty of space in the __reserved area, but it's better not to write more than we mean to.
* aarch64: Don't build wordcopy (Szabolcs Nagy, 2022-10-28, 1 file, -0/+0)
  Use an empty wordcopy.c to avoid building the generic one. It does not seem to be used anywhere.
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* elf: Introduce _dl_call_fini (Florian Weimer, 2022-10-27, 1 file, -0/+8)
  This consolidates the destructor invocations from _dl_fini and dlclose. Remove the micro-optimization that avoids calling _dl_call_fini if there are no destructors (as dlclose is quite expensive anyway). The debug log message is now printed unconditionally.
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* ld.so: Export tls_init_tp_called as __rtld_tls_init_tp_called (Florian Weimer, 2022-10-27, 1 file, -0/+3)
  This allows the rest of the dynamic loader to check whether the TCB has been set up (and THREAD_GETMEM and THREAD_SETMEM will work).
  Reviewed-by: Siddhesh Poyarekar <siddhesh@gotplt.org>
* aarch64: Use memcpy_simd as the default memcpy (Wilco Dijkstra, 2022-10-26, 6 files, -370/+81)
  Since __memcpy_simd is the fastest memcpy on almost all cores, replace the generic memcpy with it. If SVE is available, an SVE memcpy will be used by default (including for Neoverse N2).
* aarch64: Clean up memset ifunc (Wilco Dijkstra, 2022-10-26, 2 files, -17/+26)
  Clean up the memset ifunc selectors. The A64FX memset relies on a ZVA size of 256, so add an explicit check.
* x86_64: Implement evex512 version of strchrnul, strchr and wcschr (Sunil K Pandey, 2022-10-25, 6 files, -0/+322)
  This patch implements the following evex512 versions of string functions; the evex512 version takes up to 30% fewer cycles than evex, depending on length and alignment.
  - strchrnul function using 512 bit vectors.
  - strchr function using 512 bit vectors.
  - wcschr function using 512 bit vectors.
  Code size data:
      strchrnul-evex.o     599 bytes
      strchrnul-evex512.o  569 bytes (-5%)
      strchr-evex.o        639 bytes
      strchr-evex512.o     595 bytes (-7%)
      wcschr-evex.o        644 bytes
      wcschr-evex512.o     607 bytes (-6%)
  Placeholder functions, not used by any processor at the moment.
  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* linux: Fix generic struct_stat for 64 bit time (BZ# 29657) (Adhemerval Zanella, 2022-10-25, 6 files, -74/+622)
  The generic Linux struct_stat misses the conditionals to use bits/struct_stat_time64_helper.h in the __USE_TIME_BITS64 case for architectures that use __TIMESIZE == 32 (currently csky and nios2).
  Since newer ports should not support 32 bit time_t, the generic implementation should be used as the default. For arm, hppa, and sh a copy of the default struct_stat is added, while for csky and nios2 a new one based on the generic is used, along with conditionals to use bits/struct_stat_time64_helper.h. The default struct_stat is also replaced with the generic one.
  Checked on aarch64-linux-gnu and arm-linux-gnueabihf.
* Avoid undefined behaviour in ibm128 implementation of llroundl (BZ #29488) (Aurelien Jarno, 2022-10-24, 1 file, -12/+9)
  Detecting an overflow edge case depended on signed overflow of a long long. Replace the additions and the overflow checks by __builtin_add_overflow().
  Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
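  As a generic illustration of the change (not the ibm128 code itself), the builtin reports overflow without relying on the wrapped result of a signed addition:

      #include <stdbool.h>

      /* __builtin_add_overflow stores the wrapped result in *sum and
         returns true if the mathematical result does not fit; the
         "add then check the sign" test it replaces relied on undefined
         signed overflow.  */
      static bool
      add_overflows (long long a, long long b, long long *sum)
      {
        return __builtin_add_overflow (a, b, sum);
      }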
* Remove all assembly optimizations for htonl and htons (Adhemerval Zanella, 2022-10-24, 5 files, -175/+0)
  The builtin bswap is already used if optimization is enabled for GCC 4.8+, so the glibc symbols will be used only in very limited scenarios. Also, the gcc-generated code is quite similar to all but the ia64 and i386 htons.
  Checked on alpha, i686, and ia64.
* Remove htonl.S for i386/x86_64 (Cristian Rodríguez, 2022-10-24, 2 files, -68/+0)
  Generic implementation on top of __bswap_32 always expands inline to either bswap or movbe depending on -march=*.
  Signed-off-by: Cristian Rodríguez <crrodriguez@opensuse.org>
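  The generic version being referred to is essentially a one-liner; a sketch for a little-endian target (on big-endian, htonl is the identity):

      #include <byteswap.h>
      #include <stdint.h>

      /* __bswap_32 reduces to the compiler builtin, so this compiles to a
         single bswap (or movbe) instruction at -O1 and above.  */
      uint32_t
      htonl_sketch (uint32_t x)
      {
        return __bswap_32 (x);
      }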
* Fix BZ #29463 in the ibm128 implementation of y1l too (Michael Hudson-Doyle, 2022-10-24, 1 file, -0/+3)
  Avoid moving code across SET_RESTORE_ROUNDL in order to fix [BZ #29463].
  Tested-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
* Add ADDRB from Linux 6.0 to bits/termios-c_cflag.h (Joseph Myers, 2022-10-24, 3 files, -0/+12)
  Linux 6.0 adds a constant ADDRB, a termios c_cflag bit, to its include/uapi/asm-generic/termbits-common.h. Add it accordingly to glibc's bits/termios-c_cflag.h headers. As other constants in these headers are generally in octal, I converted the value to octal to match. As ADDRB isn't in a POSIX-reserved namespace, I made it conditional on __USE_MISC.
  Tested for x86_64.
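  The resulting header addition is a single guarded define; a sketch, assuming the Linux 6.0 value ADDRB = 0x20000000 and showing it in octal as the commit describes:

      #ifdef __USE_MISC
      /* Address bit for addressed serial modes (value assumed).  */
      # define ADDRB 04000000000
      #endif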
* x86: Use `testb` for FSRM check in memmove-vec-unaligned-erms (Noah Goldstein, 2022-10-20, 1 file, -0/+4)
  `testb` saves a bit of code size if the imm operand can be encoded in 1 byte.
  Tested on x86-64.
* x86: Use `testb` for case-locale check in str{n}casecmp-sse42 (Noah Goldstein, 2022-10-20, 1 file, -2/+2)
  `testb` saves a bit of code size if the imm operand can be encoded in 1 byte.
  Tested on x86-64.
* x86: Use `testb` for case-locale check in str{n}casecmp-sse2 (Noah Goldstein, 2022-10-20, 1 file, -2/+2)
  `testb` saves a bit of code size if the imm operand can be encoded in 1 byte.
  Tested on x86-64.
* x86: Use `testb` for case-locale check in str{n}casecmp-avx2 (Noah Goldstein, 2022-10-20, 1 file, -1/+1)
  `testb` saves a bit of code size if the imm operand can be encoded in 1 byte.
  Tested on x86-64.
* x86: Add support for VEC_SIZE == 64 in strcmp-evex.S impl (Noah Goldstein, 2022-10-20, 1 file, -246/+438)
  Unused at the moment, but evex512 strcmp, strncmp, strcasecmp{l}, and strncasecmp{l} functions can be added by including strcmp-evex.S with "x86-evex512-vecs.h" defined.
  In addition, save a bit of code size in a few places:
  1. tzcnt ...         -> bsf ...
  2. vpcmp{b|d} $0 ... -> vpcmpeq{b|d}
  This saves a touch of code size but has a minimal net effect.
  Full check passes on x86-64.