path: root/sysdeps/x86_64
* Fix whitespace-related license issues. (Carlos O'Donell, 2024-10-07; 2 files changed, -2/+2)

Several copies of the licenses in files contained whitespace-related problems. Two cases are addressed here: the first is two spaces after a period, which appears between "PURPOSE." and "See"; the other is a space after the last forward slash in the URL. Both issues are corrected, and the licenses now match the official textual description of the license (and the other license in the sources). Since these whitespace changes do not alter the paragraph structure of the license, nor create new sentences, they do not change the license.
* x86/string: Fixup alignment of main loop in str{n}cmp-evex [BZ #32212] (Noah Goldstein, 2024-09-30; 1 file changed, -13/+13)

The loop should be aligned to 32 bytes so that it can ideally run out of the DSB. This is particularly important on Skylake-Server, where deficiencies in its DSB implementation make it prone to not being able to run loops out of the DSB.

For example, running strcmp-evex on a 200Mb string:

- 32-byte aligned loop: 43,399,578,766 idq.dsb_uops
- not 32-byte aligned loop: 6,060,139,704 idq.dsb_uops

This results in a 25% performance degradation for the non-aligned version.

The fix is to just ensure the code layout is such that the loop is aligned. (Which was previously the case but was accidentally dropped in 84e7c46df.)

NB: The fix is actually 64-byte alignment. This is because 64-byte alignment generally produces more stable performance than 32-byte aligned code (cache-line crosses can affect perf), so if we are going past 16-byte alignment, we might as well go to 64. 64-byte alignment also matches most other functions we over-align, so it creates a common point of optimization.

Times are reported as the ratio Time_With_Patch / Time_Without_Patch; lower is better. The values reported are the geometric mean of the ratio across all tests in bench-strcmp and bench-strncmp. Note this patch is only attempting to improve the Skylake-Server strcmp for long strings; the rest of the numbers are only to test for regressions.

Tigerlake results, strings <= 512: strcmp: 1.026, strncmp: 0.949
Tigerlake results, strings > 512: strcmp: 0.994, strncmp: 0.998
Skylake-Server results, strings <= 512: strcmp: 0.945, strncmp: 0.943
Skylake-Server results, strings > 512: strcmp: 0.778, strncmp: 1.000

The 2.6% regression on TGL strcmp is due to slowdowns caused by changes in the alignment of code handling small sizes (mostly the page-cross logic). These should be safe to ignore because 1) we previously only 16-byte aligned the function, so this behavior is not new and was essentially up to chance before this patch, and 2) this type of alignment-related regression on small sizes really only comes up in tight micro-benchmark loops and is unlikely to have any effect on real-world performance.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* nptl: Fix race conditions in pthread cancellation [BZ#12683] (Adhemerval Zanella, 2024-08-23; 1 file changed, -3/+0)

The current racy approach is to enable asynchronous cancellation before making the syscall, restore the previous cancellation type once the syscall returns, and check if cancellation has happened during the cancellation entrypoint. As described in BZ#12683, this approach shows 2 problems:

1. Cancellation can act after the syscall has returned from the kernel, but before userspace saves the return value. It might result in a resource leak if the syscall allocated a resource or had a side effect (partial read/write), and there is no way for the program to handle it with cancellation handlers.

2. If a signal is handled while the thread is blocked at a cancellable syscall, the entire signal handler runs with asynchronous cancellation enabled. This can lead to issues if the signal handler calls functions which are async-signal-safe but not async-cancel-safe.

For the cancellation to work correctly, there are 5 points at which the cancellation signal could arrive:

```
[ ... )[ ... )[ syscall ]( ...
   1      2        3    4  5
```

1. Before the initial testcancel, e.g. [*... testcancel)
2. Between testcancel and syscall start, e.g. [testcancel...syscall start)
3. While the syscall is blocked and no side effects have yet taken place, e.g. [ syscall ]
4. Same as 3 but with side effects having occurred (e.g. a partial read or write).
5. After syscall end, e.g. (syscall end...*]

And libc wants to act on cancellation in cases 1, 2, and 3, but not in cases 4 or 5. For cases 4 and 5, the cancellation will eventually happen in the next cancellable entrypoint without any further external event. The proposed solution for each case is:

1. Do a conditional branch based on whether the thread has received a cancellation request;
2. It can be caught by the signal handler determining that the saved program counter (from the ucontext_t) is in some address range beginning just before the "testcancel" and ending with the syscall instruction.
3. SIGCANCEL can be caught by the signal handler, which determines that the saved program counter (from the ucontext_t) is in the address range beginning just before "testcancel" and ending with the first uninterruptible (via a signal) syscall instruction that enters the kernel.
4. In this case, except for certain syscalls that ALWAYS fail with EINTR even for non-interrupting signals, the kernel will reset the program counter to point at the syscall instruction during signal handling, so that the syscall is restarted when the signal handler returns. So, from the signal handler's standpoint, this looks the same as case 2, and thus it's taken care of.
5. For syscalls with side effects, the kernel cannot restart the syscall; when it's interrupted by a signal, the kernel must cause the syscall to return with whatever partial result is obtained (e.g. partial read or write).
6. The saved program counter points just after the syscall instruction, so the signal handler won't act on cancellation. This is similar to 4, since the program counter is past the syscall instruction.

So the proposed fixes are:

1. Remove the enable_asynccancel/disable_asynccancel function usage in the cancellable syscall definitions, and instead make them call a common symbol that will check if cancellation is enabled (__syscall_cancel at nptl/cancellation.c), call the arch-specific cancellable entry point (__syscall_cancel_arch), and cancel the thread when required.

2. Provide an arch-specific generic system call wrapper function that contains global markers. These markers will be used in the SIGCANCEL signal handler to check if the interruption happened in a valid syscall and if the syscall has side effects. A reference implementation, sysdeps/unix/sysv/linux/syscall_cancel.c, is provided. However, the markers may not be set in the correct expected places depending on how INTERNAL_SYSCALL_NCS is implemented by the architecture; it is expected that all architectures add an arch-specific implementation.

3. Rewrite the SIGCANCEL asynchronous handler to check both the cancelling type and whether the current IP from the signal handler falls between the global markers, and act accordingly. (A rough sketch of this check is shown after this entry.)

4. Adjust libc code to replace LIBC_CANCEL_ASYNC/LIBC_CANCEL_RESET with the appropriate cancellable syscalls.

5. Adjust the 'lowlevellock-futex.h' arch-specific implementations to provide cancellable futex calls.

Some architectures require specific support on syscall handling:

* On i386 the syscall cancel bridge needs to use the old int80 instruction, because with the optimized vDSO symbol the resulting PC value for an interrupted syscall points to an address outside the expected markers in __syscall_cancel_arch. It has been discussed on LKML [1] how the kernel could help userland accomplish this, but AFAIK the discussion has stalled. Also, sysenter should not be used directly by libc since its calling convention is set by the kernel depending on the underlying x86 chip (check kernel commit 30bfa7b3488bfb1bb75c9f50a5fcac1832970c60).

* mips o32 is the only kABI that requires 7-argument syscalls, and to avoid adding a requirement on all architectures to support it, mips support is added with extra internal defines.

Checked on aarch64-linux-gnu, arm-linux-gnueabihf, powerpc-linux-gnu, powerpc64-linux-gnu, powerpc64le-linux-gnu, i686-linux-gnu, and x86_64-linux-gnu.

[1] https://lkml.org/lkml/2016/3/8/1105
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
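A rough sketch of the marker check described in item 3 above. This is not glibc's actual code: the symbol names, handler shape, and x86-64 register access are illustrative assumptions; the real markers live in the per-arch __syscall_cancel_arch wrapper.

```c
#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <ucontext.h>

/* Hypothetical labels bracketing the cancellable syscall window;
   the real names and placement are arch-specific.  */
extern const char __syscall_cancel_arch_start[];
extern const char __syscall_cancel_arch_end[];

static void
sigcancel_handler (int sig, siginfo_t *si, void *ctx)
{
  ucontext_t *uc = ctx;
  uintptr_t pc = (uintptr_t) uc->uc_mcontext.gregs[REG_RIP]; /* x86-64 */

  if (pc >= (uintptr_t) __syscall_cancel_arch_start
      && pc < (uintptr_t) __syscall_cancel_arch_end)
    {
      /* Cases 1-3: the syscall has produced no side effects yet,
         so it is safe to act on the cancellation here.  */
    }
  /* Cases 4-5: the PC is past the window; the cancellation is
     deferred to the next cancellable entrypoint.  */
}
```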
* x86: Unifies 'strnlen-evex' and 'strnlen-evex512' implementations. (Matthew Sterrett, 2024-08-19; 3 files changed, -680/+469)

This commit uses a common implementation, 'strnlen-evex-base.S', for both 'strnlen-evex' and 'strnlen-evex512'. This patch serves both to reduce the number of implementations and to make some small optimizations that benefit strnlen-evex and strnlen-evex512.

All tests pass on x86. Benchmarks were taken on SKX: https://www.intel.com/content/www/us/en/products/sku/123613/intel-core-i97900x-xseries-processor-13-75m-cache-up-to-4-30-ghz/specifications.html

Geometric mean for strnlen-evex over all benchmarks (N=10) was (new/old) 0.881.
Geometric mean for strnlen-evex512 over all benchmarks (N=10) was (new/old) 0.953.

Code size changes:
strnlen-evex: +31 bytes
strnlen-evex512: +156 bytes

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* x86: Add `Avoid_STOSB` tunable to allow NT memset without ERMS (Noah Goldstein, 2024-08-15; 1 file changed, -5/+13)

The goal of this flag is to allow targets which don't prefer/have ERMS to still access the non-temporal memset implementation. There are 4 cases for tuning memset (a small decision-table sketch follows this entry):

1) `Avoid_STOSB && Avoid_Non_Temporal_Memset`: memset with temporal stores.
2) `Avoid_STOSB && !Avoid_Non_Temporal_Memset`: memset with temporal/non-temporal stores. The non-temporal path goes through the `rep stosb` path. We accomplish this by setting `x86_rep_stosb_threshold` to `x86_memset_non_temporal_threshold`.
3) `!Avoid_STOSB && Avoid_Non_Temporal_Memset`: memset with temporal stores/`rep stosb`.
4) `!Avoid_STOSB && !Avoid_Non_Temporal_Memset`: memset with temporal stores/`rep stosb`/non-temporal stores.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
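A hedged decision-table sketch of the four cases above; the enum and helper names are illustrative, not glibc internals.

```c
#include <stdbool.h>

enum memset_path
{
  TEMPORAL_ONLY,            /* case 1 */
  TEMPORAL_OR_NT_VIA_STOSB, /* case 2: NT path routed via rep stosb threshold */
  TEMPORAL_OR_STOSB,        /* case 3 */
  TEMPORAL_STOSB_OR_NT      /* case 4 */
};

static enum memset_path
pick_memset_path (bool avoid_stosb, bool avoid_nt_memset)
{
  if (avoid_stosb)
    return avoid_nt_memset ? TEMPORAL_ONLY : TEMPORAL_OR_NT_VIA_STOSB;
  return avoid_nt_memset ? TEMPORAL_OR_STOSB : TEMPORAL_STOSB_OR_NT;
}
```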
* x86: Fix bug in strchrnul-evex512 [BZ #32078] (Noah Goldstein, 2024-08-15; 1 file changed, -4/+4)

The issue was that we were expecting no matches with CHAR before the start of the string in the page-cross case. The check code in the page-cross case:

```
and    $0xffffffffffffffc0,%rax
vmovdqa64 (%rax),%zmm17
vpcmpneqb %zmm17,%zmm16,%k1
vptestmb %zmm17,%zmm17,%k0{%k1}
kmovq  %k0,%rax
inc    %rax
shr    %cl,%rax
je     L(continue)
```

expects that all characters that match neither null nor CHAR will be 1s in `rax` prior to the `inc`. Then the `inc` will overflow all of the 1s where no relevant match was found.

This is incorrect in the page-cross case, as the `vmovdqa64 (%rax),%zmm17` loads from before the start of the input string. If there are matches with CHAR before the start of the string, `rax` won't properly overflow.

The fix is quite simple. Just replace:

```
inc    %rax
shr    %cl,%rax
```

with:

```
sar    %cl,%rax
inc    %rax
```

The arithmetic shift will clear any matches prior to the start of the string while maintaining the sign bit, so the 1s can properly overflow to zero in the case of no matches. (A small C demonstration of this mask trick follows this entry.)

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
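A minimal C sketch of why the operand order matters, with hypothetical mask values (bit i is 1 when byte i matches neither NUL nor CHAR):

```c
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  unsigned cl = 8;   /* low bits cover bytes before the string start */
  /* Spurious CHAR "match" at byte 3, i.e. before the string starts.  */
  uint64_t mask = ~0ULL & ~(1ULL << 3);

  /* Buggy order (inc, then shr): the +1 carry stops at the spurious
     0 bit, so the shifted result is nonzero and a match is reported.  */
  uint64_t buggy = (mask + 1) >> cl;

  /* Fixed order (sar, then inc): the arithmetic shift discards the
     spurious bit and refills the top with copies of the sign bit, so
     the +1 overflows to zero, meaning "no match".  (Right-shifting a
     negative value is arithmetic on mainstream compilers.)  */
  uint64_t fixed = (uint64_t) (((int64_t) mask >> cl) + 1);

  printf ("buggy = %#llx, fixed = %#llx\n",
          (unsigned long long) buggy, (unsigned long long) fixed);
  return 0;
}
```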
* added inputs giving large errors on x86_64 for new C23 functions (Paul Zimmermann, 2024-08-07; 1 file changed, -43/+43)

These functions are exp10m1, exp2m1, log10p1, and log2p1. Also regenerated ulps on x86_64. For each format, there are 4 values, one for each rounding mode. (For the intel96 format, there are 8 values: 4 for Intel hardware and 4 for AMD hardware. However, regen-ulps was only run on Intel; it should be run in a separate patch on an AMD x86_64.)

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* x86-64: Remove sysdeps/x86_64/x32/dl-machine.h (H.J. Lu, 2024-07-25; 2 files changed, -87/+16)

Remove sysdeps/x86_64/x32/dl-machine.h by folding the x32 ARCH_LA_PLTENTER, ARCH_LA_PLTEXIT and RTLD_START into sysdeps/x86_64/dl-machine.h. There are no regressions on x86-64 or x32. There are no changes in the x86-64 _dl_start_user. On x32, the _dl_start_user changes are:

```
<_dl_start_user>:
       mov    %eax,%r12d
+      mov    %esp,%r13d
       mov    (%rsp),%edx
       mov    %edx,%esi
-      mov    %esp,%r13d
       and    $0xfffffff0,%esp
       mov    0x0(%rip),%edi        # <_dl_start_user+0x14>
       lea    0x8(%r13,%rdx,4),%ecx
```

Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* This patch adds larger ulp errors for the log2p1 function. (Paul Zimmermann, 2024-07-22; 1 file changed, -5/+5)

Changes in v2:
- added larger error for long double on AMD reported by Adhemerval (https://sourceware.org/pipermail/libc-alpha/2024-June/157755.html)

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* x32: xfail elf/tst-platform-1 [BZ #22363] (H.J. Lu, 2024-07-19; 1 file changed, -0/+6)

Xfail elf/tst-platform-1 on x32 since the kernel passes i686 in AT_PLATFORM. See https://sourceware.org/bugzilla/show_bug.cgi?id=22363

Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Sam James <sam@gentoo.org>
* elf: Support recursive use of dynamic TLS in interposed malloc (Florian Weimer, 2024-07-01; 1 file changed, -1/+4)

It turns out that quite a few applications use bundled mallocs that have been built to use global-dynamic TLS (instead of the recommended initial-exec TLS). The previous workaround from commit afe42e935b3ee97bac9a7064157587777259c60e ("elf: Avoid some free (NULL) calls in _dl_update_slotinfo") unfortunately does not fix all encountered cases.

This change avoids the TLS generation update for recursive use of TLS from a malloc that was called during a TLS update. This is possible because an interposed malloc has a fixed module ID and TLS slot (it cannot be unloaded). If an initially-loaded module ID is encountered in __tls_get_addr and the dynamic linker is already in the middle of a TLS update, use the outdated DTV, thus avoiding another call into malloc. It's still necessary to update the DTV to the most recent generation, to get out of the slow path, which is why the check for recursion is needed.

The bookkeeping is done using a global counter instead of a per-thread flag, because TLS access in the dynamic linker is tricky. All of this will go away once the dynamic linker stops using malloc for TLS, likely as part of a change that pre-allocates all TLS during pthread_create/dlopen.

Fixes commit d2123d68275acc0f061e73d5f86ca504e0d5a344 ("elf: Fix slow tls access after dlopen [BZ #19924]").

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
* x86_64: Optimize large size copy in memmove-ssse3 (MayShao-oc, 2024-06-30; 1 file changed, -5/+9)

This patch optimizes large-size copies to use normal (temporal) stores when src > dst and the regions overlap, making the logic the same as in memmove-vec-unaligned-erms.S. (A minimal sketch of the direction choice involved is shown after this entry.)

The current memmove-ssse3 uses '__x86_shared_cache_size_half' as the non-temporal threshold; this patch updates that value to '__x86_shared_non_temporal_threshold'. The __x86_shared_non_temporal_threshold is CPU-specific, and different CPUs will have different values based on the related nt-benchmark results, whereas using '__x86_shared_cache_size_half' as the non-temporal threshold in memmove-ssse3 sounds unreasonable.

The performance does not change drastically, but shows overall improvements without any major regressions or gains.

Results on Zhaoxin KX-7000:
bench-memcpy geometric_mean(N=20) New / Original: 0.999
bench-memcpy-random geometric_mean(N=20) New / Original: 0.999
bench-memcpy-large geometric_mean(N=20) New / Original: 0.978
bench-memmove geometric_mean(N=20) New / Original: 1.000
bench-memmove-large geometric_mean(N=20) New / Original: 0.962

Results on Intel Core i5-6600K:
bench-memcpy geometric_mean(N=20) New / Original: 1.001
bench-memcpy-random geometric_mean(N=20) New / Original: 0.999
bench-memcpy-large geometric_mean(N=20) New / Original: 1.001
bench-memmove geometric_mean(N=20) New / Original: 0.995
bench-memmove-large geometric_mean(N=20) New / Original: 0.936

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
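A minimal sketch of the direction choice that makes overlapping copies safe; sketch_memmove is a hypothetical helper, not glibc's implementation:

```c
#include <stddef.h>

/* When dst lies inside [src, src + n), a forward copy would overwrite
   source bytes before reading them, so copy backward; otherwise copy
   forward (the forward path is the one eligible for non-temporal
   stores above the threshold).  */
static void *
sketch_memmove (void *dst, const void *src, size_t n)
{
  unsigned char *d = dst;
  const unsigned char *s = src;

  if (d > s && (size_t) (d - s) < n)
    for (size_t i = n; i > 0; i--)   /* overlap with dst above src */
      d[i - 1] = s[i - 1];
  else
    for (size_t i = 0; i < n; i++)
      d[i] = s[i];
  return dst;
}
```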
* elf: Remove _DL_PLATFORMS_COUNT (Stefan Liebler, 2024-06-18; 1 file changed, -3/+2)

Remove the definitions of _DL_PLATFORMS_COUNT, as they are no longer used after the removal in elf/dl-cache.c:search_cache(). Note: on x86, we can also get rid of the definitions HWCAP_PLATFORMS_START and HWCAP_PLATFORMS_COUNT.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Convert to autoconf 2.72 (vanilla release, no distribution patches) (Andreas K. Hüttel, 2024-06-17; 1 file changed, -17/+23)

As discussed at the patch review meeting.

Signed-off-by: Andreas K. Hüttel <dilfridge@gentoo.org>
Reviewed-by: Simon Chopin <simon.chopin@canonical.com>
* Implement C23 exp2m1, exp10m1 (Joseph Myers, 2024-06-17; 1 file changed, -0/+48)

C23 adds various <math.h> function families originally defined in TS 18661-4. Add the exp2m1 and exp10m1 functions (exp2(x)-1 and exp10(x)-1, like expm1). As with other such functions, these use type-generic templates that could be replaced with faster and more accurate type-specific implementations in future. Test inputs are copied from those for expm1, plus some additions close to the overflow threshold (copied from exp2 and exp10) and also some near the underflow threshold.

exp2m1 has the unusual property of having an input (M_MAX_EXP) where whether the function overflows (under IEEE semantics) depends on the rounding mode. Although these could reasonably be XFAILed in the testsuite (as we do in some cases for arguments very close to a function's overflow threshold, when an error of a few ulps in the implementation can result in the implementation not agreeing with an ideal one on whether overflow takes place; the testsuite isn't smart enough to handle this automatically), since these functions aren't required to be correctly rounding, I made the implementation check for and handle this case specially.

The Makefile ordering expected by lint-makefiles for the new functions is a bit peculiar, but I implemented it in this patch so that the test passes; I don't know why log2 also needed moving in one Makefile variable setting when it didn't in my previous patches, but the failure showed a different place was expected for that function as well.

The powerpc64le IFUNC setup seems not to be as self-contained as one might hope; it shouldn't be necessary to add IFUNCs for new functions such as these simply to get them building, but without setting up IFUNCs for the new functions, there were undefined references to __GI___expm1f128 (that IFUNC machinery results in no such function being defined, but doesn't stop include/math.h from doing the redirection, resulting in the exp2m1f128 and exp10m1f128 implementations expecting to call it).

Tested for x86_64 and x86, and with build-many-glibcs.py.
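A quick usage sketch, assuming a libm that already ships these C23 functions (glibc 2.40 or later, with _GNU_SOURCE or equivalent feature macros exposing them); link with -lm:

```c
#define _GNU_SOURCE
#include <math.h>
#include <stdio.h>

int
main (void)
{
  double x = 1e-10;
  /* exp2m1 keeps full precision for tiny x, where computing
     exp2(x) - 1.0 directly cancels catastrophically.  */
  printf ("exp2m1(%g)  = %.17g\n", x, exp2m1 (x));
  printf ("exp2(x) - 1 = %.17g\n", exp2 (x) - 1.0);
  printf ("exp10m1(%g) = %.17g\n", x, exp10m1 (x));
  return 0;
}
```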
* Implement C23 log10p1 (Joseph Myers, 2024-06-17; 1 file changed, -0/+24)

C23 adds various <math.h> function families originally defined in TS 18661-4. Add the log10p1 functions (log10(1+x): like log1p, but for base-10 logarithms). This is directly analogous to the log2p1 implementation (except that whereas log2p1 has a smaller underflow range than log1p, log10p1 has a larger underflow range). The test inputs are copied from those for log1p and log2p1, plus a few more inputs in that wider underflow range.

Tested for x86_64 and x86, and with build-many-glibcs.py.
* Implement C23 logp1 (Joseph Myers, 2024-06-17; 1 file changed, -0/+24)

C23 adds various <math.h> function families originally defined in TS 18661-4. Add the logp1 functions (aliases for the log1p functions; the name is intended to be more consistent with the new log2p1 and log10p1, where clearly it would have been very confusing to name those functions log21p and log101p). As aliases rather than new functions, the content of this patch is somewhat different from those actually adding new functions.

Tests are shared with log1p, so this patch *does* mechanically update all affected libm-test-ulps files to expect the same errors for both functions.

The vector versions of log1p on aarch64 and x86_64 are *not* updated to have logp1 aliases (and thus there are no corresponding header, tests, abilist or ulps changes for vector functions either). It would be reasonable for such vector aliases and corresponding changes to other files to be made separately. For now, the log1p tests instead avoid testing logp1 in the vector case (a Makefile change is needed to avoid problems with grep, used in generating the .c files for vector function tests, matching more than one ALL_RM_TEST line in a file testing multiple functions with the same inputs, when it assumes that the .inc file only has a single such line).

Tested for x86_64 and x86, and with build-many-glibcs.py.
* x86: Add separate non-temporal tunable for memset (Noah Goldstein, 2024-05-30; 1 file changed, -3/+3)

The tuning for non-temporal stores for memset vs. memcpy is not always the same. This includes both the exact value and whether non-temporal stores are profitable at all for a given arch. This patch adds `x86_memset_non_temporal_threshold`. Currently we disable non-temporal stores for non-Intel vendors, as the only benchmarks showing their benefit have been on Intel hardware.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* x86: Improve large memset perf with non-temporal stores [RHEL-29312] (Noah Goldstein, 2024-05-30; 1 file changed, -58/+91)

Previously we used `rep stosb` for all medium/large memsets. This is notably worse than non-temporal stores for large (above a few MBs) memsets. See: https://docs.google.com/spreadsheets/d/1opzukzvum4n6-RUVHTGddV6RjAEil4P2uMjjQGLbLcU/edit?usp=sharing for data comparing different strategies for large memset on ICX and SKX.

Using non-temporal stores can be up to 3x faster on ICX and 2x faster on SKX. Historically, these numbers would not have been so good because of the zero-over-zero writeback optimization that `rep stosb` is able to do. But the zero-over-zero writeback optimization has been removed as a potential side-channel attack, so there is no longer any good reason to only rely on `rep stosb` for large memsets. On the flip side, non-temporal writes can avoid data in their RFO requests, saving memory bandwidth. (A minimal sketch of a streaming-store memset loop follows this entry.)

All of the other changes to the file are to re-organize the code blocks to maintain "good" alignment given the new code added in the `L(stosb_local)` case.

The results from running the glibc memset benchmarks on TGL-client for N=20 runs:

Geometric mean across the suite New / Old EVEX256: 0.979
Geometric mean across the suite New / Old EVEX512: 0.979
Geometric mean across the suite New / Old AVX2: 0.986
Geometric mean across the suite New / Old SSE2: 0.979

Most of the cases are essentially unchanged; this is mostly to show that adding the non-temporal case didn't add any regressions to the other cases.

The results on the memset-large benchmark suite on TGL-client for N=20 runs:

Geometric mean across the suite New / Old EVEX256: 0.926
Geometric mean across the suite New / Old EVEX512: 0.925
Geometric mean across the suite New / Old AVX2: 0.928
Geometric mean across the suite New / Old SSE2: 0.924

So roughly a 7.5% speedup. This is lower than what we see on servers (likely because clients typically have faster single-core bandwidth, so saving bandwidth on RFOs is less impactful), but still advantageous.

Full test suite passes on x86_64 with and without multiarch.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
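A minimal sketch of a streaming-store memset loop of the kind this change enables for very large sizes. This is a hypothetical helper, not glibc's implementation; dst is assumed 16-byte aligned and n a multiple of 16:

```c
#include <emmintrin.h>  /* SSE2: _mm_set1_epi8, _mm_stream_si128 */
#include <stddef.h>

/* Streaming (non-temporal) stores write around the cache, avoiding
   the RFO traffic a normal store incurs.  */
static void
memset_nontemporal (void *dst, int c, size_t n)
{
  __m128i v = _mm_set1_epi8 ((char) c);
  char *p = dst;
  for (size_t i = 0; i < n; i += 16)
    _mm_stream_si128 ((__m128i *) (p + i), v);
  _mm_sfence ();  /* order the streaming stores before later accesses */
}
```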
* x86_64: Reformat elf_machine_rela (Xin Wang, 2024-05-27; 1 file changed, -4/+5)

A space is added before the left bracket of the x86_64 elf_machine_rela function, in order to harmonize with the rest of the implementation of the function and to make the function easier to search for. The lines where the function definition is located have been re-indented, and its left curly bracket placed in the correct position.

Signed-off-by: Xin Wang <yw987194828@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* Implement C23 log2p1 (Joseph Myers, 2024-05-20; 1 file changed, -0/+24)

C23 adds various <math.h> function families originally defined in TS 18661-4. Add the log2p1 functions (log2(1+x): like log1p, but for base-2 logarithms).

This illustrates the intended structure of implementations of all these function families: define them initially with a type-generic template implementation. If someone wishes to add type-specific implementations, it is likely such implementations can be both faster and more accurate than the type-generic one and can then override it for types for which they are implemented (adding benchmarks would be desirable in such cases to demonstrate that a new implementation is indeed faster).

The test inputs are copied from those for log1p. Note that these changes make gen-auto-libm-tests depend on MPFR 4.2 (or later).

The bulk of the changes are fairly generic for any such new function. (sysdeps/powerpc/nofpu/Makefile only needs changing for those type-generic templates that use fabs.)

Tested for x86_64 and x86, and with build-many-glibcs.py.
* x86_64: Fix missing wcsncat function definition without multiarch (x86-64-v4) (Gabi Falk, 2024-05-08; 1 file changed, -3/+3)

This code expects the WCSCAT preprocessor macro to be predefined in case the evex implementation of the function should be defined with a name different from __wcsncat_evex. However, when glibc is built for x86-64-v4 without multiarch support, sysdeps/x86_64/wcsncat.S defines the WCSNCAT variable instead of WCSCAT to build it as wcsncat. Rename the variable to WCSNCAT, as it is actually a better naming choice for the variable in this case.

Reported-by: Kenton Groombridge
Link: https://bugs.gentoo.org/921945
Fixes: 64b8b6516b ("x86: Add evex optimized functions for the wchar_t strcpy family")
Signed-off-by: Gabi Falk <gabifalk@gmx.com>
Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>
* Revert "x86_64: Suppress false positive valgrind error"Florian Weimer2024-04-132-24/+0
| | | | | | | | | | | | | | This reverts commit a1735e0aa858f0c8b15e5ee9975bff4279423680. The test failure is a real valgrind bug that needs to be fixed before valgrind is usable with a glibc that has been built with CC="gcc -march=x86-64-v3". The proposed valgrind patch teaches valgrind to replace ld.so strcmp with an unoptimized scalar implementation, thus avoiding any AVX2-related problems. Valgrind bug: <https://bugs.kde.org/show_bug.cgi?id=485487> Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* x86-64: Exclude FMA4 IFUNC functions for -mapxf (H.J. Lu, 2024-04-06; 5 files changed, -9/+67)

When -mapxf is used to build glibc, the resulting glibc will never run on FMA4 machines. Exclude the FMA4 IFUNC functions when -mapxf is used. This requires a GCC that defines __APX_F__ for -mapxf, with commit:

1df56719bd8 x86: Define __APX_F__ for -mapxf

Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>
* math: x86 trunc traps when FE_INEXACT is enabled (BZ 31603) (Adhemerval Zanella, 2024-04-04; 1 file changed, -36/+0)

The implementations of the trunc functions using x87 floating point (i386, and x86_64 long double only) trap when FE_INEXACT is enabled. Although this is a GNU extension outside the scope of the C standard, other architectures that also support traps do not show this behavior. The fix moves the implementation to a common one that holds any exceptions with a 'fnclex' (libc_feholdexcept_setround_387).

Checked on x86_64-linux-gnu and i686-linux-gnu.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* math: x86 floor traps when FE_INEXACT is enabled (BZ 31601) (Adhemerval Zanella, 2024-04-04; 1 file changed, -33/+0)

The implementations of the floor functions using x87 floating point (i386, and x86_64 long double only) trap when FE_INEXACT is enabled. Although this is a GNU extension outside the scope of the C standard, other architectures that also support traps do not show this behavior. The fix moves the implementation to a common one that holds any exceptions with a 'fnclex' (libc_feholdexcept_setround_387).

Checked on x86_64-linux-gnu and i686-linux-gnu.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* math: x86 ceill traps when FE_INEXACT is enabled (BZ 31600) (Adhemerval Zanella, 2024-04-04; 1 file changed, -34/+0)

The implementations of the ceil functions using x87 floating point (i386, and x86_64 long double only) trap when FE_INEXACT is enabled. Although this is a GNU extension outside the scope of the C standard, other architectures that also support traps do not show this behavior. The fix moves the implementation to a common one that holds any exceptions with a 'fnclex' (libc_feholdexcept_setround_387).

Checked on x86_64-linux-gnu and i686-linux-gnu.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* x86_64: Remove avx512 strstr implementation (Adhemerval Zanella, 2024-03-27; 4 files changed, -248/+4)

As indicated in a recent thread, this is a simple brute-force algorithm that checks the whole needle at a matching character pair (and does so 1 byte at a time after the first 64 bytes of a needle). Also, it never skips ahead and thus can match at every haystack position after trying to match all of the needle, which the generic implementation avoids.

As indicated by Wilco, a 4x larger needle and a 16x larger haystack give a clear 65x slowdown for both basic_strstr and __strstr_avx512:

```
"ifuncs": ["basic_strstr", "twoway_strstr", "__strstr_avx512", "__strstr_sse2_unaligned", "__strstr_generic"],

{ "len_haystack": 65536, "len_needle": 1024, "align_haystack": 0, "align_needle": 0, "fail": 1, "desc": "Difficult bruteforce needle", "timings": [4.0948e+07, 15094.5, 3.20818e+07, 108558, 10839.2] },

{ "len_haystack": 1048576, "len_needle": 4096, "align_haystack": 0, "align_needle": 0, "fail": 1, "desc": "Difficult bruteforce needle", "timings": [2.69767e+09, 100797, 2.08535e+09, 495706, 82666.9] }
```

PS: I don't have an AVX512-capable machine to verify these issues, but skimming through the code it does seem to follow what Wilco has described.

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* Add tst-gnu2-tls2mod1 to test-internal-extras (Andreas Schwab, 2024-03-19; 1 file changed, -0/+2)

That allows sysdeps/x86_64/tst-gnu2-tls2mod1.S to use internal headers.

Fixes: 717ebfa85c ("x86-64: Allocate state buffer space for RDI, RSI and RBX")
* x86-64: Allocate state buffer space for RDI, RSI and RBX (H.J. Lu, 2024-03-18; 1 file changed, -0/+87)

_dl_tlsdesc_dynamic preserves RDI, RSI and RBX before realigning the stack. After realigning the stack, it saves RCX, RDX, R8, R9, R10 and R11. Define TLSDESC_CALL_REGISTER_SAVE_AREA to allocate space for RDI, RSI and RBX, to avoid clobbering the saved RDI, RSI and RBX values on the stack by the xsave to STATE_SAVE_OFFSET(%rsp):

```
+==================+<- stack frame start aligned at 8 or 16 bytes
|                  |<- RDI saved in the red zone
|                  |<- RSI saved in the red zone
|                  |<- RBX saved in the red zone
|                  |<- paddings for stack realignment of 64 bytes
|------------------|<- xsave buffer end aligned at 64 bytes
|                  |<-
|                  |<-
|                  |<-
|------------------|<- xsave buffer start at STATE_SAVE_OFFSET(%rsp)
|                  |<- 8-byte padding for 64-byte alignment
|                  |<- 8-byte padding for 64-byte alignment
|                  |<- R11
|                  |<- R10
|                  |<- R9
|                  |<- R8
|                  |<- RDX
|                  |<- RCX
+==================+<- RSP aligned at 64 bytes
```

TLSDESC_CALL_REGISTER_SAVE_AREA, the total register save area size for all integer registers, is defined by adding 24 to STATE_SAVE_OFFSET, since RDI, RSI and RBX are saved onto the stack without adjusting the stack pointer first, using the red zone. This fixes BZ #31501.

Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>
* x86-64: Update _dl_tlsdesc_dynamic to preserve AMX registers (H.J. Lu, 2024-02-29; 3 files changed, -1/+44)

_dl_tlsdesc_dynamic should also preserve AMX registers, which are caller-saved. Add X86_XSTATE_TILECFG_ID and X86_XSTATE_TILEDATA_ID to the x86-64 TLSDESC_CALL_STATE_SAVE_MASK. Compute the AMX state size and save it in xsave_state_full_size, which is only used by _dl_tlsdesc_dynamic_xsave and _dl_tlsdesc_dynamic_xsavec. This fixes the AMX part of BZ #31372. Tested on an AMX processor.

The AMX test is enabled only for compilers with the fix for https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114098 (GCC 14 and the GCC 11/12/13 branches have the bug fix).

Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>
* x86_64: Suppress false positive valgrind error (H.J. Lu, 2024-02-28; 2 files changed, -0/+23)

When strcmp-avx2.S is used as the default, elf/tst-valgrind-smoke fails with:

```
==1272761== Conditional jump or move depends on uninitialised value(s)
==1272761==    at 0x4022C98: strcmp (strcmp-avx2.S:462)
==1272761==    by 0x400B05B: _dl_name_match_p (dl-misc.c:75)
==1272761==    by 0x40085F3: _dl_map_object (dl-load.c:1966)
==1272761==    by 0x401AEA4: map_doit (rtld.c:644)
==1272761==    by 0x4001488: _dl_catch_exception (dl-catch.c:237)
==1272761==    by 0x40015AE: _dl_catch_error (dl-catch.c:256)
==1272761==    by 0x401B38F: do_preload (rtld.c:816)
==1272761==    by 0x401C116: handle_preload_list (rtld.c:892)
==1272761==    by 0x401EDF5: dl_main (rtld.c:1842)
==1272761==    by 0x401A79E: _dl_sysdep_start (dl-sysdep.c:140)
==1272761==    by 0x401BEEE: _dl_start_final (rtld.c:494)
==1272761==    by 0x401BEEE: _dl_start (rtld.c:581)
==1272761==    by 0x401AD87: ??? (in */elf/ld.so)
```

The assembly code is:

```
0x0000000004022c80 <+144>: vmovdqu 0x20(%rdi),%ymm0
0x0000000004022c85 <+149>: vpcmpeqb 0x20(%rsi),%ymm0,%ymm1
0x0000000004022c8a <+154>: vpcmpeqb %ymm0,%ymm15,%ymm2
0x0000000004022c8e <+158>: vpandn %ymm1,%ymm2,%ymm1
0x0000000004022c92 <+162>: vpmovmskb %ymm1,%ecx
0x0000000004022c96 <+166>: inc %ecx
=> 0x0000000004022c98 <+168>: jne 0x4022c32 <strcmp+66>
```

strcmp-avx2.S has 32-byte vector loads of strings which are shorter than 32 bytes:

```
(gdb) p (char *) ($rdi + 0x20)
$6 = 0x1ffeffea20 "memcheck-amd64-linux.so"
(gdb) p (char *) ($rsi + 0x20)
$7 = 0x4832640 "core-amd64-linux.so"
(gdb) call (int) strlen ((char *) ($rsi + 0x20))
$8 = 19
(gdb) call (int) strlen ((char *) ($rdi + 0x20))
$9 = 23
(gdb)
```

This triggers the valgrind error. The above code is safe since the loads don't cross the page boundary.

Update tst-valgrind-smoke.sh to accept an optional suppression file, and pass a suppression file to valgrind when strcmp-avx2.S is the default implementation of strcmp. (A sketch of such a suppression follows this entry.)

Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>
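For reference, a hedged sketch of what such a suppression could look like; the suppression name and frame list here are illustrative, and the actual file added by the commit may differ. It would be passed with `valgrind --suppressions=<file>`:

```
{
   glibc-strcmp-avx2-partial-load
   Memcheck:Cond
   fun:strcmp
   fun:_dl_name_match_p
}
```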
* x86-64: Don't use SSE resolvers for ISA level 3 or above (H.J. Lu, 2024-02-28; 1 file changed, -6/+9)

When glibc is built with ISA level 3 or above enabled, SSE resolvers aren't available and glibc fails to build:

```
ld: .../elf/librtld.os: in function `init_cpu_features':
.../elf/../sysdeps/x86/cpu-features.c:1200:(.text+0x1445f): undefined reference to `_dl_runtime_resolve_fxsave'
ld: .../elf/librtld.os: relocation R_X86_64_PC32 against undefined hidden symbol `_dl_runtime_resolve_fxsave' can not be used when making a shared object
/usr/local/bin/ld: final link failed: bad value
```

For ISA level 3 or above, don't use _dl_runtime_resolve_fxsave nor _dl_tlsdesc_dynamic_fxsave. This fixes BZ #31429.

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* x86: Update _dl_tlsdesc_dynamic to preserve caller-saved registers (H.J. Lu, 2024-02-28; 10 files changed, -150/+306)

The compiler generates the following instruction sequence for GNU2 dynamic TLS access:

```
leaq tls_var@TLSDESC(%rip), %rax
call *tls_var@TLSCALL(%rax)
```

or:

```
leal tls_var@TLSDESC(%ebx), %eax
call *tls_var@TLSCALL(%eax)
```

The CALL instruction is transparent to the compiler, which assumes all registers, except for EFLAGS and RAX/EAX, are unchanged after the CALL. When _dl_tlsdesc_dynamic is called, it calls __tls_get_addr on the slow path. __tls_get_addr is a normal function which doesn't preserve any caller-saved registers. _dl_tlsdesc_dynamic saved and restored integer caller-saved registers, but didn't preserve any other caller-saved registers. Add _dl_tlsdesc_dynamic IFUNC functions for FNSAVE, FXSAVE, XSAVE and XSAVEC to save and restore all caller-saved registers. This fixes BZ #31372.

Add GLRO(dl_x86_64_runtime_resolve) with GLRO(dl_x86_tlsdesc_dynamic) to optimize elf_machine_runtime_setup.

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* x86_64: Exclude SSE, AVX and FMA4 variants in libm multiarch (Sunil K Pandey, 2024-02-25; 60 files changed, -295/+896)

When glibc is built with ISA level 3 or higher by default, the resulting glibc binaries won't run on SSE or FMA4 processors. Exclude the SSE, AVX and FMA4 variants in libm multiarch when ISA level 3 or higher is enabled by default. When glibc is built with ISA level 2 enabled by default, only keep the SSE4.1 variant. Fixes BZ 31335.

NB: the elf/tst-valgrind-smoke test fails with ISA level 4, because valgrind doesn't support AVX512 instructions: https://bugs.kde.org/show_bug.cgi?id=383010

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* Apply the Makefile sorting fix (H.J. Lu, 2024-02-15; 3 files changed, -14/+14)

Apply the Makefile sorting fix generated by sort-makefile-lines.py.
* sysdeps/x86_64/Makefile (tests): Add the end marker (H.J. Lu, 2024-02-15; 1 file changed, -2/+4)
* x86: Expand the comment on when REP STOSB is used on memset (Adhemerval Zanella, 2024-02-13; 1 file changed, -1/+3)

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* x86/cet: fix shadow stack test scripts (Michael Jeanson, 2024-02-12; 3 files changed, -3/+3)

Some shadow stack test scripts use the '==' operator with the 'test' command to validate exit codes, resulting in the following error:

```
sysdeps/x86_64/tst-shstk-legacy-1e.sh: 31: test: 139: unexpected operator
```

The '==' operator is invalid for the 'test' command; use '-eq' like the previous call to 'test'.

Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* string: Use builtins for ffs and ffsll (Adhemerval Zanella Netto, 2024-02-01; 4 files changed, -83/+2)

It allows removing a lot of arch-specific implementations. Checked on x86_64, aarch64, powerpc64. (A small usage sketch follows this entry.)

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
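A quick sketch of the GCC/Clang builtin behind ffsll: it returns one plus the index of the least significant set bit (0 for a zero argument) and expands inline, e.g. to bsf/tzcnt on x86-64, which is what makes the hand-written assembly versions redundant:

```c
#include <stdio.h>

int
main (void)
{
  printf ("%d\n", __builtin_ffsll (0));                      /* 0  */
  printf ("%d\n", __builtin_ffsll (1));                      /* 1  */
  printf ("%d\n", __builtin_ffsll (0x8000000000000000ULL));  /* 64 */
  return 0;
}
```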
* x86-64: Check if mprotect works before rewriting PLT (H.J. Lu, 2024-01-15; 1 file changed, -0/+25)

Systemd execution environment configuration may prohibit changing a memory mapping to become executable:

MemoryDenyWriteExecute=
Takes a boolean argument. If set, attempts to create memory mappings that are writable and executable at the same time, or to change existing memory mappings to become executable, or mapping shared memory segments as executable, are prohibited.

When it is set, the systemd service stops working if PLT rewrite is enabled. Check if mprotect works before rewriting the PLT. (A minimal sketch of such a probe follows this entry.) This fixes BZ #31230. This also works with SELinux when deny_execmem is on.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
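A minimal sketch of the probe described above (a hypothetical helper, not the glibc code): map a scratch page and test whether mprotect may make it executable. Under MemoryDenyWriteExecute= or SELinux deny_execmem, the mprotect call fails and PLT rewriting must be skipped:

```c
#include <sys/mman.h>
#include <unistd.h>

static int
can_make_executable (void)
{
  long ps = sysconf (_SC_PAGESIZE);
  void *p = mmap (NULL, ps, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p == MAP_FAILED)
    return 0;
  int ok = mprotect (p, ps, PROT_READ | PROT_EXEC) == 0;
  munmap (p, ps);
  return ok;
}
```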
* x86_64: Optimize ffsll function code size. (Sunil K Pandey, 2024-01-13; 1 file changed, -5/+5)

The ffsll function randomly regresses by ~20%, depending on how the code gets aligned in memory. The ffsll function code size is 17 bytes. Since the default function alignment is 16 bytes, it can load at 16-, 32-, 48- or 64-byte aligned memory. When the ffsll function loads at 16-, 32- or 64-byte aligned memory, the entire code fits in a single 64-byte cache line. When the ffsll function loads at 48-byte aligned memory, it splits across two cache lines, hence the random regression. Reducing the ffsll function size from 17 bytes to 12 bytes ensures that it will always fit in a single 64-byte cache line.

This patch fixes the ffsll function's random performance regression.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* x86-64/cet: Make CET feature check specific to Linux/x86 (H.J. Lu, 2024-01-11; 1 file changed, -4/+5)

CET feature bits in the TCB, which are Linux-specific, are used to check if CET features are active. Move the CET feature check to the Linux/x86 directory.

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* i386: Remove CET support bits (H.J. Lu, 2024-01-10; 6 files changed, -1/+165)

1. Remove _dl_runtime_resolve_shstk and _dl_runtime_profile_shstk.
2. Move CET offsets from x86 cpu-features-offsets.sym to x86-64 features-offsets.sym.
3. Rename x86 cet-control.h to x86-64 feature-control.h, since it is only for x86-64 and is also used for PLT rewrite.
4. Add x86-64 ldsodefs.h to include feature-control.h.
5. Change TUNABLE_CALLBACK (set_plt_rewrite) to x86-64 only.
6. Move x86 dl-procruntime.c to x86-64.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* x86-64/cet: Move check-cet.awk to x86_64 (H.J. Lu, 2024-01-10; 2 files changed, -1/+54)

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* x86-64/cet: Move dl-cet.[ch] to x86_64 directories (H.J. Lu, 2024-01-10; 1 file changed, -0/+364)

Since CET is only enabled for x86-64, move dl-cet.[ch] to the x86_64 directories.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* x86: Move x86-64 shadow stack startup codes (H.J. Lu, 2024-01-10; 1 file changed, -0/+74)

Move sysdeps/x86/libc-start.h to sysdeps/x86_64/libc-start.h and use sysdeps/generic/libc-start.h for i386.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* i386: Remove CET support (Adhemerval Zanella, 2024-01-09; 1 file changed, -0/+42)

CET is only supported for x86_64; this patch reverts:

- faaee1f07ed x86: Support shadow stack pointer in setjmp/longjmp.
- be9ccd27c09 i386: Add _CET_ENDBR to indirect jump targets in add_n.S/sub_n.S
- c02695d7764 x86/CET: Update vfork to prevent child return
- 5d844e1b725 i386: Enable CET support in ucontext functions
- 124bcde683 x86: Add _CET_ENDBR to functions in crti.S
- 562837c002 x86: Add _CET_ENDBR to functions in dl-tlsdesc.S
- f753fa7dea x86: Support IBT and SHSTK in Intel CET [BZ #21598]
- 825b58f3fb i386-mcount.S: Add _CET_ENDBR to _mcount and __fentry__
- 7e119cd582 i386: Use _CET_NOTRACK in i686/memcmp.S
- 177824e232 i386: Use _CET_NOTRACK in memcmp-sse4.S
- 0a899af097 i386: Use _CET_NOTRACK in memcpy-ssse3-rep.S
- 7fb613361c i386: Use _CET_NOTRACK in memcpy-ssse3.S
- 77a8ae0948 i386: Use _CET_NOTRACK in memset-sse2-rep.S
- 00e7b76a8f i386: Use _CET_NOTRACK in memset-sse2.S
- 90d15dc577 i386: Use _CET_NOTRACK in strcat-sse2.S
- f1574581c7 i386: Use _CET_NOTRACK in strcpy-sse2.S
- 4031d7484a i386/sub_n.S: Add a missing _CET_ENDBR to indirect jump target

Checked on i686-linux-gnu.
* x86: Move CET infrastructure to x86_64 (Adhemerval Zanella, 2024-01-09; 53 files changed, -0/+1506)

CET is only supported for x86_64 and there is no plan to add kernel support for i386. Move the Makefile rules and files from the generic x86 folder to the x86_64 one. Checked on x86_64-linux-gnu and i686-linux-gnu.
* x32: Handle displacement overflow in PLT rewrite [BZ #31218] (H.J. Lu, 2024-01-06; 4 files changed, -2/+89)

The PLT rewrite calculated the displacement with:

```
ElfW(Addr) disp = value - branch_start - JMP32_INSN_SIZE;
```

On x32, the displacement from 0xf7fbe060 to 0x401030 was calculated as:

```
unsigned int disp = 0x401030 - 0xf7fbe060 - 5;
```

with disp == 0x8442fcb, and caused a displacement overflow. The PLT entry was changed to:

```
0xf7fbe060 <+0>:  e9 cb 2f 44 08  jmp  0x401030
0xf7fbe065 <+5>:  cc              int3
0xf7fbe066 <+6>:  cc              int3
0xf7fbe067 <+7>:  cc              int3
0xf7fbe068 <+8>:  cc              int3
0xf7fbe069 <+9>:  cc              int3
0xf7fbe06a <+10>: cc              int3
0xf7fbe06b <+11>: cc              int3
0xf7fbe06c <+12>: cc              int3
0xf7fbe06d <+13>: cc              int3
0xf7fbe06e <+14>: cc              int3
0xf7fbe06f <+15>: cc              int3
```

x32 has a 32-bit address range, but it doesn't wrap the address around at 4GB: the JMP target was changed to 0x100401030 (0xf7fbe060LL + 0x8442fcbLL + 5), which is above 4GB. Always use uint64_t to calculate the displacement. This fixes BZ #31218. (A small arithmetic demonstration follows this entry.)

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
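A small demonstration of the overflow, reproducing only the arithmetic (not glibc code), using the addresses from the commit message; JMP32_INSN_SIZE is 5 (opcode e9 plus a rel32 operand):

```c
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  uint64_t branch_start = 0xf7fbe060;
  uint64_t value = 0x401030;

  /* Truncating to 32 bits silently yields 0x8442fcb.  */
  unsigned int disp32 = (unsigned int) (value - branch_start - 5);
  /* The full 64-bit distance is 0xffffffff08442fcb, i.e. negative.  */
  uint64_t disp64 = value - branch_start - 5;

  /* A rel32 JMP is only encodable if the displacement fits in int32.  */
  int fits = (int64_t) disp64 == (int32_t) disp64;

  printf ("disp32 = %#x, disp64 = %#llx, fits in rel32: %s\n",
          disp32, (unsigned long long) disp64, fits ? "yes" : "no");
  return 0;
}
```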
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | PLT rewrite calculated displacement with ElfW(Addr) disp = value - branch_start - JMP32_INSN_SIZE; On x32, displacement from 0xf7fbe060 to 0x401030 was calculated as unsigned int disp = 0x401030 - 0xf7fbe060 - 5; with disp == 0x8442fcb and caused displacement overflow. The PLT entry was changed to: 0xf7fbe060 <+0>: e9 cb 2f 44 08 jmp 0x401030 0xf7fbe065 <+5>: cc int3 0xf7fbe066 <+6>: cc int3 0xf7fbe067 <+7>: cc int3 0xf7fbe068 <+8>: cc int3 0xf7fbe069 <+9>: cc int3 0xf7fbe06a <+10>: cc int3 0xf7fbe06b <+11>: cc int3 0xf7fbe06c <+12>: cc int3 0xf7fbe06d <+13>: cc int3 0xf7fbe06e <+14>: cc int3 0xf7fbe06f <+15>: cc int3 x32 has 32-bit address range, but it doesn't wrap address around at 4GB, JMP target was changed to 0x100401030 (0xf7fbe060LL + 0x8442fcbLL + 5), which is above 4GB. Always use uint64_t to calculate displacement. This fixes BZ #31218. Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>