path: root/sysdeps/x86_64
* x86: Update _dl_tlsdesc_dynamic to preserve caller-saved registers
  H.J. Lu, 2024-02-28 (10 files, -150/+306)

  The compiler generates the following instruction sequence for GNU2
  dynamic TLS access:

      leaq    tls_var@TLSDESC(%rip), %rax
      call    *tls_var@TLSCALL(%rax)

  or

      leal    tls_var@TLSDESC(%ebx), %eax
      call    *tls_var@TLSCALL(%eax)

  The CALL instruction is transparent to the compiler, which assumes
  that all registers, except for EFLAGS and RAX/EAX, are unchanged
  after the CALL.  When _dl_tlsdesc_dynamic is called, it calls
  __tls_get_addr on the slow path.  __tls_get_addr is a normal function
  which doesn't preserve any caller-saved registers.
  _dl_tlsdesc_dynamic saved and restored integer caller-saved
  registers, but didn't preserve any other caller-saved registers.  Add
  _dl_tlsdesc_dynamic IFUNC functions for FNSAVE, FXSAVE, XSAVE and
  XSAVEC to save and restore all caller-saved registers.  This fixes
  BZ #31372.

  Also add GLRO(dl_x86_64_runtime_resolve) with
  GLRO(dl_x86_tlsdesc_dynamic) to optimize elf_machine_runtime_setup.

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
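  For context, a minimal sketch of source that produces the TLSDESC
  sequence above (tls_var and get_tls_var are illustrative names, not
  from the patch); building with -mtls-dialect=gnu2 selects this access
  model:

      /* tls.c: compile with `gcc -O2 -fPIC -mtls-dialect=gnu2 -S tls.c`
         and the tls_var access becomes the leaq/call pair quoted in the
         commit message.  */
      __thread int tls_var;

      int
      get_tls_var (void)
      {
        return tls_var;
      }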
* x86_64: Exclude SSE, AVX and FMA4 variants in libm multiarch
  Sunil K Pandey, 2024-02-25 (60 files, -295/+896)

  When glibc is built with ISA level 3 or higher by default, the
  resulting glibc binaries won't run on SSE or FMA4 processors.
  Exclude the SSE, AVX and FMA4 variants in libm multiarch when ISA
  level 3 or higher is enabled by default.  When glibc is built with
  ISA level 2 enabled by default, only keep the SSE4.1 variant.

  Fixes BZ 31335.

  NB: The elf/tst-valgrind-smoke test fails with ISA level 4 because
  valgrind doesn't support AVX512 instructions:
  https://bugs.kde.org/show_bug.cgi?id=383010

  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* Apply the Makefile sorting fix
  H.J. Lu, 2024-02-15 (3 files, -14/+14)

  Apply the Makefile sorting fix generated by sort-makefile-lines.py.
* sysdeps/x86_64/Makefile (tests): Add the end marker
  H.J. Lu, 2024-02-15 (1 file, -2/+4)
* x86: Expand the comment on when REP STOSB is used on memset
  Adhemerval Zanella, 2024-02-13 (1 file, -1/+3)

  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* x86/cet: Fix shadow stack test scripts
  Michael Jeanson, 2024-02-12 (3 files, -3/+3)

  Some shadow stack test scripts use the '==' operator with the 'test'
  command to validate exit codes, resulting in the following error:

      sysdeps/x86_64/tst-shstk-legacy-1e.sh: 31: test: 139: unexpected operator

  The '==' operator is invalid for the 'test' command; use '-eq' like
  the previous call to 'test'.

  Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* string: Use builtins for ffs and ffsll
  Adhemerval Zanella Netto, 2024-02-01 (4 files, -83/+2)

  This allows removing a lot of arch-specific implementations.

  Checked on x86_64, aarch64, powerpc64.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
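  For illustration, a minimal sketch of the builtin-based approach (the
  wrapper name is illustrative): the compiler expands __builtin_ffsll
  inline, typically to a BSF/TZCNT sequence on x86-64, so no
  hand-written assembly version is needed.

      /* Returns one plus the index of the least significant set bit,
         or 0 if the argument is zero -- the ffsll contract.  */
      int
      my_ffsll (long long int x)
      {
        return __builtin_ffsll (x);
      }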
* x86-64: Check if mprotect works before rewriting PLT
  H.J. Lu, 2024-01-15 (1 file, -0/+25)

  Systemd execution environment configuration may prohibit changing a
  memory mapping to become executable:

      MemoryDenyWriteExecute=
          Takes a boolean argument.  If set, attempts to create memory
          mappings that are writable and executable at the same time,
          or to change existing memory mappings to become executable,
          or mapping shared memory segments as executable, are
          prohibited.

  When it is set, the systemd service stops working if PLT rewrite is
  enabled.  Check if mprotect works before rewriting the PLT.  This
  fixes BZ #31230.  It also works with SELinux when deny_execmem is on.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
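  A hedged sketch of the probe idea (not glibc's exact code; the
  function name is illustrative): map an anonymous page and test
  whether it can be made executable before committing to PLT rewrite.

      #include <stdbool.h>
      #include <sys/mman.h>
      #include <unistd.h>

      static bool
      can_make_executable (void)
      {
        size_t len = (size_t) sysconf (_SC_PAGESIZE);
        void *page = mmap (NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED)
          return false;
        /* Under MemoryDenyWriteExecute= or SELinux deny_execmem this
           mprotect call fails, so PLT rewrite must be skipped.  */
        bool ok = mprotect (page, len, PROT_READ | PROT_EXEC) == 0;
        munmap (page, len);
        return ok;
      }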
* x86_64: Optimize ffsll function code size
  Sunil K Pandey, 2024-01-13 (1 file, -5/+5)

  The ffsll function randomly regresses by ~20%, depending on how the
  code gets aligned in memory.  The ffsll function code size is 17
  bytes.  Since the default function alignment is 16 bytes, the code
  can start at a 16-, 32-, 48- or 64-byte aligned address.  When it
  starts at a 16-, 32- or 64-byte aligned address, the entire code fits
  in a single 64-byte cache line.  When it starts at a 48-byte aligned
  address, it splits across two cache lines, hence the random
  regression.

  Reducing the ffsll function size from 17 to 12 bytes ensures that it
  always fits in a single 64-byte cache line.

  This patch fixes the random performance regression of the ffsll
  function.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* x86-64/cet: Make CET feature check specific to Linux/x86
  H.J. Lu, 2024-01-11 (1 file, -4/+5)

  CET feature bits in the TCB, which are Linux specific, are used to
  check if CET features are active.  Move the CET feature check to the
  Linux/x86 directory.

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* i386: Remove CET support bits
  H.J. Lu, 2024-01-10 (6 files, -1/+165)

  1. Remove _dl_runtime_resolve_shstk and _dl_runtime_profile_shstk.
  2. Move CET offsets from x86 cpu-features-offsets.sym to x86-64
     features-offsets.sym.
  3. Rename x86 cet-control.h to x86-64 feature-control.h since it is
     only for x86-64 and is also used for PLT rewrite.
  4. Add x86-64 ldsodefs.h to include feature-control.h.
  5. Change TUNABLE_CALLBACK (set_plt_rewrite) to x86-64 only.
  6. Move x86 dl-procruntime.c to x86-64.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* x86-64/cet: Move check-cet.awk to x86_64
  H.J. Lu, 2024-01-10 (2 files, -1/+54)

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* x86-64/cet: Move dl-cet.[ch] to x86_64 directories
  H.J. Lu, 2024-01-10 (1 file, -0/+364)

  Since CET is only enabled for x86-64, move dl-cet.[ch] to the x86_64
  directories.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* x86: Move x86-64 shadow stack startup codes
  H.J. Lu, 2024-01-10 (1 file, -0/+74)

  Move sysdeps/x86/libc-start.h to sysdeps/x86_64/libc-start.h and use
  sysdeps/generic/libc-start.h for i386.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* i386: Remove CET support
  Adhemerval Zanella, 2024-01-09 (1 file, -0/+42)

  CET is only supported for x86_64; this patch reverts:

  - faaee1f07ed x86: Support shadow stack pointer in setjmp/longjmp.
  - be9ccd27c09 i386: Add _CET_ENDBR to indirect jump targets in
    add_n.S/sub_n.S
  - c02695d7764 x86/CET: Update vfork to prevent child return
  - 5d844e1b725 i386: Enable CET support in ucontext functions
  - 124bcde683 x86: Add _CET_ENDBR to functions in crti.S
  - 562837c002 x86: Add _CET_ENDBR to functions in dl-tlsdesc.S
  - f753fa7dea x86: Support IBT and SHSTK in Intel CET [BZ #21598]
  - 825b58f3fb i386-mcount.S: Add _CET_ENDBR to _mcount and __fentry__
  - 7e119cd582 i386: Use _CET_NOTRACK in i686/memcmp.S
  - 177824e232 i386: Use _CET_NOTRACK in memcmp-sse4.S
  - 0a899af097 i386: Use _CET_NOTRACK in memcpy-ssse3-rep.S
  - 7fb613361c i386: Use _CET_NOTRACK in memcpy-ssse3.S
  - 77a8ae0948 i386: Use _CET_NOTRACK in memset-sse2-rep.S
  - 00e7b76a8f i386: Use _CET_NOTRACK in memset-sse2.S
  - 90d15dc577 i386: Use _CET_NOTRACK in strcat-sse2.S
  - f1574581c7 i386: Use _CET_NOTRACK in strcpy-sse2.S
  - 4031d7484a i386/sub_n.S: Add a missing _CET_ENDBR to indirect jump
    target

  Checked on i686-linux-gnu.
* x86: Move CET infrastructure to x86_64
  Adhemerval Zanella, 2024-01-09 (53 files, -0/+1506)

  CET is only supported for x86_64 and there is no plan to add kernel
  support for i386.  Move the Makefile rules and files from the generic
  x86 folder to the x86_64 one.

  Checked on x86_64-linux-gnu and i686-linux-gnu.
* x32: Handle displacement overflow in PLT rewrite [BZ #31218]
  H.J. Lu, 2024-01-06 (4 files, -2/+89)

  PLT rewrite calculated the displacement with

      ElfW(Addr) disp = value - branch_start - JMP32_INSN_SIZE;

  On x32, the displacement from 0xf7fbe060 to 0x401030 was calculated
  as

      unsigned int disp = 0x401030 - 0xf7fbe060 - 5;

  with disp == 0x8442fcb, causing a displacement overflow.  The PLT
  entry was changed to:

      0xf7fbe060 <+0>:     e9 cb 2f 44 08  jmp    0x401030
      0xf7fbe065 <+5>:     cc              int3
      0xf7fbe066 <+6>:     cc              int3
      0xf7fbe067 <+7>:     cc              int3
      0xf7fbe068 <+8>:     cc              int3
      0xf7fbe069 <+9>:     cc              int3
      0xf7fbe06a <+10>:    cc              int3
      0xf7fbe06b <+11>:    cc              int3
      0xf7fbe06c <+12>:    cc              int3
      0xf7fbe06d <+13>:    cc              int3
      0xf7fbe06e <+14>:    cc              int3
      0xf7fbe06f <+15>:    cc              int3

  x32 has a 32-bit address range, but it doesn't wrap addresses around
  at 4GB: the JMP target became 0x100401030 (0xf7fbe060LL + 0x8442fcbLL
  + 5), which is above 4GB.  Always use uint64_t to calculate the
  displacement.  This fixes BZ #31218.

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
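  A standalone illustration of the overflow, using the addresses from
  the commit message (a worked example, not the patch itself):
  computing in a 32-bit type silently wraps, while uint64_t exposes
  that the target is out of reach of a 32-bit jmp displacement.

      #include <inttypes.h>
      #include <stdio.h>

      int
      main (void)
      {
        uint64_t branch_start = 0xf7fbe060;
        uint64_t value = 0x401030;

        /* Truncated to 32 bits: 0x8442fcb.  */
        unsigned int disp32 = value - branch_start - 5;
        /* Full width: 0xffffffff08442fcb, far outside int32 range.  */
        uint64_t disp64 = value - branch_start - 5;

        printf ("disp32 = 0x%x -> jmp lands at 0x%" PRIx64 "\n",
                disp32, branch_start + 5 + disp32);
        printf ("disp64 = 0x%" PRIx64 "\n", disp64);
        return 0;
      }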
* elf: Add ELF_DYNAMIC_AFTER_RELOC to rewrite PLT
  H.J. Lu, 2024-01-05 (8 files, -1/+387)

  Add ELF_DYNAMIC_AFTER_RELOC to allow target-specific processing after
  relocation.

  For x86-64, add

      #define DT_X86_64_PLT     (DT_LOPROC + 0)
      #define DT_X86_64_PLTSZ   (DT_LOPROC + 1)
      #define DT_X86_64_PLTENT  (DT_LOPROC + 3)

  1. DT_X86_64_PLT: The address of the procedure linkage table.
  2. DT_X86_64_PLTSZ: The total size, in bytes, of the procedure
     linkage table.
  3. DT_X86_64_PLTENT: The size, in bytes, of a procedure linkage table
     entry.

  With the r_addend field of the R_X86_64_JUMP_SLOT relocation set to
  the memory offset of the indirect branch instruction, define
  ELF_DYNAMIC_AFTER_RELOC for x86-64 to rewrite the PLT section with a
  direct branch after relocation when lazy binding is disabled.

  PLT rewrite is disabled by default since SELinux may disallow
  modifying code pages and ld.so can't detect it in all cases.  Use

      $ export GLIBC_TUNABLES=glibc.cpu.plt_rewrite=1

  to enable PLT rewrite with a 32-bit direct jump at run-time, or

      $ export GLIBC_TUNABLES=glibc.cpu.plt_rewrite=2

  to enable PLT rewrite with a 32-bit direct jump and, on APX
  processors, a 64-bit absolute jump at run-time.

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
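  A hedged sketch (not ld.so's actual code; show_plt_info is an
  illustrative name) of reading the new dynamic entries from an
  object's dynamic array, using the DT_X86_64_* values defined above:

      #include <elf.h>
      #include <stdio.h>

      #ifndef DT_X86_64_PLT
      # define DT_X86_64_PLT    (DT_LOPROC + 0)
      # define DT_X86_64_PLTSZ  (DT_LOPROC + 1)
      # define DT_X86_64_PLTENT (DT_LOPROC + 3)
      #endif

      static void
      show_plt_info (const Elf64_Dyn *dyn)
      {
        unsigned long plt = 0, pltsz = 0, pltent = 0;
        for (; dyn->d_tag != DT_NULL; ++dyn)
          switch (dyn->d_tag)
            {
            case DT_X86_64_PLT:    plt = dyn->d_un.d_ptr;    break;
            case DT_X86_64_PLTSZ:  pltsz = dyn->d_un.d_val;  break;
            case DT_X86_64_PLTENT: pltent = dyn->d_un.d_val; break;
            }
        printf ("PLT at 0x%lx, %lu bytes total, %lu bytes per entry\n",
                plt, pltsz, pltent);
      }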
* x86-64/cet: Check the restore token in longjmp
  H.J. Lu, 2024-01-04 (1 file, -6/+41)

  setcontext and swapcontext put a restore token on the old shadow
  stack, which is used to restore the target shadow stack when
  switching user contexts.  When longjmp'ing from a user context, the
  target shadow stack can be different from the current shadow stack,
  and INCSSP can't be used to restore the shadow stack pointer to the
  target shadow stack.

  Update longjmp to search for a restore token.  If found, use the
  token to restore the shadow stack pointer before using INCSSP to pop
  the shadow stack.  Stop the token search and use INCSSP if the shadow
  stack entry value is the same as the current shadow stack pointer.

  It is a user error if there is a shadow stack switch without leaving
  a restore token on the old shadow stack.

  The only difference between __longjmp.S and __longjmp_chk.S is that
  __longjmp_chk.S has a check for invalid longjmp usages.  Merge
  __longjmp.S and __longjmp_chk.S by adding the CHECK_INVALID_LONGJMP
  macro.

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* i386: Ignore --enable-cet
  H.J. Lu, 2024-01-04 (2 files, -0/+113)

  Since shadow stack is only supported for x86-64, ignore --enable-cet
  for i386.  Always set $(enable-cet) for i386 to "no" to support

      ifneq ($(enable-cet),no)

  in x86 Makefiles.  We can't use ifeq ($(enable-cet),yes) since
  $(enable-cet) can be "yes", "no" or "permissive".

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Update copyright dates with scripts/update-copyrights
  Paul Eggert, 2024-01-01 (1180 files, -1180/+1180)
* x86/cet: Enable shadow stack during startup
  H.J. Lu, 2024-01-01 (1 file, -2/+10)

  Previously, CET was enabled by the kernel before passing control to
  user space, and the startup code had to disable CET if applications
  or shared libraries weren't CET enabled.  Since the current kernel
  only supports shadow stack and won't enable shadow stack before
  passing control to user space, we need to enable shadow stack during
  startup if the application and all shared libraries are shadow stack
  enabled.  There is no need to disable shadow stack at startup.
  Shadow stack can only be enabled in a function which will never
  return; otherwise, shadow stack will underflow at the function
  return.  See the sketch after this entry.

  1. GL(dl_x86_feature_1) is set to the CET features which are
     supported by the processor and are not disabled by the tunable.
     Only non-zero features in GL(dl_x86_feature_1) should be enabled.
     After enabling shadow stack with ARCH_SHSTK_ENABLE,
     ARCH_SHSTK_STATUS is used to check if shadow stack is really
     enabled.
  2. Use ARCH_SHSTK_ENABLE in RTLD_START in a dynamic executable.  It
     is safe since RTLD_START never returns.
  3. Call arch_prctl (ARCH_SHSTK_ENABLE) from ARCH_SETUP_TLS in a
     static executable.  Since the start function using ARCH_SETUP_TLS
     never returns, it is safe to enable shadow stack in
     ARCH_SETUP_TLS.
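  A hedged sketch of the arch_prctl interface described above (constant
  values as in Linux 6.6 asm/prctl.h; enable_shadow_stack is an
  illustrative name).  As the commit explains, this must only be done
  from a function that never returns, or the shadow stack underflows:

      #include <sys/syscall.h>
      #include <unistd.h>

      #define ARCH_SHSTK_ENABLE  0x5001
      #define ARCH_SHSTK_STATUS  0x5005
      #define ARCH_SHSTK_SHSTK   (1UL << 0)

      static int
      enable_shadow_stack (void)
      {
        if (syscall (SYS_arch_prctl, ARCH_SHSTK_ENABLE,
                     ARCH_SHSTK_SHSTK) != 0)
          return -1;
        /* ARCH_SHSTK_STATUS reports back the active feature bits, to
           check whether shadow stack is really enabled.  */
        unsigned long features = 0;
        if (syscall (SYS_arch_prctl, ARCH_SHSTK_STATUS, &features) != 0)
          return -1;
        return (features & ARCH_SHSTK_SHSTK) != 0 ? 0 : -1;
      }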
* x86/cet: Sync with Linux kernel 6.6 shadow stack interface
  H.J. Lu, 2024-01-01 (1 file, -1/+1)

  Sync with the Linux kernel 6.6 shadow stack interface.  Since only
  x86-64 is supported, the i386 shadow stack code is unchanged and CET
  shouldn't be enabled for i386.

  1. When the shadow stack base in the TCB is unset, the default shadow
     stack is in use.  Use the current shadow stack pointer as the
     marker for the default shadow stack.  It is used to identify if
     the current shadow stack is the same as the target shadow stack
     when switching ucontexts.  If yes, INCSSP will be used to unwind
     the shadow stack.  Otherwise, the shadow stack restore token will
     be used.
  2. Allocate the shadow stack with the map_shadow_stack syscall (see
     the sketch after this entry).  Since there is no function to
     explicitly release a ucontext, there is no place to release the
     shadow stack allocated by map_shadow_stack in the ucontext
     functions.  Such shadow stacks will be leaked.
  3. Rename the arch_prctl CET commands to ARCH_SHSTK_XXX.
  4. Rewrite the CET control functions with the current kernel shadow
     stack interface.

  Since CET is no longer enabled by the kernel, a separate patch will
  enable shadow stack during startup.
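  A hedged sketch of allocating a shadow stack with the Linux 6.6
  map_shadow_stack syscall (syscall number 453 on x86-64 and the
  SHADOW_STACK_SET_TOKEN flag are from the kernel ABI;
  alloc_shadow_stack is an illustrative name):

      #include <stddef.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      #ifndef __NR_map_shadow_stack
      # define __NR_map_shadow_stack 453
      #endif
      #define SHADOW_STACK_SET_TOKEN (1U << 0)

      static void *
      alloc_shadow_stack (size_t size)
      {
        /* Let the kernel pick the address; the restore token written
           near the top enables switching to this stack.  */
        long ssp = syscall (__NR_map_shadow_stack, 0, size,
                            SHADOW_STACK_SET_TOKEN);
        return ssp == -1 ? NULL : (void *) ssp;
      }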
* x86-64: Fix the tcb field load for x32 [BZ #31185]
  H.J. Lu, 2023-12-22 (1 file, -2/+2)

  _dl_tlsdesc_undefweak and _dl_tlsdesc_dynamic access the thread
  pointer via the tcb field in the TCB:

      _dl_tlsdesc_undefweak:
              _CET_ENDBR
              movq    8(%rax), %rax
              subq    %fs:0, %rax
              ret

      _dl_tlsdesc_dynamic:
              ...
              subq    %fs:0, %rax
              movq    -8(%rsp), %rdi
              ret

  Since the tcb field in the TCB is a pointer, %fs:0 is a 32-bit
  location on x32, not 64-bit.  It should use "sub %fs:0, %RAX_LP"
  instead.  Since _dl_tlsdesc_undefweak returns ptrdiff_t and
  _dl_make_tlsdesc_dynamic returns void *, RAX_LP is appropriate here
  for both x32 and x86-64.  This fixes BZ #31185.
* x86-64: Fix the dtv field load for x32 [BZ #31184]
  H.J. Lu, 2023-12-22 (1 file, -1/+1)

  On x32, I got

      FAIL: elf/tst-tlsgap

      $ gdb elf/tst-tlsgap
      ...
      open tst-tlsgap-mod1.so

      Thread 2 "tst-tlsgap" received signal SIGSEGV, Segmentation fault.
      [Switching to LWP 2268754]
      _dl_tlsdesc_dynamic () at ../sysdeps/x86_64/dl-tlsdesc.S:108
      108             movq    (%rsi), %rax
      (gdb) p/x $rsi
      $4 = 0xf7dbf9005655fb18
      (gdb)

  This is caused by

      _dl_tlsdesc_dynamic:
              _CET_ENDBR
              /* Preserve call-clobbered registers that we modify.
                 We need two scratch regs anyway.  */
              movq    %rsi, -16(%rsp)
              movq    %fs:DTV_OFFSET, %rsi

  Since the dtv field in the TCB is a pointer, %fs:DTV_OFFSET is a
  32-bit location on x32, not 64-bit.  Load the dtv field to RSI_LP
  instead of rsi.  This fixes BZ #31184.
* x86: Do not raise floating-point exception traps on fesetexceptflag (BZ 30990)
  Bruno Haible, 2023-12-19 (1 file, -10/+14)

  According to ISO C23 (7.6.4.4), fesetexcept is supposed to set
  floating-point exception flags without raising a trap (unlike
  feraiseexcept, which is supposed to raise a trap if feenableexcept
  was called with the appropriate argument).

  The flags can be set in the 387 unit or in the SSE unit.  When we
  need to clear a flag, we need to do so in both units, due to the way
  fetestexcept is implemented.  When we need to set a flag, it is
  sufficient to do it in the SSE unit, because that is guaranteed to
  not trap.  However, on i386 CPUs that have only a 387 unit, set the
  flags in the 387, as long as this cannot trap.

  Co-authored-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
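  A small usage sketch of the C23 semantics described above:
  fesetexcept only sets the flag, so it must not trap even when the
  exception is unmasked with feenableexcept (a glibc extension).

      #define _GNU_SOURCE
      #include <fenv.h>
      #include <stdio.h>

      int
      main (void)
      {
        feenableexcept (FE_OVERFLOW); /* unmask: feraiseexcept would trap */
        fesetexcept (FE_OVERFLOW);    /* sets the flag, no SIGFPE */
        printf ("overflow flag set: %d\n",
                fetestexcept (FE_OVERFLOW) != 0);
        return 0;
      }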
* x86: Unify 'strlen-evex' and 'strlen-evex512' implementations
  Matthew Sterrett, 2023-12-18 (5 files, -472/+439)

  This commit uses a common implementation, strlen-evex-base.S, for
  both strlen-evex and strlen-evex512.  The motivation is to reduce the
  number of implementations to maintain.  This incidentally gives a
  small performance improvement.

  All tests pass on x86.

  Benchmarks were taken on SKX:
  https://www.intel.com/content/www/us/en/products/sku/123613/intel-core-i97900x-xseries-processor-13-75m-cache-up-to-4-30-ghz/specifications.html

  Geometric mean for strlen-evex512 over all benchmarks (N=10) was
  (new/old) 0.939.  Geometric mean for wcslen-evex512 over all
  benchmarks (N=10) was (new/old) 0.965.

  Code size changes:

      strlen-evex512.S : +24 bytes
      wcslen-evex512.S : +54 bytes

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* x86: Only align destination to 1x VEC_SIZE in memset 4x loop
  Noah Goldstein, 2023-11-28 (1 file, -1/+1)

  The current code aligns to 2x VEC_SIZE.  Aligning to 2x has no effect
  on performance other than potentially resulting in an additional
  iteration of the loop.  1x maintains aligned stores (the only reason
  to align in this case) and doesn't incur any unnecessary loop
  iterations.

  Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>
* elf: Remove LD_PROFILE for static binaries
  Adhemerval Zanella, 2023-11-21 (2 files, -30/+36)

  _dl_non_dynamic_init does not parse LD_PROFILE, so profiling is not
  enabled for dlopen'ed objects.  Since dlopen is deprecated for static
  objects, it is better to remove the support.  It also allows trimming
  profile support out of libc.a.

  Checked on x86_64-linux-gnu.

  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* x86: Use dl-symbol-redir-ifunc.h on cpu-tunables
  Adhemerval Zanella, 2023-11-21 (2 files, -28/+15)

  The dl-symbol-redir-ifunc.h header redirects compiler-generated
  libcalls to arch-specific memory implementations, to avoid ifunc
  calls where they are not yet possible.  The
  memcmp-isa-default-impl.h header aims to fix the same issue by
  calling the specific memset implementation directly.

  Using the memcmp symbol directly allows the compiler to inline the
  memset calls (especially because _dl_tunable_set_hwcaps uses constant
  values), generating better code.

  Checked on x86_64-linux-gnu.

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* elf: Ignore GLIBC_TUNABLES for setuid/setgid binaries
  Adhemerval Zanella, 2023-11-21 (1 file, -1/+0)

  The tunable privilege levels were a retrofit to try and keep the
  malloc tunable environment variables' behavior unchanged across
  security boundaries.  However, CVE-2023-4911 shows how tricky tunable
  parsing can be in a security-sensitive environment.  Not only the
  parsing: the malloc tunables essentially change some semantics on
  setuid/setgid processes.  Although it is not a direct security issue,
  allowing users to change setuid/setgid semantics is not a good
  security practice, and it requires extra code and analysis to check
  whether each tunable is safe to use across all security boundaries.

  It also means that security opt-in features, like aarch64 MTE, would
  need to be explicitly enabled by an administrator with a wrapper
  script or with a possible future system-wide tunable setting.

  Co-authored-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
  Reviewed-by: DJ Delorie <dj@redhat.com>
* x86: Fix unchecked AVX512-VBMI2 usage in strrchr-evex-base.S
  Noah Goldstein, 2023-11-15 (1 file, -24/+51)

  strrchr-evex-base used `vpcompress{b|d}` in the page-cross logic but
  was missing the CPU_FEATURE checks for VBMI2 in the
  ifunc/ifunc-impl-list.

  The fix is either to add those checks or change the logic to not use
  `vpcompress{b|d}`.  Choosing the latter here so that the strrchr-evex
  implementation is usable on SKX.

  The new implementation is a bit slower, but this is in a cold path so
  it's probably okay.
* x86: Prepare `strrchr-evex` and `strrchr-evex512` for AVX10
  Noah Goldstein, 2023-10-06 (3 files, -569/+293)

  This commit refactors `strrchr-evex` and `strrchr-evex512` to use a
  common implementation: `strrchr-evex-base.S`.

  The motivation is that `strrchr-evex` needed to be refactored to not
  use 64-bit masked registers in preparation for AVX10.  Once vec-width
  masked register combining was removed, the EVEX and EVEX512
  implementations could easily be implemented in the same file without
  any major overhead.

  The net result is performance improvements (measured on TGL) for both
  `strrchr-evex` and `strrchr-evex512`.  Note there are some
  regressions in the test suite, and it may be that many of the cases
  contributing to the total geomean of improvement/regression across
  bench-strrchr are cold.  The point of the performance measurement is
  to show there are no major regressions; the primary motivation is
  preparation for AVX10.

  Benchmarks were taken on TGL:
  https://www.intel.com/content/www/us/en/products/sku/213799/intel-core-i711850h-processor-24m-cache-up-to-4-80-ghz/specifications.html

  EVEX geometric_mean(N=5) of all benchmarks New / Original: 0.74
  EVEX512 geometric_mean(N=5) of all benchmarks New / Original: 0.87

  Full check passes on x86.
* hurd: Drop REG_GSFS and REG_ESDS from x86_64's ucontext
  Samuel Thibault, 2023-09-28 (1 file, -5/+1)

  These are useless on x86_64, and __NGREG was actually wrong with
  them.
* elf: Fix slow tls access after dlopen [BZ #19924]
  Szabolcs Nagy, 2023-09-01 (1 file, -2/+2)

  In short: __tls_get_addr checks the global generation counter, and if
  the current dtv is older than that, _dl_update_slotinfo updates the
  dtv up to the generation of the accessed module.  So if the global
  generation is newer than the generation of the module, then
  __tls_get_addr keeps hitting the slow dtv update path.  The dtv
  update path includes a number of checks to see if any update is
  needed, and this already causes a measurable tls access slowdown
  after dlopen.

  It may be possible to detect an up-to-date dtv faster.  But if there
  are many modules loaded (> TLS_SLOTINFO_SURPLUS) then this requires
  at least walking the slotinfo list.

  This patch tries to update the dtv to the global generation instead,
  so after a dlopen the tls access slow path is only hit once.  The
  modules with a larger generation than the accessed one were not
  necessarily synchronized before, so additional synchronization is
  needed.

  This patch uses acquire/release synchronization when accessing the
  generation counter.

  Note: in the x86_64 version of dl-tls.c the generation is only loaded
  once, since a relaxed mo load is not faster than an acquire mo load.

  I have not benchmarked this.  Tested by Adhemerval Zanella on
  aarch64, powerpc, sparc, x86, who reported that it fixes the
  performance issue of bug 19924.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
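  A conceptual sketch of the fast-path check described above
  (simplified; the names are illustrative, not glibc's internal ones).
  The acquire load pairs with a release store by the thread that bumps
  the counter in dlopen, making the published slotinfo visible:

      #include <stdatomic.h>
      #include <stddef.h>

      static _Atomic size_t tls_generation; /* bumped on dlopen/dlclose */

      struct dtv_header { size_t generation; };

      static int
      dtv_is_stale (const struct dtv_header *dtv)
      {
        size_t gen = atomic_load_explicit (&tls_generation,
                                           memory_order_acquire);
        /* With this patch the slow path then updates the dtv all the
           way to GEN, not just to the accessed module's generation, so
           it runs only once per dlopen.  */
        return dtv->generation != gen;
      }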
* x86_64: Add log1p with FMA
  H.J. Lu, 2023-08-21 (3 files, -0/+35)

  On Skylake, it changes log1p bench performance by:

              Before    After     Improvement
      max     63.349    58.347    8%
      min     4.448     5.651     -30%
      mean    12.0674   10.336    14%

  The minimum code path is

      if (hx < 0x3FDA827A)                          /* x < 0.41422  */
        {
          if (__glibc_unlikely (ax >= 0x3ff00000))  /* x <= -1.0 */
            {
              ...
            }
          if (__glibc_unlikely (ax < 0x3e200000))   /* |x| < 2**-29 */
            {
              math_force_eval (two54 + x);          /* raise inexact */
              if (ax < 0x3c900000)                  /* |x| < 2**-54 */
                {
                  ...
                }
              else
                return x - x * x * 0.5;

  FMA and non-FMA code sequences look similar.  The non-FMA version is
  slightly faster.

  Since log1p is called by asinh and atanh, it improves asinh
  performance by:

              Before    After     Improvement
      max     75.645    63.135    16%
      min     10.074    10.071    0%
      mean    15.9483   14.9089   6%

  and improves atanh performance by:

              Before    After     Improvement
      max     91.768    75.081    18%
      min     15.548    13.883    10%
      mean    18.3713   16.8011   8%
* x86_64: Add expm1 with FMA
  H.J. Lu, 2023-08-14 (3 files, -0/+48)

  On Skylake, it improves expm1 bench performance by:

              Before    After     Improvement
      max     70.204    68.054    3%
      min     20.709    16.2      22%
      mean    22.1221   16.7367   24%

  NB: Add

      extern long double __expm1l (long double);
      extern long double __expm1f128 (long double);

  for __typeof (__expm1l) and __typeof (__expm1f128) when __expm1 is
  defined, since __expm1 may be expanded in their declarations, which
  causes a build failure.
* x86_64: Add log2 with FMA
  H.J. Lu, 2023-08-11 (3 files, -0/+48)

  On Skylake, it improves log2 bench performance by:

              Before    After    Improvement
      max     208.779   63.827   69%
      min     9.977     6.55     34%
      mean    10.366    6.8191   34%
* x86_64: Sort fpu/multiarch/Makefile
  H.J. Lu, 2023-08-10 (1 file, -20/+74)

  Sort Makefile variables using scripts/sort-makefile-lines.py.  No
  code generation changes observed in libm.  No regressions on x86_64.
* x86_64: Fix build with --disable-multiarch (BZ 30721)
  Adhemerval Zanella, 2023-08-10 (3 files, -1/+5)

  With multiarch disabled, the default memmove implementation provides
  the fortify routines for memcpy, mempcpy, and memmove.  However, it
  does not provide the internal hidden definitions used when building
  with fortify enabled.  memset has a similar issue.

  Checked on x86_64-linux-gnu building with different options: default
  and --disable-multi-arch plus default, --disable-default-pie,
  --enable-fortify-source={2,3}, and --enable-fortify-source={2,3} with
  --disable-default-pie.

  Tested-by: Andreas K. Huettel <dilfridge@gentoo.org>
  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* Update x86_64 libm-test-ulps (x32 ABI)
  Andreas K. Hüttel, 2023-07-19 (1 file, -13/+14)

  Based on feedback by Mike Gilbert <floppym@gentoo.org>: on
  Linux-6.1.38-dist x86_64, AMD Phenom(tm) II X6 1055T Processor, built
  with -march=amdfam10, failures occur for the x32 ABI.

  Signed-off-by: Andreas K. Hüttel <dilfridge@gentoo.org>
* configure: Use autoconf 2.71
  Siddhesh Poyarekar, 2023-07-17 (2 files, -21/+27)

  Bump the autoconf requirement to 2.71 to allow regenerating configure
  on more recent distributions.  autoconf 2.71 has been in Fedora since
  F36 and is the current version in Debian stable (bookworm).  It
  appears to be current in Gentoo as well.

  All sysdeps configure and preconfigure scripts have also been
  regenerated; all changes are trivial transformations that do not
  affect functionality.

  Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* wchar: Avoid PLT entries with _FORTIFY_SOURCE
  Frédéric Bérat, 2023-07-05 (1 file, -0/+4)

  The change is meant to avoid unwanted PLT entries for the wmemset and
  wcrtomb routines when _FORTIFY_SOURCE is set.  On top of that, ensure
  that the *_chk routines have their hidden builtin definitions
  available.

  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* string: Ensure *_chk routines have their hidden builtin definition available
  Frédéric Bérat, 2023-07-05 (8 files, -0/+20)

  If libc_hidden_builtin_{def,proto} isn't properly set for *_chk
  routines, there are unwanted PLT entries in libc.so.

  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* ld.so: Always use MAP_COPY to map the first segment [BZ #30452]
  H.J. Lu, 2023-06-30 (3 files, -0/+9)

  The first segment in a shared library may be read-only, not
  executable.  To support LD_PREFER_MAP_32BIT_EXEC on such shared
  libraries, we also check MAP_DENYWRITE to decide if MAP_32BIT should
  be passed to mmap.  Normally the first segment is mapped with
  MAP_COPY, which is defined as (MAP_PRIVATE | MAP_DENYWRITE).  But if
  the segment alignment is greater than the page size, MAP_COPY wasn't
  used for the larger mapping allocated to ensure that the segment can
  be properly aligned.  Map the first segment with MAP_COPY in this
  case to fix BZ #30452.
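  A hedged illustration of the mapping described above (MAP_COPY is
  ld.so-internal, with the definition quoted in the commit message;
  map_first_segment is an illustrative name, and the PROT_READ
  protection is a simplification):

      #include <sys/mman.h>

      #ifndef MAP_COPY
      # define MAP_COPY (MAP_PRIVATE | MAP_DENYWRITE)
      #endif

      static void *
      map_first_segment (int fd, size_t maplen, off_t offset)
      {
        /* Mapping with MAP_COPY keeps MAP_DENYWRITE set even when extra
           space is reserved for over-aligned segments, so the
           LD_PREFER_MAP_32BIT_EXEC check mentioned above still works.  */
        return mmap (NULL, maplen, PROT_READ, MAP_COPY, fd, offset);
      }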
* x86: Make dl-cache.h and readelflib.c not Linux-specific
  Sergey Bugaev, 2023-06-26 (1 file, -0/+51)

  These files could be useful to any port that wants to use
  ld.so.cache.

  Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* sysdeps/{i386, x86_64}/mempcpy_chk.S: fix linknamespace for __mempcpy_chk
  Frederic Berat, 2023-06-22 (1 file, -1/+1)

  On i386 and x86_64, for libc.a specifically, __mempcpy_chk calls
  mempcpy, which leads POSIX routines to call the non-POSIX mempcpy
  indirectly.  This causes the linknamespace test to fail when glibc is
  built with __FORTIFY_SOURCE=3.

  Since calling mempcpy doesn't bring any benefit for libc.a, directly
  call __mempcpy instead.

  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
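  A schematic C rendering of the fix (the real routine is assembly;
  this roughly mirrors the generic C fallback's logic): the fortified
  entry point reaches the implementation through the internal
  __mempcpy alias, so libc.a's POSIX routines never pull in the
  non-POSIX public mempcpy symbol.

      #include <stddef.h>

      extern void *__mempcpy (void *dest, const void *src, size_t n);
      extern void __chk_fail (void) __attribute__ ((noreturn));

      void *
      __mempcpy_chk (void *dest, const void *src, size_t n,
                     size_t destlen)
      {
        if (destlen < n)
          __chk_fail ();
        return __mempcpy (dest, src, n);   /* not mempcpy */
      }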
* x86-64: Use YMM registers in memcmpeq-evex.S
  H.J. Lu, 2023-06-01 (1 file, -1/+1)

  Since an assembly source file with the -evex suffix should use YMM
  registers, not ZMM registers, include x86-evex256-vecs.h by default
  to use YMM registers in memcmpeq-evex.S.

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* Fix misspellings in sysdeps/x86_64 -- BZ 25337
  Paul Pluzhnikov, 2023-05-23 (37 files, -105/+105)

  Applying this commit results in a bit-identical rebuild of libc.so.6,
  math/libm.so.6, elf/ld-linux-x86-64.so.2 and mathvec/libmvec.so.1.

  Reviewed-by: Florian Weimer <fweimer@redhat.com>
* Fix misspellings in sysdeps/x86_64/fpu/multiarch -- BZ 25337
  Paul Pluzhnikov, 2023-05-23 (112 files, -169/+169)

  Applying this commit results in a bit-identical rebuild of
  mathvec/libmvec.so.1 (which is the only binary that gets rebuilt).

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>