path: root/sysdeps
Commit message | Author | Age | Files | Lines
* Always define __USE_TIME_BITS64 when 64 bit time_t is used | Adhemerval Zanella | 2024-04-02 | 44 | -110/+96
  It was raised on libc-help [1] that some Linux kernel interfaces expect the libc to define __USE_TIME_BITS64 to indicate the time_t size for the kABI. Unlike what the initial y2038 design document [2] specifies, __USE_TIME_BITS64 is only defined for ABIs that support more than one time_t size (by defining _TIME_BITS for each module). The 64-bit time_t redirects are now enabled using a different internal define (__USE_TIME64_REDIRECTS). There is no expected change in semantics or code generation.

  Checked on x86_64-linux-gnu, i686-linux-gnu, aarch64-linux-gnu, and arm-linux-gnueabi.

  [1] https://sourceware.org/pipermail/libc-help/2024-January/006557.html
  [2] https://sourceware.org/glibc/wiki/Y2038ProofnessDesign

  Reviewed-by: DJ Delorie <dj@redhat.com>
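  For context: user code selects 64-bit time_t on a 32-bit ABI with _TIME_BITS=64 (together with _FILE_OFFSET_BITS=64), and __USE_TIME_BITS64 is the internal macro that <features.h> then defines. A minimal, illustrative check (not part of the patch):

    /* Build with: gcc -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 ... on a 32-bit ABI.
       Illustrative only; __USE_TIME_BITS64 is set by <features.h>, applications
       should never define it themselves.  */
    #include <stdio.h>
    #include <time.h>

    int
    main (void)
    {
    #ifdef __USE_TIME_BITS64
      puts ("64-bit time_t redirects are in effect");
    #else
      puts ("default time_t width for this ABI");
    #endif
      printf ("sizeof (time_t) = %zu\n", sizeof (time_t));
      return 0;
    }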
* x86_64: Remove avx512 strstr implementation | Adhemerval Zanella | 2024-03-27 | 4 | -248/+4
  As indicated in a recent thread, this is a simple brute-force algorithm that checks the whole needle at a matching character pair (and does so 1 byte at a time after the first 64 bytes of a needle). It also never skips ahead, and thus can match at every haystack position after trying to match all of the needle, which the generic implementation avoids.

  As indicated by Wilco, a 4x larger needle and 16x larger haystack gives a clear 65x slowdown for both basic_strstr and __strstr_avx512:

    "ifuncs": ["basic_strstr", "twoway_strstr", "__strstr_avx512", "__strstr_sse2_unaligned", "__strstr_generic"],

    {
      "len_haystack": 65536,
      "len_needle": 1024,
      "align_haystack": 0,
      "align_needle": 0,
      "fail": 1,
      "desc": "Difficult bruteforce needle",
      "timings": [4.0948e+07, 15094.5, 3.20818e+07, 108558, 10839.2]
    },
    {
      "len_haystack": 1048576,
      "len_needle": 4096,
      "align_haystack": 0,
      "align_needle": 0,
      "fail": 1,
      "desc": "Difficult bruteforce needle",
      "timings": [2.69767e+09, 100797, 2.08535e+09, 495706, 82666.9]
    }

  PS: I don't have an AVX512-capable machine to verify these issues, but skimming through the code it does seem to follow what Wilco has described.

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* signal: Avoid system signal disposition to interfere with tests | Adhemerval Zanella | 2024-03-27 | 1 | -0/+3
  Both tst-sigset2 and tst-signal1 expect that the SIGINT disposition is set to SIG_DFL.
* RISC-V: Fix the static-PIE non-relocated object check | Palmer Dabbelt | 2024-03-25 | 1 | -1/+1
  The value of l_scope is only valid post relocation, so this original check was triggering undefined behavior. Instead just directly check to see if the object has been relocated, at which point using l_scope is safe.

  Reported-by: Andreas Schwab <schwab@suse.de>
  Closes: BZ #31317
  Fixes: e0590f41fe ("RISC-V: Enable static-pie.")
  Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
* htl: Implement some support for TLS_DTV_AT_TP | Sergey Bugaev | 2024-03-23 | 2 | -2/+23
  Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
  Message-ID: <20240323173301.151066-19-bugaevc@gmail.com>
* htl: Respect GL(dl_stack_flags) when allocating stacks | Sergey Bugaev | 2024-03-23 | 2 | -2/+11
  Previously, HTL would always allocate non-executable stacks. This has never been noticed, since GNU Mach on x86 ignores VM_PROT_EXECUTE and makes all pages implicitly executable. Since GNU Mach on AArch64 supports non-executable pages, HTL forgetting to pass VM_PROT_EXECUTE immediately breaks any code that (unfortunately, still) relies on executable stacks.

  Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
  Message-ID: <20240323173301.151066-7-bugaevc@gmail.com>
* hurd: Use the RETURN_ADDRESS macro | Sergey Bugaev | 2024-03-23 | 1 | -1/+1
  This gives us PAC stripping on AArch64.

  Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
  Message-ID: <20240323173301.151066-6-bugaevc@gmail.com>
* hurd: Disable Prefer_MAP_32BIT_EXEC on non-x86_64 for now | Sergey Bugaev | 2024-03-23 | 2 | -2/+2
  While we could support it on any architecture, the tunable is currently only defined on x86_64.

  Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
  Message-ID: <20240323173301.151066-5-bugaevc@gmail.com>
* hurd: Move internal functions to internal header | Sergey Bugaev | 2024-03-23 | 1 | -0/+78
  Move the _hurd_self_sigstate (), _hurd_critical_section_lock (), and _hurd_critical_section_unlock () inline implementations (that were already guarded by #if defined _LIBC) to the internal version of the header. While at it, add <tls.h> to the includes, and use __LIBC_NO_TLS () unconditionally.

  Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
  Message-ID: <20240323173301.151066-2-bugaevc@gmail.com>
* or1k: Add prctl wrapper to unwrap variadic args | Stafford Horne | 2024-03-22 | 1 | -0/+42
  On OpenRISC, variadic functions and regular functions have different calling conventions, so this wrapper is needed to translate. This wrapper is copied from x86_64/x32. I don't know the build system well enough to find a cleaner way to share the code between x86_64/x32 and or1k (maybe Implies?), so I went with the straight copy.

  This fixes the test failures:

    misc/tst-prctl
    nptl/tst-setgetname
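  In outline, such a wrapper looks roughly like the x86_64/x32 version it was copied from (a sketch using glibc-internal syscall macros, so it only builds inside the glibc tree):

    /* Sketch: pull the variadic arguments into fixed-width values and make
       the syscall with a fixed argument list.  */
    #include <stdarg.h>
    #include <sys/prctl.h>
    #include <sysdep.h>

    int
    __prctl (int option, ...)
    {
      va_list arg;
      va_start (arg, option);
      unsigned long int arg2 = va_arg (arg, unsigned long int);
      unsigned long int arg3 = va_arg (arg, unsigned long int);
      unsigned long int arg4 = va_arg (arg, unsigned long int);
      unsigned long int arg5 = va_arg (arg, unsigned long int);
      va_end (arg);
      return INLINE_SYSCALL_CALL (prctl, option, arg2, arg3, arg4, arg5);
    }
    libc_hidden_def (__prctl)
    weak_alias (__prctl, prctl)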
* or1k: Only define fpu rounding and exceptions with hard-float | Stafford Horne | 2024-03-22 | 1 | -0/+19
  This fixes the test failure:

    math/test-fenv

  If rounding mode and exception macros are defined, then the fenv tests run and always fail. This patch adds an ifdef using the __or1k_hard_float__ macro provided by gcc to avoid defining these fenv macros when they cannot be used. This is similar to what is done for csky.

  Note, I will post the or1k hard-float support soon, so I prefer to leave the hard-float bits here for now.
* or1k: Update libm test ulps | Stafford Horne | 2024-03-22 | 1 | -0/+1
  To fix test failures:

    FAIL: math/test-float-hypot
    FAIL: math/test-float32-hypot
* AArch64: Check kernel version for SVE ifuncs | Wilco Dijkstra | 2024-03-21 | 5 | -2/+53
  Old Linux kernels disable SVE after every system call. Calling the SVE-optimized memcpy afterwards will then cause a trap to re-enable SVE. As a result, applications with a high use of syscalls may run slower with the SVE memcpy. This is true for kernels starting with 4.15.0 and before 6.2.0, except for 5.14.0, which was patched. Avoid this by checking the kernel version and selecting the SVE ifunc only on modern kernels.

  Parse the kernel version reported by uname() into a 24-bit kernel.major.minor value without calling any library functions. If uname() is not supported or if the version format is not recognized, assume the kernel is modern.

  Tested-by: Florian Weimer <fweimer@redhat.com>
  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
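  To illustrate the version packing described above (a sketch; the committed code lives in the AArch64 cpu-features code, avoids all library calls, and also special-cases 5.14.0):

    /* Sketch: turn uname()'s "major.minor.patch-extra" release string into a
       24-bit value, e.g. "6.2.0-foo" -> 0x060200, so versions compare as
       plain integers.  Unparseable strings count as modern.  */
    #include <sys/utsname.h>

    static unsigned int
    kernel_version (void)
    {
      struct utsname buf;
      if (uname (&buf) != 0)
        return 0xffffff;                /* no uname: assume a modern kernel */

      unsigned int version = 0;
      int parts = 0;
      for (const char *p = buf.release; *p != '\0' && parts < 3; )
        {
          if (*p == '.')
            {
              ++p;
              continue;
            }
          if (*p < '0' || *p > '9')
            break;                      /* e.g. a "-rcN" suffix */
          unsigned int num = 0;
          while (*p >= '0' && *p <= '9')
            num = num * 10 + (unsigned int) (*p++ - '0');
          version = (version << 8) | (num & 0xff);
          ++parts;
        }
      if (parts == 0)
        return 0xffffff;                /* unrecognised format: assume modern */
      while (parts++ < 3)
        version <<= 8;                  /* pad missing components with zero */
      return version;
    }

  An ifunc selector can then prefer the SVE routines when kernel_version () >= 0x060200 or when it equals 0x050e00 (5.14.0).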
* powerpc: Placeholder and infrastructure/build support to add Power11 related changes | Amrita H S | 2024-03-19 | 15 | -5/+27
  The following three changes have been added to provide initial Power11 support:

    1. Add the directories to hold Power11 files.
    2. Add support to select Power11 libraries based on AT_PLATFORM.
    3. Let submachine=power11 be set automatically.

  Reviewed-by: Florian Weimer <fweimer@redhat.com>
  Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
* powerpc: Add HWCAP3/HWCAP4 data to TCB for Power Architecture. | Manjunath Matti | 2024-03-19 | 9 | -19/+66
  This patch adds a new feature for powerpc. In order to get faster access to the HWCAP3/HWCAP4 masks, similar to HWCAP/HWCAP2 (i.e. for implementing __builtin_cpu_supports() in GCC) without the overhead of reading them from the auxiliary vector, we now reserve space for them in the TCB.

  This is an ABI change for GLIBC 2.39.

  Suggested-by: Peter Bergner <bergner@linux.ibm.com>
  Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
* elf: Enable TLS descriptor tests on aarch64 | Adhemerval Zanella | 2024-03-19 | 2 | -4/+5
  The aarch64 port uses 'trad' for traditional TLS and 'desc' for TLS descriptors, but unlike other targets it defaults to 'desc'. The gnutls2 configure check does not set aarch64 as an ABI that uses TLS descriptors, which then disables some tests. Also rename the internal machinery from gnu2 to tls descriptors.

  Checked on aarch64-linux-gnu.

  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* arm: Update _dl_tlsdesc_dynamic to preserve caller-saved registers (BZ 31372) | Adhemerval Zanella | 2024-03-19 | 4 | -8/+237
  The ARM _dl_tlsdesc_dynamic slow path has two issues:

    * The ip/r12 register is defined by the AAPCS as a scratch register, but gcc uses it to save the stack pointer before some function calls, so it should be saved/restored as well. This fixes tst-gnu2-tls2.

    * None of the possible VFP registers are saved/restored. ARM has the additional complexity of having different VFP bank sizes (depending on the chip's VFP support).

  The tst-gnu2-tls2 test is extended to check for VFP registers, although only for hardfp builds. Unlike setcontext, _dl_tlsdesc_dynamic does not handle HWCAP_ARM_IWMMXT (I don't have a way to properly test it, and it has been almost a decade since newer hardware was released).

  With this patch there is no need to mark tst-gnu2-tls2 as XFAIL.

  Checked on arm-linux-gnueabihf.

  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* Add tst-gnu2-tls2mod1 to test-internal-extras | Andreas Schwab | 2024-03-19 | 1 | -0/+2
  That allows sysdeps/x86_64/tst-gnu2-tls2mod1.S to use internal headers.

  Fixes: 717ebfa85c ("x86-64: Allocate state buffer space for RDI, RSI and RBX")
* x86-64: Allocate state buffer space for RDI, RSI and RBX | H.J. Lu | 2024-03-18 | 3 | -11/+147
  _dl_tlsdesc_dynamic preserves RDI, RSI and RBX before realigning the stack. After realigning the stack, it saves RCX, RDX, R8, R9, R10 and R11. Define TLSDESC_CALL_REGISTER_SAVE_AREA to allocate space for RDI, RSI and RBX, to avoid clobbering the saved RDI, RSI and RBX values on the stack when xsave writes to STATE_SAVE_OFFSET(%rsp).

    +==================+<- stack frame start aligned at 8 or 16 bytes
    |                  |<- RDI saved in the red zone
    |                  |<- RSI saved in the red zone
    |                  |<- RBX saved in the red zone
    |                  |<- paddings for stack realignment of 64 bytes
    |------------------|<- xsave buffer end aligned at 64 bytes
    |                  |<-
    |                  |<-
    |                  |<-
    |------------------|<- xsave buffer start at STATE_SAVE_OFFSET(%rsp)
    |                  |<- 8-byte padding for 64-byte alignment
    |                  |<- 8-byte padding for 64-byte alignment
    |                  |<- R11
    |                  |<- R10
    |                  |<- R9
    |                  |<- R8
    |                  |<- RDX
    |                  |<- RCX
    +==================+<- RSP aligned at 64 bytes

  TLSDESC_CALL_REGISTER_SAVE_AREA, the total register save area size for all integer registers, is defined by adding 24 to STATE_SAVE_OFFSET, since RDI, RSI and RBX are saved onto the stack without adjusting the stack pointer first, using the red zone. This fixes BZ #31501.

  Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>
* riscv: Update nofpu libm test ulps | Darius Rad | 2024-03-18 | 1 | -0/+1
  Fix two test failures.

  Reviewed-by: Florian Weimer <fweimer@redhat.com>
* linux: Use rseq area unconditionally in sched_getcpu (bug 31479) | Florian Weimer | 2024-03-15 | 1 | -8/+0
  Originally, nptl/descr.h included <sys/rseq.h>, but we removed that in commit 2c6b4b272e6b4d07303af25709051c3e96288f2d ("nptl: Unconditionally use a 32-byte rseq area"). After that, it was no longer ensured that a definition of the RSEQ_SIG macro was available when sched_getcpu.c was compiled.

  This commit always checks the rseq area for CPU number information before using the other approaches. This adds an unnecessary (but well-predictable) branch on architectures which do not define RSEQ_SIG, but its cost is small compared to the system call. Most architectures that have vDSO acceleration for getcpu also have rseq support.

  Fixes: 2c6b4b272e6b4d07303af25709051c3e96288f2d
  Fixes: 1d350aa06091211863e41169729cee1bca39f72f
  Reviewed-by: Arjun Shankar <arjun@redhat.com>
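  The shape of that check, sketched against the public __rseq_offset/__rseq_size ABI rather than the glibc-internal TCB accessors the real sched_getcpu uses (assumes glibc 2.35 or newer and a compiler that provides __builtin_thread_pointer):

    /* Sketch: consult the registered rseq area first; fall back to getcpu
       only when rseq is not registered or the field is not populated.  */
    #include <errno.h>
    #include <stddef.h>
    #include <sys/rseq.h>      /* declares __rseq_offset and __rseq_size */
    #include <sys/syscall.h>
    #include <unistd.h>

    int
    my_getcpu (void)
    {
      if (__rseq_size > 0)     /* the kernel registered an rseq area for us */
        {
          struct rseq *rs
            = (struct rseq *) ((char *) __builtin_thread_pointer ()
                               + __rseq_offset);
          int cpu = (int) rs->cpu_id;
          if (cpu >= 0)
            return cpu;        /* kept up to date by the kernel */
        }
      /* Fallback; the real implementation prefers the vDSO getcpu.  */
      unsigned int cpu;
      if (syscall (SYS_getcpu, &cpu, NULL, NULL) != 0)
        return -1;
      return (int) cpu;
    }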
* aarch64: fix check for SVE support in assembler | Szabolcs Nagy | 2024-03-14 | 2 | -4/+6
  Due to GCC bug 110901, -mcpu can override the -march setting when compiling asm code, and thus a compiler targeting a specific cpu can fail the configure check even when binutils gas supports SVE.

  The workaround is that an explicit .arch directive overrides both -mcpu and -march, and since that's what the actual SVE memcpy uses, the configure check should use it too, even if the GCC issue is fixed independently.

  Reviewed-by: Florian Weimer <fweimer@redhat.com>
* Update kernel version to 6.8 in header constant tests | Joseph Myers | 2024-03-13 | 3 | -4/+4
  This patch updates the kernel version in the tests tst-mman-consts.py, tst-mount-consts.py and tst-pidfd-consts.py to 6.8. (There are no new constants covered by these tests in 6.8 that need any other header changes.)

  Tested with build-many-glibcs.py.
* Update syscall lists for Linux 6.8 | Joseph Myers | 2024-03-13 | 27 | -2/+137
  Linux 6.8 adds five new syscalls. Update syscall-names.list and regenerate the arch-syscall.h headers with build-many-glibcs.py update-syscalls.

  Tested with build-many-glibcs.py.
* powerpc: Remove power8 strcasestr optimization | Adhemerval Zanella | 2024-03-12 | 8 | -675/+1
  Similar to strstr (1e9a550ba4), power8 strcasestr does not show much improvement compared to the generic implementation. The geomean on bench-strcasestr shows:

                __strcasestr_power8   __strcasestr_ppc
    power10     1159                  1120
    power9      1640                  1469
    power8      1787                  1904

  The strcasestr uses the same 'trick' as power7 strstr to detect potential quadratic behavior, which only adds overhead for inputs that trigger quadratic behavior, and it is really a hack.

  Checked on powerpc64le-linux-gnu.

  Reviewed-by: DJ Delorie <dj@redhat.com>
* riscv: Fix alignment-ignorant memcpy implementation | Adhemerval Zanella | 2024-03-12 | 9 | -167/+215
  The memcpy optimization (commit 587a1290a1af7bee6db) has a series of mistakes:

    - The implementation is wrong: the chunk size calculation is wrong, leading to invalid memory access.
    - It adds ifunc support as the default, so --disable-multi-arch does not work as expected for riscv.
    - It mixes Linux files (the memcpy ifunc selection, which requires the vDSO/syscall mechanism) with generic support (the memcpy optimization itself).
    - There is no __libc_ifunc_impl_list, which makes testing check only the selected implementation instead of all implementations supported by the system.

  This patch also simplifies the bits required to enable ifunc: there is no need for memcopy.h, nor to add Linux-specific files.

  The __memcpy_noalignment tail handling now uses a branchless strategy similar to aarch64 (overlapping 32-bit copies for sizes 4..7 and byte copies for sizes 1..3).

  Checked on riscv64 and riscv32 by explicitly enabling the function in __libc_ifunc_impl_list on qemu-system.

  Changes from v1:
    * Implement the memcpy in assembly to correctly handle RISC-V strict alignment.

  Reviewed-by: Evan Green <evan@rivosinc.com>
  Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
* linux/sigsetops: fix type confusion (bug 31468) | Andreas Schwab | 2024-03-12 | 2 | -9/+9
  Each mask in the sigset array is an unsigned long, so fix __sigisemptyset to use that instead of int. The __sigword function returns a simple array index, so it can return int instead of unsigned long.
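  The shapes involved, as a standalone sketch (not the glibc code itself; __val is the public name of the word array inside glibc's sigset_t):

    /* Sketch: each word of sigset_t is an unsigned long, so accumulate in an
       unsigned long; a word index, by contrast, comfortably fits in an int.  */
    #include <signal.h>
    #include <stddef.h>

    static int
    sigset_is_empty (const sigset_t *set)
    {
      unsigned long int ret = 0;                 /* was int: high bits were lost */
      for (size_t i = 0; i < sizeof (set->__val) / sizeof (set->__val[0]); ++i)
        ret |= set->__val[i];
      return ret == 0;
    }

    static int                                   /* was unsigned long int */
    sig_word (int sig)
    {
      return (sig - 1) / (8 * (int) sizeof (unsigned long int));
    }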
* LoongArch: Correct {__ieee754, _}_scalb -> {__ieee754, _}_scalbf | caiyinyu | 2024-03-12 | 1 | -1/+1
* x86-64: Simplify minimum ISA check ifdef conditional with if | Sunil K Pandey | 2024-03-03 | 1 | -11/+8
  Replace the minimum ISA check ifdef conditional with if. Since MINIMUM_X86_ISA_LEVEL and AVX_X86_ISA_LEVEL are compile-time constants, the compiler will perform constant-folding optimization and produce the same results.

  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
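  For illustration, the pattern with made-up values (MINIMUM_X86_ISA_LEVEL and AVX_X86_ISA_LEVEL are the real macro names; everything else here is a stand-in):

    /* Both operands are integer constant expressions, so the compiler folds
       the 'if' and discards the dead branch; a plain 'if' therefore costs
       the same as the old #if/#else while keeping one copy of the code.  */
    #define MINIMUM_X86_ISA_LEVEL 3   /* assumption for the example */
    #define AVX_X86_ISA_LEVEL 3

    extern void impl_avx (void);
    extern void impl_sse2 (void);

    static __typeof (impl_sse2) *
    select_impl (int cpu_has_avx)
    {
      if (MINIMUM_X86_ISA_LEVEL >= AVX_X86_ISA_LEVEL || cpu_has_avx)
        return impl_avx;
      return impl_sse2;
    }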
* riscv: Add and use alignment-ignorant memcpy | Evan Green | 2024-03-01 | 5 | -0/+258
  For CPU implementations that can perform unaligned accesses with little or no performance penalty, create a memcpy implementation that does not bother aligning buffers. It will use a block of integer registers, a single integer register, and fall back to bytewise copy for the remainder.

  Signed-off-by: Evan Green <evan@rivosinc.com>
  Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>
  Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
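  The copy strategy in outline (a portable C sketch; the committed routine is RISC-V specific and, as the later fix above notes, ultimately landed in assembly):

    /* Sketch of the "don't bother aligning" strategy: copy word-sized blocks
       regardless of alignment, then single words, then bytes.  */
    #include <stddef.h>
    #include <string.h>

    void *
    memcpy_noalign (void *dst, const void *src, size_t n)
    {
      unsigned char *d = dst;
      const unsigned char *s = src;

      /* Large block: eight words at a time (a block of integer registers).  */
      while (n >= 8 * sizeof (unsigned long))
        {
          unsigned long t[8];
          memcpy (t, s, sizeof t);   /* compilers lower these to plain loads */
          memcpy (d, t, sizeof t);
          s += sizeof t; d += sizeof t; n -= sizeof t;
        }
      /* One word at a time.  */
      while (n >= sizeof (unsigned long))
        {
          unsigned long t;
          memcpy (&t, s, sizeof t);
          memcpy (d, &t, sizeof t);
          s += sizeof t; d += sizeof t; n -= sizeof t;
        }
      /* Byte tail.  */
      while (n-- > 0)
        *d++ = *s++;
      return dst;
    }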
* riscv: Add ifunc helper method to hwprobe.h | Evan Green | 2024-03-01 | 1 | -0/+29
  Add a little helper method so it's easier to fetch a single value from the hwprobe function when used within an ifunc selector.

  Signed-off-by: Evan Green <evan@rivosinc.com>
  Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
* riscv: Enable multi-arg ifunc resolvers | Evan Green | 2024-03-01 | 1 | -0/+27
  RISC-V is apparently the first architecture to pass more than one argument to ifunc resolvers. The helper macros in libc-symbols.h, __ifunc_resolver(), __ifunc(), and __ifunc_hidden(), are incompatible with this. These macros have an "arg" (non-final) parameter that represents the parameter signature of the ifunc resolver. The result is an inability to pass the required comma through in a single preprocessor argument.

  Rearrange the __ifunc_resolver() macro to be variadic, and pass the types as those variable parameters. Move the guts of __ifunc() and __ifunc_hidden() into new macros, __ifunc_args() and __ifunc_args_hidden(), that pass the variable arguments down through to __ifunc_resolver(). Then redefine __ifunc() and __ifunc_hidden(), which are used in a bunch of places, to simply shuffle the arguments down into __ifunc_args[_hidden].

  Finally, define a riscv-ifunc.h header, which provides convenience macros to those looking to write ifunc selectors that use both arguments.

  Signed-off-by: Evan Green <evan@rivosinc.com>
  Reviewed-by: Florian Weimer <fweimer@redhat.com>
  Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
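  A reduced sketch of the macro rearrangement (names shortened and simplified; the real macros in libc-symbols.h also deal with init hooks and hidden aliases):

    /* The trailing variadic parameter carries the resolver's whole parameter
       list, so it may contain commas such as "uint64_t hwcap, void *fn".  */
    #include <stdint.h>

    static void impl_a (void) { }
    static void impl_b (void) { }

    #define MY_IFUNC(type_name, name, expr, ...)                         \
      static __typeof (type_name) *name##_resolver (__VA_ARGS__)         \
      {                                                                   \
        return expr;                                                      \
      }                                                                   \
      extern __typeof (type_name) name                                    \
        __attribute__ ((ifunc (#name "_resolver")));

    /* A two-argument resolver now expands cleanly; which arguments the
       dynamic linker actually passes depends on the architecture.  */
    void target (void);
    MY_IFUNC (target, target, (hwcap & 1) ? impl_a : impl_b,
              uint64_t hwcap, void *hwprobe_func)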
* riscv: Add __riscv_hwprobe pointer to ifunc calls | Evan Green | 2024-03-01 | 2 | -4/+15
  The new __riscv_hwprobe() function is designed to be used by ifunc selector functions. This presents a challenge for applications and libraries, as ifunc selectors are invoked before all relocations have been performed, so an external call to __riscv_hwprobe() from an ifunc selector won't work. To address this, pass a pointer to the __riscv_hwprobe() function into ifunc selectors as the second argument (alongside dl_hwcap, which was already being passed).

  Include a typedef as well for convenience, so that ifunc users don't have to go through contortions to call this routine. Users will need to remember to check the second argument for NULL, to account for older glibcs that don't pass the function.

  Signed-off-by: Evan Green <evan@rivosinc.com>
  Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
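  A selector written against this convention could look like the following sketch (the key/value constants and struct riscv_hwprobe come from the Linux <asm/hwprobe.h> UAPI header; the memcpy variant names are stand-ins):

    #include <stddef.h>
    #include <stdint.h>
    #include <asm/hwprobe.h>

    typedef void *(*memcpy_t) (void *, const void *, size_t);
    typedef int (*hwprobe_fn_t) (struct riscv_hwprobe *, size_t, size_t,
                                 unsigned long *, unsigned int);

    extern void *memcpy_generic (void *, const void *, size_t);
    extern void *memcpy_noalignment (void *, const void *, size_t);

    static memcpy_t
    resolve_memcpy (uint64_t hwcap, hwprobe_fn_t hwprobe)
    {
      /* The second argument may be NULL on a glibc that predates this
         change, so it must be checked before use.  */
      if (hwprobe != NULL)
        {
          struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_CPUPERF_0 };
          if (hwprobe (&pair, 1, 0, NULL, 0) == 0
              && (pair.value & RISCV_HWPROBE_MISALIGNED_MASK)
                 == RISCV_HWPROBE_MISALIGNED_FAST)
            return memcpy_noalignment;
        }
      return memcpy_generic;   /* older glibc, or no fast unaligned access */
    }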
* riscv: Add hwprobe vdso call support | Evan Green | 2024-03-01 | 4 | -2/+16
  The new riscv_hwprobe syscall also comes with a vDSO for faster answers to your most common questions. Call in today to speak with a kernel representative near you!

  Signed-off-by: Evan Green <evan@rivosinc.com>
  Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
* linux: Introduce INTERNAL_VSYSCALL | Evan Green | 2024-03-01 | 1 | -0/+12
  Add an INTERNAL_VSYSCALL() macro that makes a vDSO call, falling back to a regular syscall, but without setting errno. Instead, the return value is plumbed straight out of the macro.

  Signed-off-by: Evan Green <evan@rivosinc.com>
  Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
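  The pattern the macro captures, as a rough sketch (the real macro is generic and issues the raw syscall directly so errno is never touched; the hwprobe example and names here are illustrative):

    #include <errno.h>
    #include <stddef.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Looked up from the vDSO at startup; NULL if the kernel lacks it.  */
    static long int (*vdso_hwprobe) (void *, size_t, size_t,
                                     unsigned long *, unsigned int);

    static long int
    vsyscall_hwprobe (void *pairs, size_t count, size_t cpusetsize,
                      unsigned long *cpus, unsigned int flags)
    {
      if (vdso_hwprobe != NULL)
        /* The vDSO entry already returns 0 or a negative errno; pass it
           straight through, never touching errno.  */
        return vdso_hwprobe (pairs, count, cpusetsize, cpus, flags);

      /* Fallback.  syscall(2) is used only to keep the sketch portable; it
         sets errno, so translate back to the negative-errno convention.  */
      long int r = syscall (__NR_riscv_hwprobe, pairs, count, cpusetsize,
                            cpus, flags);
      return r == -1 ? -errno : r;
    }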
* riscv: Add Linux hwprobe syscall support | Evan Green | 2024-03-01 | 6 | -2/+125
  Add awareness of, and a thin wrapper function around, a new Linux system call that allows callers to get architecture and microarchitecture information about the CPUs from the kernel. This can be used to do things like dynamically choose a memcpy implementation.

  Signed-off-by: Evan Green <evan@rivosinc.com>
  Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
* x86-64: Update _dl_tlsdesc_dynamic to preserve AMX registers | H.J. Lu | 2024-02-29 | 14 | -6/+299
  _dl_tlsdesc_dynamic should also preserve AMX registers, which are caller-saved. Add X86_XSTATE_TILECFG_ID and X86_XSTATE_TILEDATA_ID to the x86-64 TLSDESC_CALL_STATE_SAVE_MASK. Compute the AMX state size and save it in xsave_state_full_size, which is only used by _dl_tlsdesc_dynamic_xsave and _dl_tlsdesc_dynamic_xsavec. This fixes the AMX part of BZ #31372.

  Tested on an AMX processor. The AMX test is enabled only for compilers with the fix for

    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114098

  GCC 14 and the GCC 11/12/13 branches have the bug fix.

  Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>
* x86_64: Suppress false positive valgrind error | H.J. Lu | 2024-02-28 | 2 | -0/+23
  When strcmp-avx2.S is used as the default, elf/tst-valgrind-smoke fails with

    ==1272761== Conditional jump or move depends on uninitialised value(s)
    ==1272761==    at 0x4022C98: strcmp (strcmp-avx2.S:462)
    ==1272761==    by 0x400B05B: _dl_name_match_p (dl-misc.c:75)
    ==1272761==    by 0x40085F3: _dl_map_object (dl-load.c:1966)
    ==1272761==    by 0x401AEA4: map_doit (rtld.c:644)
    ==1272761==    by 0x4001488: _dl_catch_exception (dl-catch.c:237)
    ==1272761==    by 0x40015AE: _dl_catch_error (dl-catch.c:256)
    ==1272761==    by 0x401B38F: do_preload (rtld.c:816)
    ==1272761==    by 0x401C116: handle_preload_list (rtld.c:892)
    ==1272761==    by 0x401EDF5: dl_main (rtld.c:1842)
    ==1272761==    by 0x401A79E: _dl_sysdep_start (dl-sysdep.c:140)
    ==1272761==    by 0x401BEEE: _dl_start_final (rtld.c:494)
    ==1272761==    by 0x401BEEE: _dl_start (rtld.c:581)
    ==1272761==    by 0x401AD87: ??? (in */elf/ld.so)

  The assembly codes are:

       0x0000000004022c80 <+144>: vmovdqu 0x20(%rdi),%ymm0
       0x0000000004022c85 <+149>: vpcmpeqb 0x20(%rsi),%ymm0,%ymm1
       0x0000000004022c8a <+154>: vpcmpeqb %ymm0,%ymm15,%ymm2
       0x0000000004022c8e <+158>: vpandn %ymm1,%ymm2,%ymm1
       0x0000000004022c92 <+162>: vpmovmskb %ymm1,%ecx
       0x0000000004022c96 <+166>: inc %ecx
    => 0x0000000004022c98 <+168>: jne 0x4022c32 <strcmp+66>

  strcmp-avx2.S has 32-byte vector loads of strings which are shorter than 32 bytes:

    (gdb) p (char *) ($rdi + 0x20)
    $6 = 0x1ffeffea20 "memcheck-amd64-linux.so"
    (gdb) p (char *) ($rsi + 0x20)
    $7 = 0x4832640 "core-amd64-linux.so"
    (gdb) call (int) strlen ((char *) ($rsi + 0x20))
    $8 = 19
    (gdb) call (int) strlen ((char *) ($rdi + 0x20))
    $9 = 23
    (gdb)

  It triggers the valgrind error. The above code is safe since the loads don't cross the page boundary.

  Update tst-valgrind-smoke.sh to accept an optional suppression file and pass a suppression file to valgrind when strcmp-avx2.S is the default implementation of strcmp.

  Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>
* x86: Don't check XFD against /proc/cpuinfo | H.J. Lu | 2024-02-28 | 1 | -0/+3
  Since /proc/cpuinfo doesn't report XFD, don't check it against /proc/cpuinfo.
* x86-64: Don't use SSE resolvers for ISA level 3 or above | H.J. Lu | 2024-02-28 | 2 | -12/+20
  When glibc is built with ISA level 3 or above enabled, SSE resolvers aren't available and glibc fails to build:

    ld: .../elf/librtld.os: in function `init_cpu_features':
    .../elf/../sysdeps/x86/cpu-features.c:1200:(.text+0x1445f): undefined reference to `_dl_runtime_resolve_fxsave'
    ld: .../elf/librtld.os: relocation R_X86_64_PC32 against undefined hidden symbol `_dl_runtime_resolve_fxsave' can not be used when making a shared object
    /usr/local/bin/ld: final link failed: bad value

  For ISA level 3 or above, don't use _dl_runtime_resolve_fxsave nor _dl_tlsdesc_dynamic_fxsave.

  This fixes BZ #31429.

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* x86: Update _dl_tlsdesc_dynamic to preserve caller-saved registers | H.J. Lu | 2024-02-28 | 18 | -213/+651
  The compiler generates the following instruction sequence for GNU2 dynamic TLS access:

    leaq  tls_var@TLSDESC(%rip), %rax
    call  *tls_var@TLSCALL(%rax)

  or

    leal  tls_var@TLSDESC(%ebx), %eax
    call  *tls_var@TLSCALL(%eax)

  The CALL instruction is transparent to the compiler, which assumes that all registers, except for EFLAGS and RAX/EAX, are unchanged after CALL. When _dl_tlsdesc_dynamic is called, it calls __tls_get_addr on the slow path. __tls_get_addr is a normal function which doesn't preserve any caller-saved registers. _dl_tlsdesc_dynamic saved and restored integer caller-saved registers, but didn't preserve any other caller-saved registers.

  Add _dl_tlsdesc_dynamic IFUNC functions for FNSAVE, FXSAVE, XSAVE and XSAVEC to save and restore all caller-saved registers. This fixes BZ #31372.

  Add GLRO(dl_x86_64_runtime_resolve) with GLRO(dl_x86_tlsdesc_dynamic) to optimize elf_machine_runtime_setup.

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* sysdeps/unix/sysv/linux/x86_64/Makefile: Add the end marker | H.J. Lu | 2024-02-28 | 1 | -3/+6
  Add the end marker to tests, tests-container and modules-names.
* s390: Improve static-pie configure tests | Adhemerval Zanella | 2024-02-28 | 2 | -231/+40
  Instead of testing based on the linker name and version, check for the required support:

    * whether it does not generate dynamic TLS relocations in PIE (binutils PR ld/22263);
    * whether it accepts --no-dynamic-linker (by using -static-pie);
    * and whether it adds a DT_JMPREL pointing to .rela.iplt with static pie.

  The patch also trims the comments: for binutils, one of the tests should already cover it. The kernel ones are not clear about which version should have the backport, nor is it something that glibc can do much about. Finally, the glibc one is somewhat confusing, since it refers to commits not related to s390x.

  Checked with a build for s390x-linux-gnu.

  Reviewed-by: Stefan Liebler <stli@linux.ibm.com>
* x86: Change ENQCMD test to CHECK_FEATURE_PRESENT | H.J. Lu | 2024-02-27 | 1 | -1/+1
  Since ENQCMD is mainly used in the kernel, change the ENQCMD test to CHECK_FEATURE_PRESENT.

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* aarch64/fpu: Sync libmvec routines from 2.39 and before with AOR | Joe Ramsay | 2024-02-26 | 18 | -105/+111
  This includes a fix for big-endian in AdvSIMD log, some cosmetic changes, and numerous small optimisations mainly around inlining and using indexed variants of MLA intrinsics.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* S390: Do not clobber r7 in clone [BZ #31402] | Stefan Liebler | 2024-02-26 | 3 | -12/+63
  Starting with commit e57d8fc97b90127de4ed3e3a9cdf663667580935 "S390: Always use svc 0", clone clobbers the call-saved register r7 in the error case (function or stack is NULL).

  This patch restores the saved registers also in the error case. Furthermore, the existing test misc/tst-clone is extended to check all error cases and to verify that clone does not clobber registers in this error case.
* x86_64: Exclude SSE, AVX and FMA4 variants in libm multiarch | Sunil K Pandey | 2024-02-25 | 62 | -295/+953
  When glibc is built with ISA level 3 or higher by default, the resulting glibc binaries won't run on SSE or FMA4 processors. Exclude the SSE, AVX and FMA4 variants in libm multiarch when ISA level 3 or higher is enabled by default. When glibc is built with ISA level 2 enabled by default, only keep the SSE4.1 variant.

  Fixes BZ 31335.

  NB: The elf/tst-valgrind-smoke test fails with ISA level 4, because valgrind doesn't support AVX512 instructions:

    https://bugs.kde.org/show_bug.cgi?id=383010

  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* x86-64: Save APX registers in ld.so trampoline | H.J. Lu | 2024-02-25 | 1 | -6/+46
  Add APX registers to STATE_SAVE_MASK so that APX registers are saved in the ld.so trampoline. This fixes BZ #31371.

  Also update STATE_SAVE_OFFSET and STATE_SAVE_MASK for i386, which will be used by the i386 _dl_tlsdesc_dynamic.

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* tests: gracefully handle AppArmor userns containment | Simon Chopin | 2024-02-23 | 1 | -1/+2
  Recent AppArmor containment allows restricting unprivileged user namespaces, which is enabled by default on recent Ubuntu systems. When this happens, as is common with Linux Security Modules, the syscall fails with -EACCES. When that happens, the affected tests will now be considered unsupported rather than simply failing.

  Further information:
    * https://gitlab.com/apparmor/apparmor/-/wikis/unprivileged_userns_restriction
    * https://ubuntu.com/blog/ubuntu-23-10-restricted-unprivileged-user-namespaces
    * https://manpages.ubuntu.com/manpages/jammy/man5/apparmor.d.5.html (for the return code)

  V2:
    * Fix duplicated line in check_unshare_hints
    * Also handle similar failure in tst-pidfd_getpid

  V3:
    * Comment formatting
    * Added some more documentation on syscall return value

  Signed-off-by: Simon Chopin <simon.chopin@canonical.com>
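  The pattern, sketched with glibc's test-support macros (illustrative; the committed change lives in the shared helper these tests use):

    /* Sketch: creating a user namespace may now fail with EACCES under an
       LSM policy (e.g. AppArmor); report the test as unsupported instead of
       failing it, just as for EPERM/ENOSYS.  */
    #define _GNU_SOURCE
    #include <errno.h>
    #include <sched.h>
    #include <support/check.h>

    static void
    enter_user_namespace (void)
    {
      if (unshare (CLONE_NEWUSER | CLONE_NEWNS) == 0)
        return;
      if (errno == EACCES || errno == EPERM || errno == ENOSYS)
        FAIL_UNSUPPORTED ("unshare: user namespaces not usable: %m");
      FAIL_EXIT1 ("unshare: %m");
    }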
* powerpc: Remove power7 strstr optimization | Adhemerval Zanella | 2024-02-23 | 8 | -671/+1
  The optimization is not faster than the generic algorithm: using bench-strstr, the geometric mean running on a POWER10 machine with gcc 13.1.1 is 482.47 for the power7 strstr, while the default __strstr_ppc (which uses the generic implementation) is 340.97.

  Also, there is no need to redirect the internal str*/mem* calls to the optimized version; internal ifunc is supported and enabled for internal calls (meaning that the generic implementation will use any asm optimization if available).

  Checked on powerpc64le-linux-gnu.

  Reviewed-by: Peter Bergner <bergner@linux.ibm.com>