path: root/sysdeps/generic
Commit message | Author | Date | Files | Lines
* sparc: Fix la_symbind for bind-now (BZ 23734) | Adhemerval Zanella | 2023-07-12 | 2 | -3/+3

The sparc ABI has multiple cases on how to handle JMP_SLOT relocations (sparc_fixup_plt/sparc64_fixup_plt). For BINDNOW, _dl_audit_symbind is responsible for setting up the final relocation value; while for lazy binding, _dl_fixup/_dl_profile_fixup call the audit callback and tail call elf_machine_fixup_plt (which will call sparc64_fixup_plt).

This patch fixes it by issuing the SPARC-specific routine on bind-now and forwarding the audit value to elf_machine_fixup_plt for lazy resolution. It fixes la_symbind for the bind-now tests on sparc64 and sparcv9:

  elf/tst-audit24a
  elf/tst-audit24b
  elf/tst-audit24c
  elf/tst-audit24d

Checked on sparc64-linux-gnu and sparcv9-linux-gnu.

Tested-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
* aarch64: Add vector implementations of cos routines | Joe Ramsay | 2023-06-30 | 1 | -1/+1

Replace the loop-over-scalar placeholder routines with optimised implementations from Arm Optimized Routines (AOR). Also add some headers containing utilities for aarch64 libmvec routines, and update libm-test-ulps.

Data tables for new routines are used via a pointer with a barrier on it, in order to prevent overly aggressive constant inlining in GCC. This allows a single adrp, combined with offset loads, to be used for every constant in the table.

Special-case handlers are marked NOINLINE in order to confine the save/restore overhead of switching from vector to normal calling standard. This way we only incur the extra memory access in the exceptional cases. NOINLINE definitions have been moved to math_private.h in order to reduce duplication.

AOR exposes a config option, WANT_SIMD_EXCEPT, to enable selective masking (and later fixing up) of invalid lanes, in order to trigger fp exceptions correctly (AdvSIMD only). This is tested and maintained in AOR, however it is configured off at source level here for performance reasons. We keep the WANT_SIMD_EXCEPT blocks in routine sources to greatly simplify the upstreaming process from AOR to glibc.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
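The "pointer with a barrier" idiom mentioned above can be sketched in plain C; the table contents and names below are made up for illustration (the real AOR data tables differ), but the empty asm is the actual trick: it hides the pointer's provenance from the optimiser, so GCC keeps one base address and uses offset loads instead of inlining each constant.

    #include <stddef.h>

    struct cos_data
    {
      double poly[4];
      double half_pi;
    };

    static const struct cos_data table =
      { { 1.0, -0.5, 1.0 / 24.0, -1.0 / 720.0 }, 0x1.921fb54442d18p0 };

    /* Return P unchanged, but prevent the compiler from seeing through it.  */
    static inline const struct cos_data *
    ptr_barrier (const struct cos_data *p)
    {
      __asm__ ("" : "+r" (p));
      return p;
    }

    double
    poly_eval (double x)
    {
      const struct cos_data *d = ptr_barrier (&table);
      double x2 = x * x;
      return d->poly[0] + x2 * (d->poly[1] + x2 * d->poly[2]);
    }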
* Fix misspellings in sysdeps/ -- BZ 25337 | Paul Pluzhnikov | 2023-05-30 | 3 | -4/+4
* Add voice-admit DSCP code point from RFC-5865 | Ronan Pigott | 2023-05-22 | 1 | -0/+5
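As a usage sketch of what this enables (hedged: the exact macro name added to <netinet/ip.h> is not visible in this log, so the raw DSCP value is spelled out), an application can mark admitted voice traffic like this:

    #include <netinet/in.h>
    #include <netinet/ip.h>
    #include <sys/socket.h>

    /* RFC 5865 VOICE-ADMIT is DSCP 44; the TOS octet carries the DSCP in
       its upper six bits, hence the shift (0xb0).  */
    static int
    mark_voice_admit (int fd)
    {
      int tos = 44 << 2;
      return setsockopt (fd, IPPROTO_IP, IP_TOS, &tos, sizeof tos);
    }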
* elf: Stop including tls.h in ldsodefs.h | Sergey Bugaev | 2023-04-10 | 1 | -1/+0

Nothing in there needs tls.h.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230319151017.531737-24-bugaevc@gmail.com>
* Remove --enable-tunables configure option | Adhemerval Zanella Netto | 2023-03-29 | 2 | -24/+4

And make tunables always supported. The configure option was added in glibc 2.25 and some features require it (such as the hwcap mask, huge pages support, and lock elision tuning). It also simplifies the build permutations.

Changes from v1:
  * Remove glibc.rtld.dynamic_sort changes, it is orthogonal and needs more discussion.
  * Clean up more code.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* Move libc_freeres_ptrs and libc_subfreeres to hidden/weak functions | Adhemerval Zanella Netto | 2023-03-27 | 3 | -0/+65

They are both used by __libc_freeres to free all library malloc-allocated resources, to help tooling like mtrace or valgrind with memory leak tracking.

The current scheme uses assembly markers and linker script entries to consolidate the free routine function pointers in the RELRO segment and the to-be-freed buffers in BSS. This patch changes it to use specific free functions for libc_freeres_ptrs buffers and to call the function pointer array directly with call_function_static_weak. It allows the removal of both the internal macros and the linker script sections.

Checked on x86_64-linux-gnu, i686-linux-gnu, and aarch64-linux-gnu.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
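The weak-function scheme it moves to can be sketched as below; the names are invented for illustration, but the pattern (a weak symbol that is called only if some other translation unit actually defined it) is the one call_function_static_weak relies on.

    #include <stddef.h>

    /* Defined by a subsystem if (and only if) it was linked in.  */
    extern void subsys_freemem (void) __attribute__ ((weak));

    /* Called from a __libc_freeres-style hook at process shutdown.  */
    static void
    free_all_resources (void)
    {
      if (subsys_freemem != NULL)   /* weak: may be an undefined symbol */
        subsys_freemem ();
    }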
* LoongArch: Add support for ldconfig. | caiyinyu | 2023-03-13 | 1 | -1/+3
* elf: Restore ldconfig libc6 implicit soname logic [BZ #30125] | Joan Bruguera | 2023-02-20 | 1 | -2/+0

While cleaning up old libc version support, the deprecated libc4 code was accidentally kept in `implicit_soname`, instead of the libc6 code. This causes additional symlinks to be created by `ldconfig` for libraries without a soname, e.g. a library `libsomething.123.456.789` without a soname will create a `libsomething.123` -> `libsomething.123.456.789` symlink.

As the libc6 version of the `implicit_soname` code is a trivial `xstrdup`, just inline it and remove `implicit_soname` altogether.

Some further simplification looks possible (e.g. the call to `create_links` looks like a no-op if `soname == NULL`, other than the verbose printfs), but logic is kept as-is for now.

Fixes: BZ #30125
Fixes: 8ee878592c4a ("Assume only FLAG_ELF_LIBC6 suport")

Signed-off-by: Joan Bruguera <joanbrugueram@gmail.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* string: Remove string_private.h | Adhemerval Zanella | 2023-02-17 | 1 | -21/+0

Now that _STRING_ARCH_unaligned is not used anymore.

Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
* Fix typos in comments | Samuel Thibault | 2023-02-12 | 1 | -1/+1
* Add string vectorized find and detection functions | Adhemerval Zanella | 2023-02-06 | 6 | -0/+402

This patch adds generic string find and detection functions meant to be used in generic vectorized string implementations. The idea is to decompose the basic string operations so each architecture can reimplement them if it provides any specialized hardware instruction.

The 'string-misc.h' provides miscellaneous functions:
  - extractbyte: extracts the byte at a specific index.
  - repeat_bytes: set up a word by replicating the argument on each byte.

The 'string-fza.h' provides zero byte detection functions:
  - find_zero_low, find_zero_all, find_eq_low, find_eq_all, find_zero_eq_low, find_zero_eq_all, and find_zero_ne_all

The 'string-fzb.h' provides boolean zero byte detection functions:
  - has_zero: determine if any byte within a word is zero.
  - has_eq: determine byte equality between two words.
  - has_zero_eq: determine if any byte within a word is zero along with byte equality between two words.

The 'string-fzi.h' provides positions for string-fza.h results:
  - index_first: return index of first zero byte within a word.
  - index_last: return index of first byte different between two words.

The 'string-fzc.h' provides a combined version of fza and fzi:
  - index_first_zero_eq: return index of first zero byte within a word or first byte different between two words.
  - index_first_zero_ne: return index of first zero byte within a word or first byte equal between two words.
  - index_last_zero: return index of last zero byte within a word.
  - index_last_eq: return index of last byte different between two words.

The 'string-shift.h' provides a way to mask off parts of a word based on some alignment (to handle unaligned arguments):
  - shift_find, shift_find_last.

Co-authored-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
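A rough, generic-C sketch of the zero-byte detection these headers provide is shown below (the real string-fza.h/string-fzb.h are parameterised per architecture; this is just the portable bit trick a generic fallback can use):

    #include <stdbool.h>
    #include <stdint.h>

    typedef uintptr_t op_t;

    /* The equivalents of repeat_bytes (0x01) and repeat_bytes (0x80).  */
    #define ONES  ((op_t) -1 / 0xff)     /* 0x0101...01 */
    #define HIGHS (ONES * 0x80)          /* 0x8080...80 */

    /* True if any byte within X is zero.  */
    static inline bool
    has_zero (op_t x)
    {
      return ((x - ONES) & ~x & HIGHS) != 0;
    }

    /* True if any byte of X equals the corresponding byte of Y.  */
    static inline bool
    has_eq (op_t x, op_t y)
    {
      return has_zero (x ^ y);
    }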
* Parameterize OP_T_THRES from memcopy.h | Richard Henderson | 2023-02-06 | 2 | -3/+26

It moves OP_T_THRES out of memcopy.h to its own header and adjusts each architecture that redefines it.

Checked with a build and check with run-built-tests=no for all major Linux ABIs.

Co-authored-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* Parameterize op_t from memcopy.h | Adhemerval Zanella | 2023-02-06 | 2 | -4/+26

It moves the op_t definition out to a specific header, adds the 'may_alias' attribute, and cleans up its duplicated definitions.

Checked with a build and check with run-built-tests=no for all major Linux ABIs.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
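Assuming the generic header boils down to something like the line below (a sketch, not a verbatim copy), the may_alias attribute is what lets string code load through op_t * from arbitrarily typed buffers without violating strict aliasing:

    /* One machine word, usable as an aliasing-safe load/store type.  */
    typedef unsigned long int __attribute__ ((__may_alias__)) op_t;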
* Update copyright dates with scripts/update-copyrights | Joseph Myers | 2023-01-06 | 170 | -170/+170
* Use GCC builtins for logb functions if desired. | Xiaolin Tang | 2022-11-29 | 2 | -0/+5

This patch uses the corresponding GCC builtin for logbf, logb, logbl and logbf128 if the USE_FUNCTION_BUILTIN macros are defined to one in math-use-builtins-function.h.

Co-Authored-By: Xi Ruoyao <xry111@xry111.site>
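The dispatch pattern these builtin patches follow looks roughly like this (hedged sketch: the macro name below is illustrative; the real per-function macros live in math-use-builtins-function.h and are set to 0 or 1 by each architecture):

    #include <math.h>

    #ifndef USE_LOGB_BUILTIN
    # define USE_LOGB_BUILTIN 0   /* architectures opt in by defining it to 1 */
    #endif

    double
    my_logb (double x)
    {
    #if USE_LOGB_BUILTIN
      return __builtin_logb (x);  /* may be a single instruction on some targets */
    #else
      return logb (x);            /* stand-in for the generic C code path */
    #endif
    }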
* Use GCC builtins for llrint functions if desired. | Xiaolin Tang | 2022-11-29 | 2 | -0/+5

This patch uses the corresponding GCC builtin for llrintf, llrint, llrintl and llrintf128 if the USE_FUNCTION_BUILTIN macros are defined to one in math-use-builtins-function.h.

Co-Authored-By: Xi Ruoyao <xry111@xry111.site>
* Use GCC builtins for lrint functions if desired. | Xiaolin Tang | 2022-11-29 | 2 | -0/+5

This patch uses the corresponding GCC builtin for lrintf, lrint, lrintl and lrintf128 if the USE_FUNCTION_BUILTIN macros are defined to one in math-use-builtins-function.h.

Co-Authored-By: Xi Ruoyao <xry111@xry111.site>
* elf: Introduce <dl-call_tls_init_tp.h> and call_tls_init_tp (bug 29249) | Florian Weimer | 2022-11-03 | 2 | -1/+35

This makes it more likely that the compiler can compute the strlen argument in _startup_fatal at compile time, which is required to avoid a dependency on strlen this early during process startup.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
* elf: Rework exception handling in the dynamic loader [BZ #25486] | Florian Weimer | 2022-11-03 | 2 | -13/+4

The old exception handling implementation used function interposition to replace the dynamic loader implementation (no TLS support) with the libc implementation (TLS support). This results in problems if the link order between the dynamic loader and libc is reversed (bug 25486).

The new implementation moves the entire implementation of the exception handling functions back into the dynamic loader, using THREAD_GETMEM and THREAD_SETMEM for thread-local data support. These depend on Hurd support for these macros, added in commit b65a82e4e757c1e6cb7073916 ("hurd: Add THREAD_GET/SETMEM/_NC").

One small obstacle is that the exception handling facilities are used before the TCB has been set up, so a check is needed if the TCB is available. If not, a regular global variable is used to store the exception handling information.

Also rename dl-error.c to dl-catch.c, to avoid confusion with the dlerror function.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* stdlib/strfrom: Add copysign to fix NAN issue on riscv (BZ #29501) | Letu Ren | 2022-10-28 | 1 | -0/+31

According to the specification of ISO/IEC TS 18661-1:2014,

  The strfromd, strfromf, and strfroml functions are equivalent to
  snprintf(s, n, format, fp) (7.21.6.5), except the format string contains
  only the character %, an optional precision that does not contain an
  asterisk *, and one of the conversion specifiers a, A, e, E, f, F, g, or G,
  which applies to the type (double, float, or long double) indicated by the
  function suffix (rather than by a length modifier). Use of these functions
  with any other format string results in undefined behavior.

strfromf will convert the argument with type float to double first.

According to the latest version of IEEE754, published in 2019,

  Conversion of a quiet NaN from a narrower format to a wider format in the
  same radix, and then back to the same narrower format, should not change
  the quiet NaN payload in any way except to make it canonical.

  When either an input or result is a NaN, this standard does not interpret
  the sign of a NaN. However, operations on bit strings—copy, negate, abs,
  copySign—specify the sign bit of a NaN result, sometimes based upon the
  sign bit of a NaN operand. The logical predicates totalOrder and
  isSignMinus are also affected by the sign bit of a NaN operand. For all
  other operations, this standard does not specify the sign bit of a NaN
  result, even when there is only one input NaN, or when the NaN is produced
  from an invalid operation.

Converting NAN or -NAN with type float to double doesn't need to keep the sign bit. As a result, this test case isn't mandatory.

The problem is that, according to the RISC-V ISA manual in chapter 11.3 of riscv-isa-20191213,

  Except when otherwise stated, if the result of a floating-point operation
  is NaN, it is the canonical NaN. The canonical NaN has a positive sign and
  all significand bits clear except the MSB, a.k.a. the quiet bit. For
  single-precision floating-point, this corresponds to the pattern
  0x7fc00000.

which means that converting -NAN from float to double won't keep the sign bit. Since glibc ought to be consistent here between types and architectures, this patch adds copysign to fix this problem if the string is NAN. This patch adds two different functions under the sysdeps directory to work around the issue.

This patch has been tested on x86_64 and riscv64.

Resolves: BZ #29501

v2: Change from macros to different inline functions.
v3: Add unlikely check to isnan.
v4: Fix wrong commit message header.
v5: Fix style: add space before parentheses.
v6: Add copyright.

Signed-off-by: Letu Ren <fantasquex@gmail.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
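A rough sketch of the workaround described above (not the exact glibc helper): re-derive the sign from the original float with signbit, which inspects the bit pattern directly and is therefore immune to the canonicalising conversion.

    #include <math.h>

    /* Widen F to double for the snprintf-based conversion while keeping
       the NaN sign, even on FPUs that canonicalise NaNs (e.g. RISC-V).  */
    static inline double
    widen_keeping_nan_sign (float f)
    {
      double d = (double) f;
      if (__builtin_expect (isnan (f), 0))
        d = copysign (d, signbit (f) ? -1.0 : 1.0);
      return d;
    }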
* elf: Introduce to _dl_call_fini | Florian Weimer | 2022-10-27 | 1 | -0/+8

This consolidates the destructor invocations from _dl_fini and dlclose. Remove the micro-optimization that avoids calling _dl_call_fini if there are no destructors (as dlclose is quite expensive anyway). The debug log message is now printed unconditionally.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* ld.so: Export tls_init_tp_called as __rtld_tls_init_tp_called | Florian Weimer | 2022-10-27 | 1 | -0/+3

This allows the rest of the dynamic loader to check whether the TCB has been set up (and THREAD_GETMEM and THREAD_SETMEM will work).

Reviewed-by: Siddhesh Poyarekar <siddhesh@gotplt.org>
* Use PTR_MANGLE and PTR_DEMANGLE unconditionally in C sources | Florian Weimer | 2022-10-18 | 1 | -5/+1

In the future, this will result in a compilation failure if the macros are unexpectedly undefined (due to header inclusion ordering or header inclusion missing altogether).

Assembler sources are more difficult to convert. In many cases, they are hand-optimized for the mangling and no-mangling variants, which is why they are not converted.

sysdeps/s390/s390-32/__longjmp.c and sysdeps/s390/s390-64/__longjmp.c are special: These are C sources, but most of the implementation is in assembler, so the PTR_DEMANGLE macro has to be undefined in some cases, to match the assembler style.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Introduce <pointer_guard.h>, extracted from <sysdep.h> | Florian Weimer | 2022-10-18 | 2 | -1/+30

This allows us to define a generic no-op version of PTR_MANGLE and PTR_DEMANGLE. In the future, we can use PTR_MANGLE and PTR_DEMANGLE unconditionally in C sources, avoiding an unintended loss of hardening due to missing include files or unlucky header inclusion ordering.

In i386 and x86_64, we can avoid a <tls.h> dependency in the C code by using the computed constant from <tcb-offsets.h>. <sysdep.h> no longer includes these definitions, so there is no cyclic dependency anymore when computing the <tcb-offsets.h> constants.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
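A generic no-op fallback of the kind described can be sketched like this (assumed shape, not a verbatim copy of the header); ports with pointer mangling override the macros with their XOR/rotate sequences against the thread pointer guard value.

    #ifndef POINTER_GUARD_H
    #define POINTER_GUARD_H

    /* Default: no pointer hardening on this configuration.  The macros
       still exist, so C code may use them unconditionally.  */
    #ifndef PTR_MANGLE
    # define PTR_MANGLE(var)   ((void) (var))
    # define PTR_DEMANGLE(var) ((void) (var))
    #endif

    #endif /* pointer_guard.h */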
* elf: Remove -fno-tree-loop-distribute-patterns usage on dl-support | Adhemerval Zanella | 2022-10-10 | 1 | -0/+38

Besides the option being gcc specific, this approach is still fragile and not future proof since we do not know if this will be the only optimization option gcc will add that transforms loops to memset (or any libcall).

This patch adds a new header, dl-symbol-redir-ifunc.h, that can be used to redirect the compiler-generated libcalls to the generic memset implementation if required.

Checked on x86_64-linux-gnu and aarch64-linux-gnu.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
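The redirection technique the new header enables amounts to an assembler-level alias; a hedged sketch (the alias target below is invented for illustration, not an actual glibc symbol name) looks like:

    /* Make the compiler-generated memset libcall resolve to a plain,
       non-ifunc implementation that is safe this early in startup.  */
    __asm__ ("memset = __memset_early_safe");

Any loop the compiler turns into a memset call in that translation unit then binds to the redirected symbol instead of the ifunc-selected one.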
* elf: Remove _dl_string_hwcap | Javier Pello | 2022-10-06 | 1 | -2/+0

Removal of legacy hwcaps support from the dynamic loader left no users of _dl_string_hwcap.

Signed-off-by: Javier Pello <devel@otheo.eu>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* elf: Remove hwcap parameter from add_to_cache signature | Javier Pello | 2022-10-06 | 1 | -1/+1

Last commit made it so that the value passed for that parameter was always 0 at its only call site.

Signed-off-by: Javier Pello <devel@otheo.eu>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
* m68k: Enforce 4-byte alignment on internal locks (BZ #29537) | Adhemerval Zanella | 2022-09-20 | 1 | -0/+25

A new internal definition, __LIBC_LOCK_ALIGNMENT, is used to force the 4-byte alignment only for m68k; other architectures keep the natural alignment of the type used internally (and hppa does not require 16-byte alignment for kernel-assisted CAS).

Reviewed-by: Florian Weimer <fweimer@redhat.com>
* elf: Rename _dl_sort_maps parameter from skip to force_first | Florian Weimer | 2022-09-06 | 1 | -2/+4

The new implementation will not be able to skip an arbitrary number of objects.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Revert "Detect ld.so and libc.so version inconsistency during startup"Florian Weimer2022-08-251-4/+0
| | | | | | | | | | | | | | | | This reverts commit 6f85dbf102ad7982409ba0fe96886caeb6389fef. Once this change hits the release branches, it will require relinking of all statically linked applications before static dlopen works again, for the majority of updates on release branches: The NEWS file is regularly updated with bug references, so the __libc_early_init suffix changes, and static dlopen cannot find the function anymore. While this ABI check is still technically correct (we do require rebuilding & relinking after glibc updates to keep static dlopen working), it is too drastic for stable release branches. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Detect ld.so and libc.so version inconsistency during startup | Florian Weimer | 2022-08-24 | 1 | -0/+4

The files NEWS, include/link.h, and sysdeps/generic/ldsodefs.h contribute to the version fingerprint used for detection. The fingerprint can be further refined using the --with-extra-version-id configure argument.

_dl_call_libc_early_init is replaced with _dl_lookup_libc_early_init. The new function is used to store a pointer to libc.so's __libc_early_init function in the libc_map_early_init member of the ld.so namespace structure. This function pointer can then be called directly, so the separate invocation function is no longer needed.

The versioned symbol lookup needs the symbol versioning data structures, so the initialization of libc_map and libc_map_early_init is now done from _dl_check_map_versions, after this information becomes available. (_dl_map_object_from_fd does not set this up in time, so the initialization code had to be moved from there.) This means that the separate initialization code can be removed from dl_main because _dl_check_map_versions covers all maps, including the initial executable loaded by the kernel.

The lookup still happens before relocation and the invocation of IFUNC resolvers, so IFUNC resolvers are protected from ABI mismatch. The __libc_early_init function pointer is not protected because so little code runs between the pointer write and the invocation (only dynamic linker code and IFUNC resolvers).

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* arc4random: simplify design for better safety | Jason A. Donenfeld | 2022-07-27 | 4 | -35/+3

Rather than buffering 16 MiB of entropy in userspace (by way of chacha20), simply call getrandom() every time.

This approach is doubtlessly slower, for now, but trying to prematurely optimize arc4random appears to be leading toward all sorts of nasty properties and gotchas. Instead, this patch takes a much more conservative approach. The interface is added as a basic loop wrapper around getrandom(), and then later, the kernel and libc together can work on optimizing that.

This prevents numerous issues in which userspace is unaware of when it really must throw away its buffer, since we avoid buffering altogether. Future improvements may include userspace learning more from the kernel about when to do that, which might make these sorts of chacha20-based optimizations more possible. The current heuristic of 16 MiB is meaningless garbage that doesn't correspond to anything the kernel might know about. So for now, let's just do something conservative that we know is correct and won't lead to cryptographic issues for users of this function.

This patch might be considered along the lines of, "optimization is the root of all evil," in that the much more complex implementation it replaces moves too fast without considering security implications, whereas the incremental approach done here is a much safer way of going about things. Once this lands, we can take our time in optimizing this properly using new interplay between the kernel and userspace.

getrandom(0) is used, since that's the one that ensures the bytes returned are cryptographically secure. But on systems without it, we fall back to using /dev/urandom. This is unfortunate because it means opening a file descriptor, but there's not much of a choice. Secondly, as part of the fallback, in order to get more or less the same properties of getrandom(0), we poll on /dev/random, and if the poll succeeds at least once, then we assume the RNG is initialized. This is a rough approximation, as the ancient "non-blocking pool" initialized after the "blocking pool", not before, and it may not port back to all ancient kernels, though it does to all kernels supported by glibc (≥3.2), so generally it's the best approximation we can do.

The motivation for including arc4random, in the first place, is to have source-level compatibility with existing code. That means this patch doesn't attempt to litigate the interface itself. It does, however, choose a conservative approach for implementing it.

Cc: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: Cristian Rodríguez <crrodriguez@opensuse.org>
Cc: Paul Eggert <eggert@cs.ucla.edu>
Cc: Mark Harris <mark.hsj@gmail.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
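The "basic loop wrapper around getrandom()" can be sketched as follows (simplified: the real fallback path through /dev/urandom and the poll on /dev/random is omitted here):

    #include <errno.h>
    #include <stdlib.h>
    #include <sys/random.h>
    #include <sys/types.h>

    static void
    random_bytes (void *buf, size_t len)
    {
      unsigned char *p = buf;
      while (len > 0)
        {
          /* Flags 0: block until the RNG is seeded, return crypto-safe bytes.  */
          ssize_t ret = getrandom (p, len, 0);
          if (ret < 0)
            {
              if (errno == EINTR)
                continue;           /* interrupted by a signal: retry */
              abort ();             /* sketch only: no /dev/urandom fallback */
            }
          p += ret;
          len -= (size_t) ret;
        }
    }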
* aarch64: Add optimized chacha20 | Adhemerval Zanella Netto | 2022-07-22 | 1 | -0/+24

It adds a vectorized ChaCha20 implementation based on libgcrypt cipher/chacha20-aarch64.S. It is used as the default and only little-endian is supported (BE uses the generic code). As for the generic implementation, the last step that XORs with the input is omitted. The final state register clearing is also omitted.

On a virtualized Linux on Apple M1 it shows the following improvements (using formatted bench-arc4random data):

  GENERIC                                    MB/s
  -----------------------------------------------
  arc4random [single-thread]               380.89
  arc4random_buf(16) [single-thread]       500.73
  arc4random_buf(32) [single-thread]       552.61
  arc4random_buf(48) [single-thread]       566.82
  arc4random_buf(64) [single-thread]       574.01
  arc4random_buf(80) [single-thread]       581.02
  arc4random_buf(96) [single-thread]       591.19
  arc4random_buf(112) [single-thread]      592.29
  arc4random_buf(128) [single-thread]      596.43
  -----------------------------------------------

  OPTIMIZED                                  MB/s
  -----------------------------------------------
  arc4random [single-thread]               569.60
  arc4random_buf(16) [single-thread]       825.78
  arc4random_buf(32) [single-thread]       987.03
  arc4random_buf(48) [single-thread]      1042.39
  arc4random_buf(64) [single-thread]      1075.50
  arc4random_buf(80) [single-thread]      1094.68
  arc4random_buf(96) [single-thread]      1130.16
  arc4random_buf(112) [single-thread]     1129.58
  arc4random_buf(128) [single-thread]     1137.91
  -----------------------------------------------

Checked on aarch64-linux-gnu.
* stdlib: Add arc4random, arc4random_buf, and arc4random_uniform (BZ #4417) | Adhemerval Zanella Netto | 2022-07-22 | 4 | -6/+22

The implementation is based on scalar ChaCha20 with a per-thread cache. It uses getrandom or /dev/urandom as a fallback to get the initial entropy, and reseeds the internal state on every 16MB of consumed buffer.

To improve performance and lower memory consumption, the per-thread cache is allocated lazily on the first arc4random function call, and if the memory allocation fails getentropy or /dev/urandom is used as a fallback. The cache is also cleared on thread exit iff it was initialized (so if arc4random is not called it is not touched).

Although it is lock-free, arc4random is still not async-signal-safe (the per-thread state is not updated atomically).

The ChaCha20 implementation is based on RFC8439 [1], omitting the final XOR of the keystream with the plaintext because the plaintext is a stream of zeros. This strategy is similar to what OpenBSD arc4random does.

The arc4random_uniform is based on previous work by Florian Weimer, where the algorithm is based on Jérémie Lumbroso's paper Optimal Discrete Uniform Generation from Coin Flips, and Applications (2013) [2], who credits Donald E. Knuth and Andrew C. Yao, The complexity of nonuniform random number generation (1976), for solving the general case. The main advantage of this method is that the unit of randomness is not the uniform random variable (uint32_t), but a random bit. It optimizes the internal buffer sampling by initially consuming a 32-bit random variable and then sampling byte per byte. Depending on the upper bound requested, it might lead to better CPU utilization.

Checked on x86_64-linux-gnu, aarch64-linux, and powerpc64le-linux-gnu.

Co-authored-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Yann Droneaud <ydroneaud@opteya.com>

[1] https://datatracker.ietf.org/doc/html/rfc8439
[2] https://arxiv.org/pdf/1304.1916.pdf
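A small usage example of the new interfaces (glibc declares them in <stdlib.h>): arc4random_uniform (n) returns a value uniformly distributed in [0, n), so a fair die roll is simply:

    #include <stdint.h>
    #include <stdlib.h>

    static uint32_t
    roll_die (void)
    {
      return arc4random_uniform (6) + 1;   /* 1..6, without modulo bias */
    }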
* Fix hurd namespace issues for internal signal functions | Adhemerval Zanella | 2022-07-04 | 1 | -3/+3

It was introduced by "Refactor internal-signals.h (a1bdd81664aa681364d)". Use the internal symbols instead.

Checked with a build for i686-gnu.
* Refactor internal-signals.h | Adhemerval Zanella | 2022-06-30 | 1 | -25/+6

The main drive is to optimize the internal usage and required size when sigset_t is embedded in other data structures. On Linux, the currently supported signal set requires up to 8 bytes (16 on mips), which is lower than the user-defined sigset_t (128 bytes).

A new internal type, internal_sigset_t, is added, along with the functions to operate on it, similar to the ones for sigset_t. The internal-signals.h is also refactored to remove unused functions.

Besides smaller stack usage in some functions (posix_spawn, abort), it lowers the struct pthread size by about 120 bytes (112 on mips).

Checked on x86_64-linux-gnu.

Reviewed-by: Arjun Shankar <arjun@redhat.com>
* elf: Remove ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA | Fangrui Song | 2022-06-15 | 1 | -11/+1

If an executable has copy relocations for extern protected data, that can only work if the library containing the definition is built with assumptions (a) the compiler emits GOT-generating relocations (b) the linker produces R_*_GLOB_DAT instead of R_*_RELATIVE. Otherwise the library uses its own definition directly and the executable accesses a stale copy. Note: the GOT relocations defeat the purpose of protected visibility as an optimization, but allow rtld to make the executable and library use the same copy when copy relocations are present, but it turns out this never worked perfectly.

ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA has strange semantics when both a.so and b.so define protected var and the executable copy relocates var: b.so accesses its own copy even with GLOB_DAT. The behavior change is from commit 62da1e3b00b51383ffa7efc89d8addda0502e107 (x86) and then copied to nios2 (ae5eae7cfc9c4a8297ff82ec6b794faca1976ecc) and arc (0e7d930c4c11de896fe807f67fa1eb756c9c1e05). Without ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA, b.so accesses the copy relocated data like a.so.

There is now a warning for copy relocation on protected symbol since commit 7374c02b683b7110b853a32496a619410364d70b. It's extremely unlikely anyone relies on the ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA behavior, so let's remove it: this removes a check in the symbol lookup code.
* elf: Refine direct extern access diagnostics to protected symbol | Fangrui Song | 2022-06-14 | 1 | -23/+27

Refine commit 349b0441dab375099b1d7f6909c1742286a67da9:

1. Copy relocations for extern protected data do not work properly, regardless whether GNU_PROPERTY_1_NEEDED_INDIRECT_EXTERN_ACCESS is used. It makes sense to produce a warning unconditionally.

2. Non-zero value of an undefined function symbol may break pointer equality, but may be benign in many cases (many programs don't take the address in the shared object then compare it with the address in the executable). Reword the diagnostic to be clearer.

3. Remove the unneeded condition !(undef_map->l_1_needed & GNU_PROPERTY_1_NEEDED_INDIRECT_EXTERN_ACCESS). If the executable does not have GNU_PROPERTY_1_NEEDED_INDIRECT_EXTERN_ACCESS (can only occur in error cases), the diagnostic should be emitted as well.

When the defining shared object has GNU_PROPERTY_1_NEEDED_INDIRECT_EXTERN_ACCESS, report an error to apply the intended enforcement.
* elf: Remove _dl_skip_args | Adhemerval Zanella | 2022-05-30 | 1 | -4/+0

Now that no architecture uses it anymore.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* math: Add math-use-builtins-fabs (BZ#29027) | Adhemerval Zanella | 2022-05-23 | 2 | -0/+4

Both float, double, and _Float128 are assumed to be supported (float and double already only use builtins). Only long double is parametrized due to GCC bug 29253 which prevents its usage on powerpc.

It allows removing the i686, ia64, x86_64, powerpc, and sparc arch-specific implementations.

On ia64 it also fixes the sNAN handling:

  math/test-float64x-fabs
  math/test-ldouble-fabs

Checked on x86_64-linux-gnu, i686-linux-gnu, powerpc-linux-gnu, powerpc64-linux-gnu, sparc64-linux-gnu, and ia64-linux-gnu.
* elf: Optimize _dl_new_hash in dl-new-hash.h | Noah Goldstein | 2022-05-23 | 1 | -0/+109

Unroll slightly and enforce good instruction scheduling. This improves performance on out-of-order machines. The unrolling allows for pipelined multiplies.

As well, as an optional sysdep, reorder the operations and prevent reassociation for better scheduling and higher ILP. This commit only adds the barrier for x86, although it should be either no change or a win for any architecture.

Unrolling further started to induce slowdowns for sizes [0, 4] but can help the loop, so if larger sizes are the target further unrolling can be beneficial.

Results for _dl_new_hash
Benchmarked on Tigerlake: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz
Time as Geometric Mean of N=30 runs
Geometric mean of all benchmarks New / Old: 0.674

  type,   length, New Time, Old Time, New Time / Old Time
  fixed,       0,    2.865,    2.72,  1.053
  fixed,       1,    3.567,    2.489, 1.433
  fixed,       2,    2.577,    3.649, 0.706
  fixed,       3,    3.644,    5.983, 0.609
  fixed,       4,    4.211,    6.833, 0.616
  fixed,       5,    4.741,    9.372, 0.506
  fixed,       6,    5.415,    9.561, 0.566
  fixed,       7,    6.649,   10.789, 0.616
  fixed,       8,    8.081,   11.808, 0.684
  fixed,       9,    8.427,   12.935, 0.651
  fixed,      10,    8.673,   14.134, 0.614
  fixed,      11,   10.69,    15.408, 0.694
  fixed,      12,   10.789,   16.982, 0.635
  fixed,      13,   12.169,   18.411, 0.661
  fixed,      14,   12.659,   19.914, 0.636
  fixed,      15,   13.526,   21.541, 0.628
  fixed,      16,   14.211,   23.088, 0.616
  fixed,      32,   29.412,   52.722, 0.558
  fixed,      64,   65.41,   142.351, 0.459
  fixed,     128,  138.505,  295.625, 0.469
  fixed,     256,  291.707,  601.983, 0.485
  random,      2,   12.698,   12.849, 0.988
  random,      4,   16.065,   15.857, 1.013
  random,      8,   19.564,   21.105, 0.927
  random,     16,   23.919,   26.823, 0.892
  random,     32,   31.987,   39.591, 0.808
  random,     64,   49.282,   71.487, 0.689
  random,    128,   82.23,   145.364, 0.566
  random,    256,  152.209,  298.434, 0.51

Co-authored-by: Alexander Monakov <amonakov@ispras.ru>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
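For reference, the function being optimised is the classic GNU symbol hash (h = h * 33 + c, seeded with 5381); a plain, un-unrolled version is sketched below. The dl-new-hash.h rework keeps this result bit-for-bit but unrolls it slightly so the multiplies can pipeline.

    #include <stdint.h>

    static uint32_t
    gnu_hash_reference (const char *s)
    {
      uint32_t h = 5381;
      for (unsigned char c = (unsigned char) *s; c != '\0';
           c = (unsigned char) *++s)
        h = h * 33 + c;
      return h;
    }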
* linux: Add pidfd_getfd | Adhemerval Zanella | 2022-05-17 | 1 | -0/+1

This was added on Linux 5.6 (8649c322f75c96e7ced2fec201e123b2b073bf09) as a way to retrieve a file descriptor from another process through a pidfd (created either with CLONE_PIDFD or pidfd_open). The functionality is similar to recvmsg SCM_RIGHTS.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
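Hedged usage sketch of the new wrapper (glibc declares it in <sys/pidfd.h>; the descriptor number below is illustrative): given a pidfd for another process, duplicate that process's fd 1 into the caller.

    #include <sys/pidfd.h>

    /* Returns a new local descriptor referring to the target process's
       fd 1, or -1 with errno set (requires ptrace-level permission over
       the target and Linux >= 5.6).  */
    static int
    grab_target_stdout (int pidfd)
    {
      return pidfd_getfd (pidfd, 1, 0);   /* flags must currently be 0 */
    }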
* rtld: Remove DL_ARGV_NOT_RELRO and make _dl_skip_args const | Szabolcs Nagy | 2022-05-17 | 1 | -10/+3

_dl_skip_args is always 0, so the target-specific code that modifies argv after relro protection is applied is no longer used. After the patch, relro protection is applied to _dl_argv consistently on all targets.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* Remove dl-librecon.h header. | Adhemerval Zanella | 2022-05-16 | 2 | -27/+0

The Linux version used by i686 and m68k provides three overrides for generic code:

1. DISTINGUISH_LIB_VERSIONS to print additional information when libc5 is used by a dependency.

2. EXTRA_LD_ENVVARS to enable the LD_LIBRARY_VERSION environment variable.

3. EXTRA_UNSECURE_ENVVARS to add two environment variables related to a.out support.

None are really required: it has been some decades since libc5 or a.out support was removed, and Linux has even removed support for a.out files. The LD_LIBRARY_VERSION is also dead code, dl_correct_cache_id is not used anywhere.

Checked on x86_64-linux-gnu and i686-linux-gnu.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
* elf: Remove ldconfig kernel version check | Adhemerval Zanella | 2022-05-16 | 2 | -11/+5

Now that it was removed on libc.so.
* Remove kernel version check | Adhemerval Zanella | 2022-05-16 | 1 | -6/+0

The kernel version check is used to avoid running glibc on older kernels where some syscalls are not available and the fallback code to handle such failures gracefully is not enabled. However, it does not help if the kernel does not correctly advertise its version through the vDSO note, uname, or procfs.

Also, kernel version checks are sometimes not desirable by users, who want to deploy on different systems with different kernel versions knowing the minimum set of syscalls is always present on such systems.

The kernel version check has been removed along with the LD_ASSUME_KERNEL environment variable. The minimum kernel used to build glibc is still provided through the NT_GNU_ABI_TAG ELF note and also printed when libc.so is invoked directly.

Checked on x86_64-linux-gnu.
* csu: Implement and use _dl_early_allocate during static startup | Florian Weimer | 2022-05-16 | 1 | -0/+5

This implements an mmap fallback for a brk failure during TLS allocation.

scripts/tls-elf-edit.py is updated to support the new patching method. The script no longer requires that the input object be of ET_DYN type.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* stdio: Remove the usage of $(fno-unit-at-a-time) for siglist.c | Adhemerval Zanella | 2022-05-13 | 2 | -13/+15

The siglist.c is built with -fno-toplevel-reorder to keep the compiler from reordering the compat assembly directives, due to an assembler issue [1] (fixed on 2.39).

This patch removes the compiler flag by splitting the compat symbol generation into two phases. First, __sys_siglist and __sys_sigabbrev are preprocessed, without any compat symbol directive, to generate an assembly source file. This generated assembly is then used as input for a platform-agnostic siglist.S which then creates the compat definitions. This prevents the compiler from moving any compat directive prior to the _sys_errlist definition itself.

Checked on a make check run-built-tests=no on all affected ABIs.

Reviewed-by: Fangrui Song <maskray@google.com>
* sysdeps: Add 'get_fast_jitter' interface in fast-jitter.h | Noah Goldstein | 2022-04-27 | 1 | -0/+42

'get_fast_jitter' is meant to be used purely for performance purposes. In all cases where it's used, it should be acceptable to get no randomness (see default case). An example use case is in setting jitter for retries between threads at a lock. There is a performance benefit to having jitter, but only if the jitter can be generated very quickly and ultimately there is no serious issue if no jitter is generated.

The implementation generally uses 'HP_TIMING_NOW' iff it is inlined (avoiding any potential syscall paths).

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
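A sketch of the contract described above (illustrative only, not the verbatim fast-jitter.h): return some cheap, low-quality entropy when an inlined timer source is available, and plainly return 0 otherwise, which callers must tolerate.

    #include <stdint.h>

    static inline uint32_t
    get_fast_jitter_sketch (void)
    {
    #if defined __x86_64__ || defined __i386__
      /* Cheap cycle counter; no syscall involved.  */
      return (uint32_t) __builtin_ia32_rdtsc ();
    #else
      return 0;   /* default case: no randomness, which callers accept */
    #endif
    }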