* nptl: Extract <bits/atomic_wide_counter.h> from pthread_cond_common.c (Florian Weimer, 2021-11-17; 9 files, -198/+310)

  And make it an installed header. This addresses a few aliasing violations (which do not seem to result in miscompilation due to the use of atomics), and also enables use of wide counters in other parts of the library. The debug output in nptl/tst-cond22 has been adjusted to print the 32-bit values instead because it avoids a big-endian/little-endian difference.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

* x86-64: Create microbenchmark infrastructure for libmvec (Sunil K Pandey, 2021-11-16; 4 files, -0/+642)

  Add a Python script to generate libmvec microbenchmarks from the input values for each libmvec function, using a skeleton benchmark template. It creates double and float benchmarks with vector lengths 1, 2, 4, 8, and 16 for each libmvec function. Vector length 1 corresponds to the scalar version of the function and is included for vector function performance comparison.

  Co-authored-by: Haochen Jiang <haochen.jiang@intel.com>
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* elf: hidden visibility for __minimal_malloc functions (Adhemerval Zanella, 2021-11-16; 1 file, -0/+5)

  Since b05fae4d8e34, the __minimal_malloc code is used during static startup before PIE self-relocation (_dl_relocate_static_pie), so it requires the same fix done for other objects by 47618209d05a. Checked on aarch64, x86_64, and i686 with and without static-pie.

* elf: Use a temporary file to generate Makefile fragments [BZ #28550] (H.J. Lu, 2021-11-16; 1 file, -2/+8)

  1. Use a temporary file to generate Makefile fragments for DSO sorting tests and use -include on them.
  2. Add Makefile fragments to postclean-generated so that a "make clean" removes the autogenerated fragments and a subsequent "make" regenerates them.

  This partially fixes BZ #28550.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

* dso-ordering-test.py: Put all sources in one directory [BZ #28550] (H.J. Lu, 2021-11-15; 1 file, -13/+34)

  Put all sources for DSO sorting tests in the dso-sort-tests-src directory and compile test relocatable objects with

      $(objpfx)tst-dso-ordering1-dir/tst-dso-ordering1-a.os: $(objpfx)dso-sort-tests-src/tst-dso-ordering1-a.c
              $(compile.c) $(OUTPUT_OPTION)

  to avoid random $< values from $(before-compile) when compiling test relocatable objects with

      $(objpfx)%$o: $(objpfx)%.c $(before-compile); $$(compile-command.c)
      compile-command.c = $(compile.c) $(OUTPUT_OPTION) $(compile-mkdep-flags)
      compile.c = $(CC) $< -c $(CFLAGS) $(CPPFLAGS)

  for 3 "make -j 28" parallel builds on a machine with 112 cores at the same time. This partially fixes BZ #28550.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

* elf: Move LAV_CURRENT to link_lavcurrent.h (Adhemerval Zanella, 2021-11-15; 3 files, -2/+27)

  No functional change.

* Move assignment out of the CAS condition (H.J. Lu, 2021-11-15; 2 files, -8/+6)

  Update

      commit 49302b8fdf9103b6fc0a398678668a22fa19574c
      Author: H.J. Lu <hjl.tools@gmail.com>
      Date:   Thu Nov 11 06:54:01 2021 -0800

          Avoid extra load with CAS in __pthread_mutex_clocklock_common [BZ #28537]

          Replace boolean CAS with value CAS to avoid the extra load.

  and

      commit 0b82747dc48d5bf0871bdc6da8cb6eec1256355f
      Author: H.J. Lu <hjl.tools@gmail.com>
      Date:   Thu Nov 11 06:31:51 2021 -0800

          Avoid extra load with CAS in __pthread_mutex_lock_full [BZ #28537]

          Replace boolean CAS with value CAS to avoid the extra load.

  by moving the assignment out of the CAS condition.

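  The two styles can be illustrated with C11 atomics. This is a hedged sketch, not the glibc code (glibc uses its internal atomic_compare_and_exchange_val_acq macro; cas_val here is a hypothetical stand-in for it):

      #include <stdatomic.h>

      /* Value CAS: returns the value observed in *mem; *mem is set to
         newval only if it contained oldval.  */
      static inline int
      cas_val (atomic_int *mem, int newval, int oldval)
      {
        /* On failure, the C11 CAS writes the observed value back into
           oldval, so no separate atomic load is needed.  */
        atomic_compare_exchange_strong (mem, &oldval, newval);
        return oldval;
      }

      static atomic_int lock_word;

      void
      example (void)
      {
        int oldval;

        /* Style before this commit: assignment buried in the condition.  */
        if ((oldval = cas_val (&lock_word, 1, 0)) != 0)
          return;

        /* Style after this commit: assignment moved out; the generated
           code is the same, but the source is clearer.  */
        oldval = cas_val (&lock_word, 1, 0);
        if (oldval != 0)
          return;
      }
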
* Add a comment for --enable-initfini-array [BZ #27945] (H.J. Lu, 2021-11-13; 1 file, -1/+3)

  Document that --enable-initfini-array is enabled by default in GCC 12; the option can be removed when GCC 12 becomes the minimum requirement.

* tst-tzset: output reason when creating 4GiB file fails (Stafford Horne, 2021-11-13; 1 file, -0/+8)

  Currently, if the temporary file creation fails, the create_tz_file function returns NULL. The NULL pointer is then passed to setenv, which causes a SIGSEGV. Rather than failing with a SIGSEGV, print a warning and exit.

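  A minimal sketch of the guard described above (create_tz_file is simulated here; the real helper lives in the tst-tzset test):

      #include <stdio.h>
      #include <stdlib.h>

      static char *
      create_tz_file (void)
      {
        return NULL; /* Simulate the ~4GiB file creation failing.  */
      }

      int
      main (void)
      {
        char *tzfile = create_tz_file ();
        if (tzfile == NULL)
          {
            /* Report the reason instead of passing NULL to setenv,
               which would crash with SIGSEGV.  */
            fputs ("warning: could not create 4GiB test file\n", stderr);
            exit (77); /* 77 means "unsupported" to the glibc test harness.  */
          }
        setenv ("TZ", tzfile, 1);
        return 0;
      }
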
* Add LLL_MUTEX_READ_LOCK [BZ #28537] (H.J. Lu, 2021-11-12; 1 file, -0/+7)

  A CAS instruction is expensive. From the x86 CPU's point of view, getting a cache line for writing is more expensive than reading. See Appendix A.2 Spinlock in:

      https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xeon-lock-scaling-analysis-paper.pdf

  The full compare and swap will grab the cache line exclusive and cause excessive cache line bouncing. Add LLL_MUTEX_READ_LOCK to do an atomic load and skip the CAS in the spinlock loop if the compare may fail, to reduce cache line bouncing on contended locks.

  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>

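  The underlying test-and-test-and-set idea, as a hedged C11 sketch (simplified; the real change is in nptl's mutex code and uses glibc-internal primitives):

      #include <stdatomic.h>

      static void
      spin_lock (atomic_int *lock)
      {
        int expected;
        for (;;)
          {
            /* A plain atomic load keeps the cache line in shared state,
               so spinning does not bounce the line between cores.  */
            if (atomic_load_explicit (lock, memory_order_relaxed) != 0)
              continue;
            /* Attempt the expensive CAS (which takes the line exclusive)
               only when the read suggests it can succeed.  */
            expected = 0;
            if (atomic_compare_exchange_weak (lock, &expected, 1))
              return;
          }
      }
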
* Avoid extra load with CAS in __pthread_mutex_clocklock_common [BZ #28537] (H.J. Lu, 2021-11-12; 1 file, -5/+5)

  Replace boolean CAS with value CAS to avoid the extra load.

  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>

* Avoid extra load with CAS in __pthread_mutex_lock_full [BZ #28537] (H.J. Lu, 2021-11-12; 1 file, -5/+5)

  Replace boolean CAS with value CAS to avoid the extra load.

  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>

* String: Split memcpy tests so that parallel build is faster (Noah Goldstein, 2021-11-10; 4 files, -298/+367)

  No bug. This commit splits test-memcpy.c into test-memcpy.c and test-memcpy-large.c. The idea is that parallel builds will be able to run both at the same time, speeding up the process.

  Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* x86: Shrink memcmp-sse4.S code size (Noah Goldstein, 2021-11-10; 1 file, -1621/+646)

  No bug. This implementation refactors memcmp-sse4.S primarily with minimizing code size in mind. It does this by removing the lookup-table logic and removing the unrolled check from (256, 512] bytes.

      memcmp-sse4 code size reduction : -3487 bytes
      wmemcmp-sse4 code size reduction: -1472 bytes

  The current memcmp-sse4.S implementation has a large code size cost. This has serious adverse effects on the ICache / ITLB. While in micro-benchmarks the implementation appears fast, traces of real-world code have shown that the speed in micro-benchmarks does not translate when the ICache/ITLB are not primed, and that the cost of the code size has measurable negative effects on overall application performance. See https://research.google/pubs/pub48320/ for more details.

  Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* Support C2X printf %b, %B (Joseph Myers, 2021-11-10; 11 files, -31/+245)

  C2X adds a printf %b format (see <http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2630.pdf>, accepted for C2X) for outputting integers in binary. It also has recommended practice for a corresponding %B format (like %b, but %#B starts the output with 0B instead of 0b). Add support for these formats to glibc.

  One existing test uses %b as an example of an unknown format, to test how glibc printf handles unknown formats; change that to %v. Use of %b and %B as user-registered format specifiers continues to work (and we already have a test that covers that, tst-printfsz.c).

  Note that C2X also has scanf %b support, plus support for binary constants starting 0b in strtol (base 0 and 2) and scanf %i (strtol base 0 and scanf %i coming from a previous paper that added binary integer literals). I intend to implement those features in a separate patch or patches; as discussed in the thread starting at <https://sourceware.org/pipermail/libc-alpha/2020-December/120414.html>, they will be more complicated because they involve adding extra public symbols to ensure compatibility with existing code that might not expect 0b constants to be handled by strtol base 0 and 2 and scanf %i, whereas simply adding a new format specifier poses no such compatibility concerns.

  Note that the actual conversion from integer to string uses existing code in _itoa.c. That code has special cases for bases 8, 10 and 16, probably so that the compiler can optimize division by an integer constant in the code for those bases. If desired, such special cases could easily be added for base 2 as well, but that would be an optimization, not actually needed for these printf formats to work.

  Tested for x86_64 and x86. Also tested with build-many-glibcs.py for aarch64-linux-gnu with GCC mainline to make sure that the test does indeed build with GCC 12 (where format checking warnings are enabled for most of the test).

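  A usage sketch of the new formats (assumes a glibc built with this patch; an older glibc treats %b as an unknown conversion):

      #include <stdio.h>

      int
      main (void)
      {
        printf ("%b\n", 10u);   /* prints 1010 */
        printf ("%#b\n", 10u);  /* prints 0b1010 */
        printf ("%#B\n", 10u);  /* prints 0B1010 */
        return 0;
      }
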
* Update syscall lists for Linux 5.15 (Joseph Myers, 2021-11-10; 28 files, -4/+31)

  Linux 5.15 has one new syscall, process_mrelease (and also enables the clone3 syscall for RV32). It also has a macro __NR_SYSCALL_MASK for Arm, which is not a syscall but matches the pattern used for syscall macro names. Add __NR_SYSCALL_MASK to the names filtered out in the code dealing with syscall lists, update syscall-names.list for the new syscall and regenerate the arch-syscall.h headers with build-many-glibcs.py update-syscalls. Tested with build-many-glibcs.py.

* s390: Use long branches across object boundaries (jgh instead of jh) (Florian Weimer, 2021-11-10; 2 files, -2/+2)

  Depending on the layout chosen by the linker, the 16-bit displacement of the jh instruction is insufficient to reach the target label. Analysis of the linker failure was carried out by Nick Clifton.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
  Reviewed-by: Stefan Liebler <stli@linux.ibm.com>

* Remove the unused +mkdep/+make-deps/s-proto.S/s-proto-cancel.S (H.J. Lu, 2021-11-10; 5 files, -29/+0)

  Since

      commit d73f5331ce5370ca5a879229e3842f5de98689cd
      Author: Roland McGrath <roland@gnu.org>
      Date:   Fri May 2 02:20:45 2003 +0000

          2003-05-01  Roland McGrath  <roland@redhat.com>

  the dependency is generated by passing -MD -MF to the compiler. Remove the unused +mkdep, +make-deps, s-proto.S and s-proto-cancel.S. This fixes BZ #28554.

* Fix build and check failures after b05fae4d8e34 (Adhemerval Zanella, 2021-11-09; 2 files, -1/+2)

  The include cleanup in dl-minimal.c removed too much for some targets. Also, for Hurd, __sbrk is removed from localplt.data now that tunables allocate memory through mmap. Checked with a build for all affected architectures.

* elf: Use the minimal malloc on tunables_strdup (Adhemerval Zanella, 2021-11-09; 5 files, -119/+157)

  The rtld_malloc functions are moved to their own file so they can be used in csu code. The functions are also renamed to __minimal_* (since they are now used not only in loader code).

  Using __minimal_malloc in tunables_strdup() avoids potential issues with sbrk() calls while processing the tunables (I saw sporadic elf/tst-dso-ordering9 failures on powerpc64le, with different tests failing due to ASLR). Also, using __minimal_malloc over plain mmap optimizes the memory allocation in both the static and dynamic case, since it will use any unused space in the last page of the data segments (avoiding an mmap() call) or left over from the previous mmap() call.

  Checked on x86_64-linux-gnu, i686-linux-gnu, and powerpc64le-linux-gnu.

  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>

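  A hedged sketch of that allocation strategy, not the actual __minimal_malloc implementation: serve requests from leftover space at the end of the current mapping, and fall back to a fresh anonymous mmap only when that space runs out (there is no free):

      #include <stddef.h>
      #include <sys/mman.h>

      static void *alloc_ptr, *alloc_end;

      static void *
      minimal_malloc (size_t n)
      {
        /* Round up for alignment, assuming sizeof (max_align_t) is a
           power of two.  */
        n = (n + sizeof (max_align_t) - 1) & ~(sizeof (max_align_t) - 1);
        if (alloc_ptr == NULL
            || (size_t) ((char *) alloc_end - (char *) alloc_ptr) < n)
          {
            size_t pagesz = 4096; /* assumption: 4 KiB pages */
            size_t len = (n + pagesz - 1) & ~(pagesz - 1);
            void *p = mmap (NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED)
              return NULL;
            alloc_ptr = p;
            alloc_end = (char *) p + len;
          }
        void *res = alloc_ptr;
        alloc_ptr = (char *) alloc_ptr + n;
        return res;
      }
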
* Fix memmove call in vfprintf-internal.c:group_number (Joseph Myers, 2021-11-08; 1 file, -1/+2)

  A recent GCC mainline change introduces errors of the form:

      vfprintf-internal.c: In function 'group_number':
      vfprintf-internal.c:2093:15: error: 'memmove' specified bound between 9223372036854775808 and 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Werror=stringop-overflow=]
       2093 |           memmove (w, s, (front_ptr -s) * sizeof (CHAR_T));
            |           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

  This is a genuine bug in the glibc code: s > front_ptr is always true at this point in the code, and the intent is clearly for the subtraction to be the other way round. The other arguments to the memmove call here also appear to be wrong; w and s point just *after* the destination and source for copying the rest of the number, so the size needs to be subtracted to get appropriate pointers for the copying. Adjust the memmove call to conform to the apparent intent of the code, so fixing the -Wstringop-overflow error.

  Now, if the original code were ever executed, a buffer overrun would result. However, I believe this code (introduced in commit edc1686af0c0fc2eb535f1d38cdf63c1a5a03675, "vfprintf: Reuse work_buffer in group_number", so in glibc 2.26) is unreachable in prior glibc releases (so there is no need for a bug in Bugzilla, no need to consider any backports unless someone wants to build older glibc releases with GCC 12 and no possibility of this buffer overrun resulting in a security issue).

  work_buffer is 1000 bytes / 250 wide characters. This case is only reachable if an initial part of the number, plus a grouped copy of the rest of the number, fail to fit in that space; that is, if the grouped number fails to fit in the space. In the wide character case, grouping is always one wide character, so even with a locale (of which there aren't any in glibc) grouping every digit, a number would need to occupy at least 125 wide characters to overflow, and a 64-bit integer occupies at most 23 characters in octal including a leading 0. In the narrow character case, the multibyte encoding of the grouping separator would need to be at least 42 bytes to overflow, again supposing grouping every digit, but MB_LEN_MAX is 16. So even if we admit the case of artificially constructed locales not shipped with glibc, given that such a locale would need to use one of the character sets supported by glibc, this code cannot be reached at present. (And POSIX only actually specifies the ' flag for grouping for decimal output, though glibc acts on it for other bases as well.)

  With binary output (if you consider use of grouping there to be valid), you'd need a 15-byte multibyte character for overflow; I don't know if any supported character set has such a character (if, again, we admit constructed locales using grouping every digit and a grouping separator chosen to have a multibyte encoding as long as possible, as well as accepting use of grouping with binary), but given that we have this code at all (clearly it's not *correct*, or in accordance with the principle of avoiding arbitrary limits, to skip grouping on running out of internal space like that), I don't think it should need any further changes for binary printf support to go in.

  On the other hand, support for large sizes of _BitInt in printf (see the N2858 proposal) *would* require something to be done about such arbitrary limits (presumably using dynamic allocation in printf again, for sufficiently large _BitInt arguments only - currently only floating-point uses dynamic allocation, and, as previously discussed, that could actually be replaced by bounded allocation given smarter code).

  Tested with build-many-glibcs.py for aarch64-linux-gnu (GCC mainline). Also tested natively for x86_64.

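  A hedged sketch of the pointer arithmetic being corrected (simplified; not the actual group_number code):

      #include <string.h>

      /* w and s point just past the destination and source of the
         remaining copy, so both the size expression and the pointers
         must be adjusted; s - n is where the uncopied part starts.  */
      static void
      copy_rest (char *w, char *s, char *front_ptr)
      {
        size_t n = (size_t) (s - front_ptr); /* s > front_ptr holds here.  */
        memmove (w - n, s - n, n);
      }
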
* locale: Fix localedata/sort-test undefined behavior (Adhemerval Zanella, 2021-11-08; 1 file, -3/+8)

  The collate-test.c triggers UB with a signed integer overflow, which results in an error on some architectures (powerpc32). Checked on x86_64, i686, and powerpc.

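  The usual fix for this class of bug, as a hedged generic sketch (not the exact collate-test.c change): a comparison written as a subtraction overflows for operands of opposite sign near the int limits, so compare explicitly instead:

      /* "return a - b;" is UB when the difference does not fit in int.  */
      static int
      compare_ints (const void *p1, const void *p2)
      {
        int a = *(const int *) p1;
        int b = *(const int *) p2;
        return (a > b) - (a < b); /* -1, 0, or 1; never overflows.  */
      }
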
* test-memcpy.c: Double TIMEOUT to (8 * 60) (H.J. Lu, 2021-11-07; 2 files, -1/+4)

  commit d585ba47fcda99fdf228e3e45a01b11a15efbc5a
  Author: Noah Goldstein <goldstein.w.n@gmail.com>
  Date:   Mon Nov 1 00:49:48 2021 -0500

      string: Make tests bidirectional test-memcpy.c

      This commit updates the memcpy tests to test both dst > src and dst < src. This is because there is logic in the code based on the

      Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
      Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

  significantly increased the number of tests. On Intel Core i7-1165G7, test-memcpy takes 120 seconds to run when the machine is idle. Double TIMEOUT to (8 * 60) for test-memcpy to avoid a timeout when the machine is under heavy load.

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>

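  The mechanism, as a sketch of the glibc test convention (see support/test-driver.c): a test overrides the default timeout by defining TIMEOUT before including the driver:

      #define TIMEOUT (8 * 60) /* seconds */

      static int
      do_test (void)
      {
        /* ... the memcpy tests ... */
        return 0;
      }

      #include <support/test-driver.c>
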
* hurd: Remove unused __libc_close_range (Samuel Thibault, 2021-11-07; 1 file, -1/+0)

  That was just cargo-culted.

* hurd: Implement close_range and closefrom (Sergey Bugaev, 2021-11-07; 6 files, -1/+135)

  The close_range () function implements the same API as the Linux and FreeBSD syscalls. It operates atomically and reliably. The specified upper bound is clamped to the actual size of the file descriptor table; it is expected that the most common use case is with last = UINT_MAX.

  Like in the Linux syscall, it is also possible to pass the CLOSE_RANGE_CLOEXEC flag to mark the file descriptors in the range cloexec instead of actually closing them.

  Also, add a Hurd version of the closefrom () function. Since, unlike on Linux, close_range () cannot fail due to being unsupported by the running kernel, a fallback implementation is never necessary.

  Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
  Message-Id: <20211106153524.82700-1-bugaevc@gmail.com>

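  A usage sketch (assumes a glibc that provides these calls, e.g. this Hurd implementation or glibc 2.34+ on a sufficiently new Linux kernel):

      #define _GNU_SOURCE
      #include <unistd.h>
      #include <limits.h>

      int
      main (void)
      {
        /* Close every descriptor from 3 up; UINT_MAX is clamped to the
           actual size of the descriptor table.  */
        close_range (3, UINT_MAX, 0);

        /* Convenience wrapper with the same effect.  */
        closefrom (3);
        return 0;
      }
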
* x86: Double size of ERMS rep_movsb_threshold in dl-cacheinfo.h (Noah Goldstein, 2021-11-06; 2 files, -14/+20)

  No bug. This patch doubles the rep_movsb_threshold when using ERMS. Based on benchmarks, the vector copy loop, especially now that it handles 4k aliasing, is better for these medium-range sizes. On Skylake with ERMS:

      Size, Align1, Align2, dst>src, (rep movsb) / (vec copy)
      4096,      0,      0,       0, 0.975
      4096,      0,      0,       1, 0.953
      4096,     12,      0,       0, 0.969
      4096,     12,      0,       1, 0.872
      4096,     44,      0,       0, 0.979
      4096,     44,      0,       1, 0.83
      4096,      0,     12,       0, 1.006
      4096,      0,     12,       1, 0.989
      4096,      0,     44,       0, 0.739
      4096,      0,     44,       1, 0.942
      4096,     12,     12,       0, 1.009
      4096,     12,     12,       1, 0.973
      4096,     44,     44,       0, 0.791
      4096,     44,     44,       1, 0.961
      4096,   2048,      0,       0, 0.978
      4096,   2048,      0,       1, 0.951
      4096,   2060,      0,       0, 0.986
      4096,   2060,      0,       1, 0.963
      4096,   2048,     12,       0, 0.971
      4096,   2048,     12,       1, 0.941
      4096,   2060,     12,       0, 0.977
      4096,   2060,     12,       1, 0.949
      8192,      0,      0,       0, 0.85
      8192,      0,      0,       1, 0.845
      8192,     13,      0,       0, 0.937
      8192,     13,      0,       1, 0.939
      8192,     45,      0,       0, 0.932
      8192,     45,      0,       1, 0.927
      8192,      0,     13,       0, 0.621
      8192,      0,     13,       1, 0.62
      8192,      0,     45,       0, 0.53
      8192,      0,     45,       1, 0.516
      8192,     13,     13,       0, 0.664
      8192,     13,     13,       1, 0.659
      8192,     45,     45,       0, 0.593
      8192,     45,     45,       1, 0.575
      8192,   2048,      0,       0, 0.854
      8192,   2048,      0,       1, 0.834
      8192,   2061,      0,       0, 0.863
      8192,   2061,      0,       1, 0.857
      8192,   2048,     13,       0, 0.63
      8192,   2048,     13,       1, 0.629
      8192,   2061,     13,       0, 0.627
      8192,   2061,     13,       1, 0.62

  Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* x86: Optimize memmove-vec-unaligned-erms.S (Noah Goldstein, 2021-11-06; 6 files, -224/+381)

  No bug. The optimizations are as follows:

  1) Always align entry to 64 bytes. This makes behavior more predictable and makes other frontend optimizations easier.

  2) Make the L(more_8x_vec) cases 4k aliasing aware. This can have significant benefits in the case that: 0 < (dst - src) < [256, 512].

  3) Align before `rep movsb`. For ERMS this is roughly a [0, 30%] improvement and for FSRM [-10%, 25%].

  In addition to these primary changes, there is general cleanup throughout to optimize the aligning routines and control flow logic.

  Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* benchtests: Add partial overlap case in bench-memmove-walk.c (Noah Goldstein, 2021-11-06; 1 file, -15/+46)

  This commit adds a new partial overlap benchmark. This is generally the most interesting performance case for memmove and was missing.

  Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* benchtests: Add additional cases to bench-memcpy.c and bench-memmove.c (Noah Goldstein, 2021-11-06; 2 files, -9/+66)

  This commit adds more benchmarks for the common memcpy/memmove benchmarks. The most significant cases are the half-page offsets. The current versions leave dst and src near page aligned, which leads to false 4k aliasing on x86_64. This can add noise due to false dependencies from one run to the next. As well, this seems like more of an edge case than the common case, so it shouldn't be the only thing benchmarked.

  Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* string: Make tests bidirectional test-memcpy.c (Noah Goldstein, 2021-11-06; 2 files, -28/+214)

  This commit updates the memcpy tests to test both dst > src and dst < src. This is because there is logic in the code based on the

  Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* Remove the last trace of generate-md5 [BZ #28554] (H.J. Lu, 2021-11-06; 1 file, -1/+1)

  generate-md5 was removed by

      commit d73f5331ce5370ca5a879229e3842f5de98689cd
      Author: Roland McGrath <roland@gnu.org>
      Date:   Fri May 2 02:20:45 2003 +0000

          2003-05-01  Roland McGrath  <roland@redhat.com>

  Remove its last trace. This fixes BZ #28554.

* Revert "benchtests: Add acosf function to bench-math" (Sunil K Pandey, 2021-11-05; 2 files, -2710/+0)

  This reverts commit 79d0fc65395716c1d95931064c7bf37852203c66.

* Configure GCC with --enable-initfini-array [BZ #27945] (H.J. Lu, 2021-11-05; 1 file, -0/+1)

  Starting from GCC 12, the .init_array and .fini_array sections are enabled unconditionally by

      commit 13a39886940331149173b25d6ebde0850668d8b9
      Author: H.J. Lu <hjl.tools@gmail.com>
      Date:   Tue Jun 8 16:09:24 2021 -0700

          Always enable DT_INIT_ARRAY/DT_FINI_ARRAY on Linux

  Configure GCC with --enable-initfini-array to enable them when using GCC release branches. Fixes BZ #27945.

* elf: Earlier missing dynamic segment check in _dl_map_object_from_fd (Florian Weimer, 2021-11-05; 1 file, -10/+12)

  Separated debuginfo files have PT_DYNAMIC with p_filesz == 0. We need to check for that before the _dl_map_segments call, because that could attempt to write to mappings that extend beyond the end of the file, resulting in SIGBUS.

  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

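  A hedged sketch of the check (simplified; the real code is in elf/dl-load.c):

      #include <link.h>   /* ElfW */
      #include <stddef.h>

      /* Reject a dynamic segment whose file size is zero, as found in
         separated debuginfo files, before any segments are mapped.  */
      static int
      has_usable_dynamic_segment (const ElfW(Phdr) *phdr, size_t phnum)
      {
        for (size_t i = 0; i < phnum; ++i)
          if (phdr[i].p_type == PT_DYNAMIC)
            return phdr[i].p_filesz != 0;
        return 0; /* No PT_DYNAMIC at all.  */
      }
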
* gconv: Do not emit spurious NUL character in ISO-2022-JP-3 (bug 28524) (Nikita Popov, 2021-11-04; 3 files, -9/+84)

  Bugfix 27256 has introduced another issue: In conversion from the ISO-2022-JP-3 encoding, it is possible to force iconv to emit an extra NUL character on internal state reset. To do this, it is sufficient to feed iconv an escape sequence which switches the active character set. The simplified check 'data->__statep->__count != ASCII_set' introduced by the aforementioned bugfix picks up that case and behaves as if a '\0' character had been queued, thus emitting it.

  To eliminate this issue, these steps are taken:

  * Restore the original condition '(data->__statep->__count & ~7) != ASCII_set'. It is necessary since bits 0-2 may contain the number of buffered input characters.
  * Check that the queued character is not NUL. A similar step is taken for the main conversion loop.

  The bundled test case follows this logic:

  * Try to convert an ISO-2022-JP-3 escape sequence switching the active character set
  * Reset the internal state by providing NULL as the input buffer
  * Ensure that nothing has been converted.

  Signed-off-by: Nikita Popov <npv1310@gmail.com>

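  A usage sketch of the state-reset path the test exercises: calling iconv with a NULL input buffer flushes any pending shift state into the output buffer, and with the fix no spurious NUL may be emitted (error handling abridged; the escape sequence shown is assumed to switch the active character set):

      #include <iconv.h>
      #include <stdio.h>

      int
      main (void)
      {
        iconv_t cd = iconv_open ("UTF-8", "ISO-2022-JP-3");
        char in[] = "\033$(O"; /* switch the active character set */
        char out[64];
        char *inp = in, *outp = out;
        size_t inleft = sizeof in - 1, outleft = sizeof out;

        iconv (cd, &inp, &inleft, &outp, &outleft);
        /* Reset the internal state; nothing should be converted.  */
        iconv (cd, NULL, NULL, &outp, &outleft);
        printf ("%zu bytes emitted\n", sizeof out - outleft);
        iconv_close (cd);
        return 0;
      }
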
* [powerpc] Tighten constraints for asm constant parameters (Paul A. Clarke, 2021-11-03; 3 files, -9/+9)

  There are a few places where only known numeric values are acceptable for `asm` parameters, yet the constraint "i" is used. "i" can include "symbolic constants whose values will be known only at assembly time or later." Use "n" instead of "i" where known numeric values are required.

  Suggested-by: Segher Boessenkool <segher@kernel.crashing.org>
  Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>

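  A generic illustration of the two constraints (hedged; not the actual powerpc code):

      /* "n" only accepts integer constants whose numeric value is known
         at compile time, e.g. an enum or a literal; "i" would additionally
         accept symbolic constants resolved at assembly time or later.  */
      static inline void
      emit_constant (void)
      {
        enum { SHIFT = 3 };
        asm volatile ("# immediate operand: %0" : : "n" (SHIFT));
      }
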
* elf: Do not run DSO sorting if tunables is not enabled (Adhemerval Zanella, 2021-11-03; 1 file, -0/+2)

  Since the algorithm selection requires tunables. Checked on x86_64-linux-gnu with --enable-tunables=no.

* riscv: Build with -mno-relax if linker does not support R_RISCV_ALIGN (Adhemerval Zanella, 2021-11-03; 3 files, -0/+52)

  This allows building both glibc and the tests with lld (since lld does not support R_RISCV_ALIGN linker relaxation). Checked with a build for riscv32-linux-gnu-rv32imafdc-ilp32d and riscv64-linux-gnu-rv64imafdc-lp64d.

  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
  Reviewed-by: Fangrui Song <maskray@google.com>

* x86-64: Replace movzx with movzbl (Fangrui Song, 2021-11-02; 2 files, -4/+4)

  Clang cannot assemble movzx in the AT&T dialect mode:

      ../sysdeps/x86_64/strcmp.S:2232:16: error: invalid operand for instruction
              movzx (%rsi), %ecx
                    ^~~~

  Change movzx to movzbl, which follows the AT&T dialect and is used elsewhere in the file.

  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* regex: Unnest nested functions in regcomp.c (Fangrui Song, 2021-11-02; 1 file, -223/+241)

  This refactor moves four functions out of a nested scope and converts them into static always_inline functions. collseqwc, table_size, symb_table, and extra are now initialized to zero because they are passed as function arguments.

  On x86-64, .text is 16 bytes larger, likely due to the 4 stores. This is nothing compared to the amount of work that regcomp has to do looking up the collation weights, or other functions.

  If the non-buildable `sysdeps/generic/dl-machine.h` doesn't count, this patch removes the last `auto inline` usage from glibc.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
  Reviewed-by: Carlos O'Donell <carlos@redhat.com>

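  The shape of the refactor, as a hedged generic sketch (not the regcomp.c code): a GNU C nested function that captured a local implicitly is hoisted to file scope, and the captured state becomes an explicit parameter:

      /* Before (GNU extension, capture is implicit):
           static void build (void)
           {
             int table[16];
             auto int lookup (int i) { return table[i & 15]; }
             ...
           }  */

      /* After: standard C, state passed explicitly.  */
      static inline int __attribute__ ((always_inline))
      lookup (const int *table, int i)
      {
        return table[i & 15];
      }

      static void
      build (void)
      {
        int table[16] = { 0 };
        int v = lookup (table, 3);
        (void) v;
      }
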
* Use Linux 5.15 in build-many-glibcs.py (Joseph Myers, 2021-11-02; 1 file, -1/+1)

  This patch makes build-many-glibcs.py use Linux 5.15. Tested with build-many-glibcs.py (host-libraries, compilers and glibcs builds).

* elf: Assume disjointed .rela.dyn and .rela.plt for loader (Adhemerval Zanella, 2021-11-02; 1 file, -23/+9)

  The patch removes the ELF_DURING_STARTUP optimization and assumes .rel.dyn and .rel.plt might not be contiguous. This allows some code simplification, since relocation will be handled independently of where it is done during bootstrap.

  At least on x86_64, I can not measure any performance implications. Running the command

      LD_DEBUG=statistics ./elf/ld.so ./libc.so

  10000 times and filtering the "total startup time in dynamic loader" result, the geometric mean is:

                     patched   master
      Ryzen 7 5900x    24140    24952
      i7-4510U         45957    45982

  (The results do show some variation; I did not do any statistical analysis.)

  It also allows building arm with lld, since it inserts ".ARM.exidx" between ".rel.dyn" and ".rel.plt" for the loader.

  Checked on x86_64-linux-gnu and arm-linux-gnueabihf.

  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* i386: Explain why __HAVE_64B_ATOMICS has to be 0 (Florian Weimer, 2021-11-02; 1 file, -0/+4)

* benchtests: Add hypotf (Adhemerval Zanella, 2021-11-01; 2 files, -0/+1008)

  Based on random input arguments. About 85% of the tuples have exponents of the two arguments close together (within a +-1 range).

  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* benchtests: Make hypot input random (Adhemerval Zanella, 2021-11-01; 1 file, -12/+1003)

  Use random inputs instead of inputs based on the algorithm implementation details. About 85% of the tuples have exponents of the two arguments close together (within a +-1 range).

  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* arm: Use have-mtls-dialect-gnu2 to check for ARM TLS descriptors support (Adhemerval Zanella, 2021-11-01; 1 file, -6/+1)

  The lld linker does not support TLSDESC for arm. The have-arm-tls-desc check is a leftover of 56583289b1 to support NaCl.

  Reviewed-by: Fangrui Song <maskray@google.com>

* arm: Use internal symbol for _dl_argv on _dl_start_user (Adhemerval Zanella, 2021-11-01; 1 file, -1/+1)

  lld does not support R_ARM_GOTOFF32 to a preemptible symbol (_dl_argv has default visibility). Use the internal alias instead. (One option would be to use HIDDEN_JUMPTARGET, but the macro is not defined for !__ASSEMBLER__, and I made this patch arm-specific to avoid requiring extensive checks on other architectures for whether this might break something.) Checked on arm-linux-gnueabihf.

  Reviewed-by: Fangrui Song <maskray@google.com>

* x86-64: Remove Prefer_AVX2_STRCMP (H.J. Lu, 2021-11-01; 5 files, -15/+2)

  Remove Prefer_AVX2_STRCMP to enable EVEX strcmp. When comparing 2 32-byte strings, EVEX strcmp has been improved to require 1 load, 1 VPTESTM, 1 VPCMP, 1 KMOVD and 1 INCL instead of 2 loads, 3 VPCMPs, 2 KORDs, 1 KMOVD and 1 TESTL, while AVX2 strcmp requires 1 load, 2 VPCMPEQs, 1 VPMINU, 1 VPMOVMSKB and 1 TESTL. EVEX strcmp is now faster than AVX2 strcmp by up to 40% on Tiger Lake and Ice Lake.

* x86-64: Improve EVEX strcmp with masked load (H.J. Lu, 2021-11-01; 1 file, -218/+243)

  In strcmp-evex.S, to compare 2 32-byte strings, replace

      VMOVU   (%rdi, %rdx), %YMM0
      VMOVU   (%rsi, %rdx), %YMM1
      /* Each bit in K0 represents a mismatch in YMM0 and YMM1.  */
      VPCMP   $4, %YMM0, %YMM1, %k0
      VPCMP   $0, %YMMZERO, %YMM0, %k1
      VPCMP   $0, %YMMZERO, %YMM1, %k2
      /* Each bit in K1 represents a NULL in YMM0 or YMM1.  */
      kord    %k1, %k2, %k1
      /* Each bit in K1 represents a NULL or a mismatch.  */
      kord    %k0, %k1, %k1
      kmovd   %k1, %ecx
      testl   %ecx, %ecx
      jne     L(last_vector)

  with

      VMOVU   (%rdi, %rdx), %YMM0
      VPTESTM %YMM0, %YMM0, %k2
      /* Each bit cleared in K1 represents a mismatch or a null CHAR
         in YMM0 and 32 bytes at (%rsi, %rdx).  */
      VPCMP   $0, (%rsi, %rdx), %YMM0, %k1{%k2}
      kmovd   %k1, %ecx
      incl    %ecx
      jne     L(last_vector)

  It makes EVEX strcmp faster than AVX2 strcmp by up to 40% on Tiger Lake and Ice Lake.

  Co-Authored-By: Noah Goldstein <goldstein.w.n@gmail.com>

* benchtests: Add acosf function to bench-math (Sunil K Pandey, 2021-10-29; 2 files, -0/+2710)

  Add the acosf function to bench-math and copy acosf-inputs to benchtests. The motivation for this patch is to prepare for the upcoming new libmvec functions; the float and double versions of libmvec functions stay together.

  The acosf-inputs file is generated from the acos-inputs file using the following scaling formula:

      f = d * (FLT_MAX/DBL_MAX)

  where d is the input (double) and f is the output (float). If a scaled float value is a duplicate in the new input file, the nextafterf() function is used to find the next float value, ensuring no duplicates.

  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

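  A hedged sketch of that scaling (not the actual generator script; deduplication is simplified to consecutive values):

      #include <float.h>
      #include <math.h>
      #include <stdio.h>

      static float
      scale_input (double d, float prev)
      {
        float f = (float) (d * (FLT_MAX / DBL_MAX));
        while (f == prev)               /* bump duplicates */
          f = nextafterf (f, INFINITY);
        return f;
      }

      int
      main (void)
      {
        printf ("%a\n", scale_input (1.0, 0.0f));
        return 0;
      }
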