path: root/sysdeps/x86_64
* elf: Fix slow tls access after dlopen [BZ #19924]  (Szabolcs Nagy, 2023-09-01; 1 file, -2/+2)

In short: __tls_get_addr checks the global generation counter and, if
the current dtv is older, then _dl_update_slotinfo updates the dtv up
to the generation of the accessed module. So if the global generation
is newer than the generation of the module, __tls_get_addr keeps
hitting the slow dtv update path. The dtv update path includes a number
of checks to see if any update is needed, and this already causes a
measurable tls access slowdown after dlopen.

It may be possible to detect an up-to-date dtv faster. But if there are
many modules loaded (> TLS_SLOTINFO_SURPLUS), then this requires at
least walking the slotinfo list.

This patch tries to update the dtv to the global generation instead, so
after a dlopen the tls access slow path is only hit once. The modules
with a larger generation than the accessed one were not necessarily
synchronized before, so additional synchronization is needed.

This patch uses acquire/release synchronization when accessing the
generation counter. Note: in the x86_64 version of dl-tls.c the
generation is only loaded once, since a relaxed mo load is not faster
than an acquire mo load.

I have not benchmarked this. Tested by Adhemerval Zanella on aarch64,
powerpc, sparc, x86, who reported that it fixes the performance issue
of bug 19924.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

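A minimal, self-contained model of the scheme described above (not
glibc's actual dl-tls.c; the names global_gen, cached_gen, and the
helper are invented for illustration): the per-thread generation is
brought all the way up to the global generation, so repeated TLS
accesses after one dlopen hit the slow path only once.

```c
#include <stdatomic.h>
#include <stddef.h>

static _Atomic size_t global_gen;        /* bumped by dlopen with a release store */
static _Thread_local size_t cached_gen;  /* models dtv[0].counter */

static void
update_slotinfo_up_to (size_t gen)
{
  /* ... walk the slotinfo list, refresh/resize dtv entries ... */
  cached_gen = gen;                      /* record the *global* generation */
}

void *
tls_access_model (size_t module_id)
{
  (void) module_id;
  /* Acquire pairs with dlopen's release store of global_gen, so the
     slotinfo updates published before the bump are visible here.  */
  size_t gen = atomic_load_explicit (&global_gen, memory_order_acquire);
  if (cached_gen != gen)
    update_slotinfo_up_to (gen);         /* slow path: at most once per dlopen */
  /* ... fast path: return this module's TLS block ... */
  return NULL;
}
```
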
* x86_64: Add log1p with FMA  (H.J. Lu, 2023-08-21; 3 files, -0/+35)

On Skylake, it changes log1p bench performance by:

            Before    After     Improvement
    max     63.349    58.347      8%
    min      4.448     5.651    -30%
    mean    12.0674   10.336     14%

The minimum code path is:

    if (hx < 0x3FDA827A)                          /* x < 0.41422  */
      {
        if (__glibc_unlikely (ax >= 0x3ff00000))  /* x <= -1.0 */
          {
            ...
          }
        if (__glibc_unlikely (ax < 0x3e200000))   /* |x| < 2**-29 */
          {
            math_force_eval (two54 + x);          /* raise inexact */
            if (ax < 0x3c900000)                  /* |x| < 2**-54 */
              {
                ...
              }
            else
              return x - x * x * 0.5;

FMA and non-FMA code sequences look similar. Non-FMA version is
slightly faster.

Since log1p is called by asinh and atanh, it improves asinh performance
by:

            Before    After     Improvement
    max     75.645    63.135     16%
    min     10.074    10.071      0%
    mean    15.9483   14.9089     6%

and improves atanh performance by:

            Before    After     Improvement
    max     91.768    75.081     18%
    min     15.548    13.883     10%
    mean    18.3713   16.8011     8%

* x86_64: Add expm1 with FMA  (H.J. Lu, 2023-08-14; 3 files, -0/+48)

On Skylake, it improves expm1 bench performance by:

            Before    After     Improvement
    max     70.204    68.054      3%
    min     20.709    16.2       22%
    mean    22.1221   16.7367    24%

NB: Add

    extern long double __expm1l (long double);
    extern long double __expm1f128 (long double);

for __typeof (__expm1l) and __typeof (__expm1f128) when __expm1 is
defined, since __expm1 may be expanded in their declarations, which
causes the build failure.

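For context, the usual shape of such an FMA variant in glibc is to
redefine the internal symbol and re-include the generic implementation.
The sketch below is illustrative only; the file name, include path, and
section name are assumptions based on the common pattern, not lifted
from this commit.

```c
/* sysdeps/x86_64/fpu/multiarch/s_expm1-fma.c (sketch, assumed layout) */
#define __expm1 __expm1_fma
#define SECTION __attribute__ ((section (".text.fma")))

/* Because __expm1 is now a macro, any later declaration that mentions
   it would be rewritten too -- hence the commit's explicit prototypes
   for __expm1l and __expm1f128 before __typeof is applied to them.  */
#include <sysdeps/ieee754/dbl-64/s_expm1.c>
```
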
* x86_64: Add log2 with FMA  (H.J. Lu, 2023-08-11; 3 files, -0/+48)

On Skylake, it improves log2 bench performance by:

            Before    After    Improvement
    max     208.779   63.827    69%
    min       9.977    6.55     34%
    mean     10.366    6.8191   34%

* x86_64: Sort fpu/multiarch/Makefile  (H.J. Lu, 2023-08-10; 1 file, -20/+74)

Sort Makefile variables using scripts/sort-makefile-lines.py. No code
generation changes observed in libm. No regressions on x86_64.

* x86_64: Fix build with --disable-multiarch (BZ 30721)  (Adhemerval Zanella, 2023-08-10; 3 files, -1/+5)

With multiarch disabled, the default memmove implementation provides
the fortify routines for memcpy, mempcpy, and memmove. However, it does
not provide the internal hidden definitions used when building with
fortify enabled. memset has a similar issue.

Checked on x86_64-linux-gnu building with different options: default
and --disable-multi-arch plus default, --disable-default-pie,
--enable-fortify-source={2,3}, and --enable-fortify-source={2,3} with
--disable-default-pie.

Tested-by: Andreas K. Huettel <dilfridge@gentoo.org>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>

* Update x86_64 libm-test-ulps (x32 ABI)  (Andreas K. Hüttel, 2023-07-19; 1 file, -13/+14)

Based on feedback by Mike Gilbert <floppym@gentoo.org>: on
Linux-6.1.38-dist, x86_64, AMD Phenom(tm) II X6 1055T Processor, built
with -march=amdfam10, failures occur for the x32 ABI.

Signed-off-by: Andreas K. Hüttel <dilfridge@gentoo.org>

* configure: Use autoconf 2.71  (Siddhesh Poyarekar, 2023-07-17; 2 files, -21/+27)

Bump autoconf requirement to 2.71 to allow regenerating configure on
more recent distributions. autoconf 2.71 has been in Fedora since F36
and is the current version in Debian stable (bookworm). It appears to
be current in Gentoo as well.

All sysdeps configure and preconfigure scripts have also been
regenerated; all changes are trivial transformations that do not affect
functionality.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>

* wchar: Avoid PLT entries with _FORTIFY_SOURCE  (Frédéric Bérat, 2023-07-05; 1 file, -0/+4)

The change is meant to avoid unwanted PLT entries for the wmemset and
wcrtomb routines when _FORTIFY_SOURCE is set. On top of that, ensure
that *_chk routines have their hidden builtin definitions available.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>

* string: Ensure *_chk routines have their hidden builtin definition available  (Frédéric Bérat, 2023-07-05; 8 files, -0/+20)

If libc_hidden_builtin_{def,proto} isn't properly set for *_chk
routines, there are unwanted PLT entries in libc.so.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>

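The pattern at stake looks roughly like the sketch below (glibc-internal
context assumed; the placement and the simplified __memcpy_chk body are
illustrative, not the exact source). The proto in the internal header
plus the def next to the implementation let internal callers bind
directly to the hidden symbol instead of going through the PLT.

```c
/* include/string.h (internal header), sketch: */
libc_hidden_builtin_proto (__memcpy_chk)

/* debug/memcpy_chk.c, simplified: */
void *
__memcpy_chk (void *dst, const void *src, size_t len, size_t dstlen)
{
  if (__glibc_unlikely (dstlen < len))
    __chk_fail ();
  return memcpy (dst, src, len);
}
libc_hidden_builtin_def (__memcpy_chk)
```
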
* ld.so: Always use MAP_COPY to map the first segment [BZ #30452]  (H.J. Lu, 2023-06-30; 3 files, -0/+9)

The first segment in a shared library may be read-only, not executable.
To support LD_PREFER_MAP_32BIT_EXEC on such shared libraries, we also
check MAP_DENYWRITE to decide if MAP_32BIT should be passed to mmap.
Normally the first segment is mapped with MAP_COPY, which is defined as
(MAP_PRIVATE | MAP_DENYWRITE). But if the segment alignment is greater
than the page size, MAP_COPY isn't used, in order to allocate enough
space to ensure that the segment can be properly aligned. Map the first
segment with MAP_COPY in this case too, to fix BZ #30452.

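A minimal model of the flag logic described above (not the actual
elf/dl-map-segments.h code; the helper and its parameter are invented
for illustration):

```c
#define _GNU_SOURCE
#include <sys/mman.h>

#ifndef MAP_COPY
# define MAP_COPY (MAP_PRIVATE | MAP_DENYWRITE)
#endif

static int
first_segment_mmap_flags (int prefer_map_32bit_exec)
{
  int flags = MAP_COPY;   /* now used even for over-aligned segments */
  /* MAP_DENYWRITE (rather than PROT_EXEC) marks the first segment,
     which may be read-only, so the 32-bit preference still applies.  */
  if (prefer_map_32bit_exec && (flags & MAP_DENYWRITE))
    flags |= MAP_32BIT;
  return flags;
}
```
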
* x86: Make dl-cache.h and readelflib.c not Linux-specific  (Sergey Bugaev, 2023-06-26; 1 file, -0/+51)

These files could be useful to any port that wants to use ld.so.cache.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

* sysdeps/{i386, x86_64}/mempcpy_chk.S: fix linknamespace for __mempcpy_chk  (Frederic Berat, 2023-06-22; 1 file, -1/+1)

On i386 and x86_64, for libc.a specifically, __mempcpy_chk calls
mempcpy, which leads POSIX routines to call the non-POSIX mempcpy
indirectly. This causes the linknamespace test to fail when glibc is
built with __FORTIFY_SOURCE=3.

Since calling mempcpy doesn't bring any benefit for libc.a, directly
call __mempcpy instead.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>

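A C-level equivalent of the one-line change (the actual fix is in the
.S files; the function body here is a simplified illustration): inside
libc.a, __mempcpy_chk should reach the internal __mempcpy entry point
rather than the public, non-POSIX-namespace mempcpy name.

```c
#include <string.h>

extern void *__mempcpy (void *, const void *, size_t);
extern void __chk_fail (void) __attribute__ ((noreturn));

void *
mempcpy_chk_sketch (void *dst, const void *src, size_t len, size_t dstlen)
{
  if (dstlen < len)
    __chk_fail ();
  return __mempcpy (dst, src, len);   /* was: mempcpy (dst, src, len) */
}
```
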
* x86-64: Use YMM registers in memcmpeq-evex.S  (H.J. Lu, 2023-06-01; 1 file, -1/+1)

Since an assembly source file with the -evex suffix should use YMM
registers, not ZMM registers, include x86-evex256-vecs.h by default to
use YMM registers in memcmpeq-evex.S.

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>

* Fix misspellings in sysdeps/x86_64 -- BZ 25337  (Paul Pluzhnikov, 2023-05-23; 37 files, -105/+105)

Applying this commit results in a bit-identical rebuild of libc.so.6,
math/libm.so.6, elf/ld-linux-x86-64.so.2, and mathvec/libmvec.so.1.

Reviewed-by: Florian Weimer <fweimer@redhat.com>

* Fix misspellings in sysdeps/x86_64/fpu/multiarch -- BZ 25337  (Paul Pluzhnikov, 2023-05-23; 112 files, -169/+169)

Applying this commit results in a bit-identical rebuild of
mathvec/libmvec.so.1 (which is the only binary that gets rebuilt).

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>

* Enable libmvec support for AArch64  (Joe Ramsay, 2023-05-03; 3 files, -104/+54)

This patch enables libmvec on AArch64. The proposed change is mainly
implementing build infrastructure to add the new routines to the ABI,
tests, and benchmarks. I have demonstrated how this all fits together
by adding implementations for vector cos, in both single and double
precision, targeting both Advanced SIMD and SVE. The implementations of
the routines themselves are just loops over the scalar routine from
libm for now, as we are more concerned with getting the plumbing right
at this point. We plan to contribute vector routines from the Arm
Optimized Routines repo that are compliant with the requirements
described in the libmvec wiki.

Building libmvec requires at least GCC 10 for SVE ACLE. To avoid
raising the minimum GCC by such a big jump, we allow users to disable
libmvec if their compiler is too old.

Note that at this point users have to manually call the vector math
functions. This seems to be acceptable to some downstream users.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>

* nptl: move tst-x86-64-tls-1 to nptl-only tests  (Samuel Thibault, 2023-05-01; 3 files, -2/+4)

It is essentially nptl-only.

* hurd: Implement prefer_map_32bit_exec tunable  (Sergey Bugaev, 2023-04-24; 5 files, -0/+119)

This makes the prefer_map_32bit_exec tunable no longer Linux-specific.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230423215526.346009-4-bugaevc@gmail.com>

* hurd: Add sys/ucontext.h and sigcontext.h for x86_64  (Sergey Bugaev, 2023-04-10; 1 file, -0/+157)

This is based on the Linux port's version, but laid out to match Mach's
struct i386_thread_state, much like the i386 version does.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>

* x86_64: Fix asm constraints in feraiseexcept (bug 30305)  (Florian Weimer, 2023-04-03; 1 file, -2/+2)

The divss instruction clobbers its first argument, and the constraints
need to reflect that. Fortunately, with GCC 12, generated code does not
actually change, so there is no externally visible bug.

Suggested-by: Jakub Jelinek <jakub@redhat.com>
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>

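A hedged sketch of what such a constraint fix looks like (simplified,
not the verbatim glibc code): divss overwrites its destination operand,
so that operand needs a read-write "+x" constraint rather than a plain
input, otherwise the compiler may assume the register still holds the
old value afterwards.

```c
void
raise_fe_divbyzero (void)
{
  float num = 1.0f, den = 0.0f;
  /* "+x": num is both consumed and clobbered by the division, which
     raises the divide-by-zero exception as a side effect.  */
  __asm__ __volatile__ ("divss %1, %0" : "+x" (num) : "x" (den));
}
```
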
* x86_64: Add rtld-stpncpy & rtld-strncpy  (Sergey Bugaev, 2023-04-03; 2 files, -0/+36)

Just like the other existing rtld-str* files, this provides rtld with
usable versions of stpncpy and strncpy.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230319151017.531737-22-bugaevc@gmail.com>

* htl: Add tcb-offsets.sym for x86_64  (Sergey Bugaev, 2023-04-03; 2 files, -0/+28)

The source code is the same as sysdeps/i386/htl/tcb-offsets.sym, but of
course the produced tcb-offsets.h will be different.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230319151017.531737-21-bugaevc@gmail.com>

* Remove --enable-tunables configure option  (Adhemerval Zanella Netto, 2023-03-29; 1 file, -2/+0)

And make tunables always supported. The configure option was added in
glibc 2.25, and some features require it (such as the hwcap mask, huge
pages support, and lock elision tuning). Removing it also simplifies
the build permutations.

Changes from v1:
* Remove the glibc.rtld.dynamic_sort changes; they are orthogonal and
  need more discussion.
* Clean up more code.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>

* benchtests: Move libmvec benchtest inputs to benchtests directory  (Joe Ramsay, 2023-03-27; 53 files, -213201/+1)

This allows other targets to use the same inputs for their own libmvec
microbenchmarks without having to duplicate them in their own
subdirectory.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>

* htl: Add pthreadtypes-arch.h for x86_64  (Sergey Bugaev, 2023-02-27; 1 file, -0/+36)

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230221211932.296459-5-bugaevc@gmail.com>

* x86_64: Update libm test ulps  (H.J. Lu, 2023-02-27; 1 file, -0/+1)

Update libm test ulps for

    commit 3efbf11fdf15ed991d2c41743921c524a867e145
    Author: Paul Zimmermann <Paul.Zimmermann@inria.fr>
    Date:   Tue Feb 14 11:24:59 2023 +0100

        update auto-libm-test-out-hypot

Reviewed-by: Carlos O'Donell <carlos@redhat.com>

* hurd, htl: Add some x86_64-specific code  (Sergey Bugaev, 2023-02-12; 1 file, -0/+29)

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230212111044.610942-12-bugaevc@gmail.com>

* htl: Generalize i386 pt-machdep.h to x86  (Samuel Thibault, 2023-02-12; 1 file, -0/+1)

* string: Add libc_hidden_proto for memrchr  (Adhemerval Zanella, 2023-02-08; 2 files, -0/+2)

Although the static linker can optimize it to a local call, this
follows the internal scheme of providing hidden protos and definitions.

Reviewed-by: Carlos Eduardo Seo <carlos.seo@linaro.org>

* string: Add libc_hidden_proto for strchrnul  (Adhemerval Zanella, 2023-02-08; 2 files, -0/+5)

Although the static linker can optimize it to a local call, this
follows the internal scheme of providing hidden protos and definitions.

Reviewed-by: Carlos Eduardo Seo <carlos.seo@linaro.org>

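The scheme behind both commits above looks roughly like this
(glibc-internal context; strchrnul shown, memrchr is analogous; the
function body is the trivial reference implementation, not the
optimized one):

```c
/* include/string.h (internal header), sketch: */
extern char *__strchrnul (const char *, int);
libc_hidden_proto (__strchrnul)

/* string/strchrnul.c, sketch: */
char *
__strchrnul (const char *s, int c)
{
  while (*s != '\0' && *s != (char) c)
    ++s;
  return (char *) s;
}
libc_hidden_def (__strchrnul)
weak_alias (__strchrnul, strchrnul)
```
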
* Parameterize op_t from memcopy.h  (Adhemerval Zanella, 2023-02-06; 1 file, -0/+24)

It moves the op_t definition out to a specific header, adds the
attribute 'may-alias', and cleans up its duplicated definitions.

Checked with a build and check with run-built-tests=no for all major
Linux ABIs.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

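The resulting definition looks approximately like this (the header name
below is recalled from the glibc tree and should be treated as an
assumption): op_t is the machine word used for block copies, and the
may_alias attribute lets those word-sized accesses read through
differently typed buffers without violating strict aliasing.

```c
/* sysdeps/generic/string-optype.h, approximate: */
typedef unsigned long int __attribute__ ((__may_alias__)) op_t;

/* memcopy.h then builds on it: */
#define OPSIZ (sizeof (op_t))
```
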
* x86: Fix strncat-avx2.S reading past length [BZ #30065]  (Noah Goldstein, 2023-01-31; 1 file, -2/+2)

Occurs when `src` has no null-term. Two cases:

1) The zero-length check is doing:
```
test %rdx, %rdx
jl   L(zero_len)
```
which doesn't actually check zero (it was at some point `decq` and the
condition never got updated). The fix is just to make the jump `jle`,
i.e.:
```
test %rdx, %rdx
jle  L(zero_len)
```

2) The length check in the page-cross case, checking if we should
continue, is doing:
```
cmpq %r8, %rdx
jb   L(page_cross_small)
```
which means we will continue searching for a null-term if the length
ends at the end of a page and there was no null-term in `src`. The fix
is to make the jump:
```
cmpq %r8, %rdx
jbe  L(page_cross_small)
```

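A self-contained C model of case 1 (registers abstracted to a signed
variable; the names are invented): with a signed comparison, a length
of exactly zero must also take the early exit, so the test needs
"<= 0" (jle), not "< 0" (jl).

```c
#include <stddef.h>
#include <sys/types.h>   /* ssize_t */

char *
strncat_zero_len_model (char *dst, const char *src, size_t n)
{
  /* The asm effectively treats the length register as signed.  */
  ssize_t len = (ssize_t) n;
  if (len <= 0)   /* jle: correct.  With jl (len < 0), n == 0 would
                     fall through and read src past its bounds.  */
    return dst;
  /* ... append up to len bytes of src onto dst ... */
  (void) src;
  return dst;
}
```
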
* Update copyright dates with scripts/update-copyrights  (Joseph Myers, 2023-01-06; 1168 files, -1168/+1168)

* stdio-common: Convert vfprintf and related functions to buffers  (Florian Weimer, 2022-12-19; 1 file, -18/+3)

vfprintf is entangled with vfwprintf (of course), __printf_fp,
__printf_fphex, __vstrfmon_l_internal, and the strfrom family of
functions. The latter use the internal snprintf functionality, so
vsnprintf is converted as well.

The simplest conversion is __printf_fphex, followed by
__vstrfmon_l_internal and __printf_fp, and finally __vfprintf_internal
and __vfwprintf_internal. __vsnprintf_internal and strfrom* are mostly
consuming the new interfaces, so they are comparatively simple.

__printf_fp is a public symbol, so the FILE *-based interface had to be
preserved.

The __printf_fp rewrite does not change the actual binary-to-decimal
conversion algorithm, and digits are still not emitted directly to the
target buffer. However, the staging buffer now uses bytes instead of
wide characters, and one buffer copy is eliminated.

The changes are at least performance-neutral in my testing. Floating
point printing and snprintf improved measurably, so that this Lua
script

    for i=1,5000000 do
      print(i, i * math.pi)
    end

runs about 5% faster for me.

To preserve fprintf performance for a simple "%d" format, this commit
has some logic changes under LABEL (unsigned_number) to avoid
additional function calls.

There are certainly some very easy performance improvements here:
binary, octal and hexadecimal formatting can easily avoid the temporary
work buffer (the number of digits can be computed ahead-of-time using
one of the __builtin_clz* built-ins). Decimal formatting can use a
specialized version of _itoa_word for base 10.

The existing (inconsistent) width handling between strfmon and printf
is preserved here. __print_fp_buffer_1 would have to use
__translated_number_width to achieve ISO conformance for printf.

Test expectations in libio/tst-vtables-common.c are adjusted because
the internal staging buffer merges all virtual function calls into one.

In general, stack buffer usage is greatly reduced, particularly for
unbuffered input streams. __printf_fp can still use a large buffer in
binary128 mode for %g, though.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

* x86: Prevent SIGSEGV in memcmp-sse2 when data is concurrently modified [BZ #29863]  (Noah Goldstein, 2022-12-15; 1 file, -1/+11)

In the case of INCORRECT usage of `memcmp(a, b, N)` where `a` and `b`
are concurrently modified as `memcmp` runs, there can be a SIGSEGV in
`L(ret_nonzero_vec_end_0)` because the sequential logic assumes that
`(rdx - 32 + rax)` is a positive 32-bit integer.

To be clear, this change does not mean the usage of `memcmp` is
supported. The program behaviour is undefined (UB) in the presence of
data races, and `memcmp` is incorrect when the values of `a` and/or `b`
are modified concurrently (data race). This UB may manifest itself as
a SIGSEGV. That being said, if we can allow the idiomatic use cases,
like those in yottadb with opportunistic concurrency control (OCC), to
execute without a SIGSEGV, at no cost to regular use cases, then we can
aim to minimize harm to those existing users.

The fix replaces a 32-bit `addl %edx, %eax` with the 64-bit variant
`addq %rdx, %rax`. The one extra byte of code size from using the
64-bit instruction doesn't contribute to overall code size, as the next
target is aligned and has multiple bytes of `nop` padding before it. As
well, all the logic between the add and `ret` still fits in the same
fetch block, so the cost of this change is basically zero.

The relevant sequential logic can be seen in the following pseudo-code:
```
/*
 * rsi = a
 * rdi = b
 * rdx = len - 32
 */
/* cmp a[0:15] and b[0:15].  Since length is known to be [17, 32] in
   this case, this check is also assumed to cover a[0:(31 - len)] and
   b[0:(31 - len)].  */
movups (%rsi), %xmm0
movups (%rdi), %xmm1
PCMPEQ %xmm0, %xmm1
pmovmskb %xmm1, %eax
subl %ecx, %eax
jnz L(END_NEQ)

/* cmp a[len-16:len-1] and b[len-16:len-1].  */
movups 16(%rsi, %rdx), %xmm0
movups 16(%rdi, %rdx), %xmm1
PCMPEQ %xmm0, %xmm1
pmovmskb %xmm1, %eax
subl %ecx, %eax
jnz L(END_NEQ2)
ret

L(END2):
/* Position first mismatch.  */
bsfl %eax, %eax

/* The sequential version is able to assume this value is a positive
   32-bit value because the first check included bytes in range
   a[0:(31 - len)] and b[0:(31 - len)], so `eax` must be greater than
   `31 - len`, so the minimum value of `edx` + `eax` is
   `(len - 32) + (32 - len) >= 0`.  In the concurrent case, however,
   `a` or `b` could have been changed, so a mismatch in `eax` less than
   or equal to `(31 - len)` is possible (the new low bound is
   `(16 - len)`).  This can result in a negative 32-bit signed integer,
   which when zero extended to 64 bits is a random large value that is
   out of bounds.  */
addl %edx, %eax

/* Crash here because the 32-bit negative number in `eax` zero extends
   to an out-of-bounds 64-bit offset.  */
movzbl 16(%rdi, %rax), %ecx
movzbl 16(%rsi, %rax), %eax
```

The fix is quite simple: just make the `addl %edx, %eax` 64-bit (i.e.
`addq %rdx, %rax`). This prevents the 32-bit zero extension, and since
`eax` still has a low bound of `16 - len`, the `rdx + rax` sum is
bounded by `(len - 32) + (16 - len) >= -16`. Since we have a fixed
offset of `16` in the memory access, this must be in bounds.

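A standalone demonstration of the failure mode, with the registers
reduced to C integers (the concrete values are made up for
illustration):

```c
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  int32_t edx = 17 - 32;  /* rdx = len - 32 with len == 17 */
  int32_t eax = 3;        /* mismatch index; under a data race it can be
                             <= 31 - len (here 3 <= 14) */
  /* 32-bit add, then zero-extend, as `addl %edx, %eax` behaves: */
  uint64_t bad = (uint32_t) (edx + eax);  /* 0xfffffff4: far out of bounds */
  /* 64-bit add, as the fixed `addq %rdx, %rax` behaves: */
  int64_t good = (int64_t) edx + eax;     /* -12: small in-bounds offset */
  printf ("bad:  %#llx\ngood: %lld\n",
          (unsigned long long) bad, (long long) good);
  return 0;
}
```
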
* x86-64 strncpy: Properly handle the length parameter [BZ# 29839]  (H.J. Lu, 2022-12-02; 2 files, -0/+8)

On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the upper 32 bits non-zero. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as the length, or must clear the upper 32 bits before
using the full 64-bit register for the length.

This patch fixes strncpy for x32. Tested on x86-64 and x32. On x86-64,
libc.so is the same with and without the fix.

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>

* x86-64 strncat: Properly handle the length parameter [BZ# 24097]  (H.J. Lu, 2022-12-02; 5 files, -1/+73)

On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the upper 32 bits non-zero. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as the length, or must clear the upper 32 bits before
using the full 64-bit register for the length.

This patch fixes strncat for x32. Tested on x86-64 and x32. On x86-64,
libc.so is the same with and without the fix.

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>

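A C-level model of the hazard shared by the two x32 fixes above (the
names are invented): size_t is 32 bits on x32, but it arrives in a
64-bit register whose upper half may contain garbage, so the assembly
must either use the 32-bit register name throughout or zero-extend
first (e.g. `movl %edx, %edx`) before doing 64-bit address arithmetic.

```c
#include <stdint.h>

/* What an x32 callee conceptually receives: a 64-bit register whose
   low half is the real size_t argument and whose high half may hold
   junk left over from the caller.  */
uint64_t
usable_length (uint64_t raw_reg)
{
  /* The truncation models `movl %edx, %edx`, which zero-extends and
     discards the garbage upper half; 64-bit arithmetic on raw_reg
     itself would compute a wildly wrong length.  */
  return (uint32_t) raw_reg;
}
```
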
* x86/fpu: Factor out shared avx2/avx512 code in svml_{s|d}_wrapper_impl.h  (Noah Goldstein, 2022-11-27; 3 files, -342/+192)

The code is exactly the same for the two, so it is better to maintain
only one version.

All math and mathvec tests pass on x86.

* x86/fpu: Cleanup code in svml_{s|d}_wrapper_impl.h  (Noah Goldstein, 2022-11-27; 2 files, -242/+172)

1. Remove unnecessary spills.
2. Fix some small missed optimizations.

All math and mathvec tests pass on x86.

* x86/fpu: Reformat svml_{s|d}_wrapper_impl.h  (Noah Goldstein, 2022-11-27; 2 files, -510/+510)

Just reformat with the style convention used in other x86 assembler
files. This doesn't change libm.so or libmvec.so.

* x86/fpu: Fix misspelled evex512 section in variety of svml files  (Noah Goldstein, 2022-11-27; 21 files, -21/+21)

The section
```
.section .text.evex512, "ax", @progbits
```
was misspelled as:
```
.section .text.exex512, "ax", @progbits
```

* x86/fpu: Add missing ISA sections to variety of svml files  (Noah Goldstein, 2022-11-27; 198 files, -198/+198)

Many sse4/avx2/avx512 files were just in .text.

* x86: Add avx2 optimized functions for the wchar_t strcpy family  (Noah Goldstein, 2022-11-08; 27 files, -18/+115)

Implemented:
    wcscat-avx2   (+ 744 bytes)
    wcscpy-avx2   (+ 539 bytes)
    wcpcpy-avx2   (+ 577 bytes)
    wcsncpy-avx2  (+1108 bytes)
    wcpncpy-avx2  (+1214 bytes)
    wcsncat-avx2  (+1085 bytes)

Performance Changes:
    Times are from N = 10 runs of the benchmark suite and are reported
    as the geometric mean of all ratios of
    New Implementation / Best Old Implementation. Best Old
    Implementation was determined with the highest ISA implementation.

    wcscat-avx2  -> 0.975
    wcscpy-avx2  -> 0.591
    wcpcpy-avx2  -> 0.698
    wcsncpy-avx2 -> 0.730
    wcpncpy-avx2 -> 0.711
    wcsncat-avx2 -> 0.954

Code Size Changes:
    This change increases the size of libc.so by ~5.5kb. For reference,
    the patch optimizing the normal strcpy family functions decreases
    libc.so by ~5.2kb.

Full check passes on x86-64 and build succeeds for all ISA levels w/
and w/o multiarch.

* x86: Add evex optimized functions for the wchar_t strcpy family  (Noah Goldstein, 2022-11-08; 33 files, -7/+858)

Implemented:
    wcscat-evex   (+ 905 bytes)
    wcscpy-evex   (+ 674 bytes)
    wcpcpy-evex   (+ 709 bytes)
    wcsncpy-evex  (+1358 bytes)
    wcpncpy-evex  (+1467 bytes)
    wcsncat-evex  (+1213 bytes)

Performance Changes:
    Times are from N = 10 runs of the benchmark suite and are reported
    as the geometric mean of all ratios of
    New Implementation / Best Old Implementation. Best Old
    Implementation was determined with the highest ISA implementation.

    wcscat-evex  -> 0.991
    wcscpy-evex  -> 0.587
    wcpcpy-evex  -> 0.695
    wcsncpy-evex -> 0.719
    wcpncpy-evex -> 0.694
    wcsncat-evex -> 0.979

Code Size Changes:
    This change increases the size of libc.so by ~6.3kb. For reference,
    the patch optimizing the normal strcpy family functions decreases
    libc.so by ~5.7kb.

Full check passes on x86-64 and build succeeds for all ISA levels w/
and w/o multiarch.

* x86: Optimize and shrink st{r|p}{n}{cat|cpy}-avx2 functions  (Noah Goldstein, 2022-11-08; 13 files, -1234/+1594)

Optimizations are:
1. Use more overlapping stores to avoid branches (see the sketch after
   this entry).
2. Reduce how unrolled the aligning copies are (this is more of a
   code-size save; it's a negative for some sizes in terms of perf).
3. For st{r|p}n{cat|cpy}, re-order the branches to minimize the number
   that are taken.

Performance Changes:
    Times are from N = 10 runs of the benchmark suite and are reported
    as the geometric mean of all ratios of
    New Implementation / Old Implementation.

    strcat-avx2  -> 0.998
    strcpy-avx2  -> 0.937
    stpcpy-avx2  -> 0.971
    strncpy-avx2 -> 0.793
    stpncpy-avx2 -> 0.775
    strncat-avx2 -> 0.962

Code Size Changes:
    function     -> Bytes New / Bytes Old -> Ratio
    strcat-avx2  ->  685 / 1639 -> 0.418
    strcpy-avx2  ->  560 /  903 -> 0.620
    stpcpy-avx2  ->  592 /  939 -> 0.630
    strncpy-avx2 -> 1176 / 2390 -> 0.492
    stpncpy-avx2 -> 1268 / 2438 -> 0.520
    strncat-avx2 -> 1042 / 2563 -> 0.407

Notes:
1. Because of the significant difference between the implementations,
   they are split into three files:
       strcpy-avx2.S  -> strcpy, stpcpy, strcat
       strncpy-avx2.S -> strncpy
       strncat-avx2.S -> strncat
   I couldn't find a way to merge them without making the ifdefs
   incredibly difficult to follow.

Full check passes on x86-64 and build succeeds for all ISA levels w/
and w/o multiarch.

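A standalone sketch of the overlapping-stores trick from point 1 (the
helper name and the size range are illustrative): for a copy of n bytes
with 16 <= n <= 32, store the first 16 bytes and the last 16 bytes; the
two stores overlap in the middle, so no per-size branching is needed.

```c
#include <stdint.h>
#include <string.h>

void
copy_16_to_32 (uint8_t *dst, const uint8_t *src, size_t n)
{
  /* Assumes 16 <= n <= 32 and non-overlapping dst/src.  */
  uint8_t head[16], tail[16];
  memcpy (head, src, 16);             /* models one 16-byte vector load */
  memcpy (tail, src + n - 16, 16);    /* second load, taken from the end */
  memcpy (dst, head, 16);             /* first 16-byte store */
  memcpy (dst + n - 16, tail, 16);    /* overlapping second store */
}
```
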
* x86: Optimize and shrink st{r|p}{n}{cat|cpy}-evex functions  (Noah Goldstein, 2022-11-08; 7 files, -1173/+2115)

Optimizations are:
1. Use more overlapping stores to avoid branches.
2. Reduce how unrolled the aligning copies are (this is more of a
   code-size save; it's a negative for some sizes in terms of perf).
3. Improve the loop a bit (similar to what we do in strlen with
   2x vpminu + kortest instead of 3x vpminu + kmov + test).
4. For st{r|p}n{cat|cpy}, re-order the branches to minimize the number
   that are taken.

Performance Changes:
    Times are from N = 10 runs of the benchmark suite and are reported
    as the geometric mean of all ratios of
    New Implementation / Old Implementation.

    stpcpy-evex  -> 0.922
    strcat-evex  -> 0.985
    strcpy-evex  -> 0.880
    strncpy-evex -> 0.831
    stpncpy-evex -> 0.780
    strncat-evex -> 0.958

Code Size Changes:
    function     -> Bytes New / Bytes Old -> Ratio
    strcat-evex  ->  819 / 1874 -> 0.437
    strcpy-evex  ->  700 / 1074 -> 0.652
    stpcpy-evex  ->  735 / 1094 -> 0.672
    strncpy-evex -> 1397 / 2611 -> 0.535
    stpncpy-evex -> 1489 / 2691 -> 0.553
    strncat-evex -> 1184 / 2832 -> 0.418

Notes:
1. Because of the significant difference between the implementations,
   they are split into three files:
       strcpy-evex.S  -> strcpy, stpcpy, strcat
       strncpy-evex.S -> strncpy
       strncat-evex.S -> strncat
   I couldn't find a way to merge them without making the ifdefs
   incredibly difficult to follow.
2. All implementations can be made evex512 by including
   "x86-evex512-vecs.h" at the top.
3. All implementations have an optional define `USE_EVEX_MASKED_STORE`.
   Setting it to one uses evex-masked stores for handling short
   strings. This saves code size and branches. It's disabled for all
   implementations at the moment, as there are some serious drawbacks
   to masked stores in certain cases, but that may be fixed on future
   architectures.

Full check passes on x86-64 and build succeeds for all ISA levels w/
and w/o multiarch.

* x86: Use VMM API in memcmpeq-evex.S and minor changes  (Noah Goldstein, 2022-11-08; 1 file, -100/+155)

Changes to generated code are:
1. In a few places use `vpcmpeqb` instead of `vpcmpneq` to save a byte
   of code size.
2. Add a branch for length <= (VEC_SIZE * 6) as opposed to doing the
   entire block of [VEC_SIZE * 4 + 1, VEC_SIZE * 8] in a single
   basic-block (the space to add the extra branch without changing
   code size is bought with the above change).

Change (2) has roughly a 20-25% speedup for sizes in
[VEC_SIZE * 4 + 1, VEC_SIZE * 6] and negligible to no cost for
[VEC_SIZE * 6 + 1, VEC_SIZE * 8].

From N = 10 runs on Tigerlake:

    align1, align2, length, result,               New Time, Cur Time, New/Old
    0,      0,      129,    0,                    5.404,    6.887,    0.785
    0,      0,      129,    1,                    5.308,    6.826,    0.778
    0,      0,      129,    18446744073709551615, 5.359,    6.823,    0.785
    0,      0,      161,    0,                    5.284,    6.827,    0.774
    0,      0,      161,    1,                    5.317,    6.745,    0.788
    0,      0,      161,    18446744073709551615, 5.406,    6.778,    0.798
    0,      0,      193,    0,                    6.804,    6.802,    1.000
    0,      0,      193,    1,                    6.950,    6.754,    1.029
    0,      0,      193,    18446744073709551615, 6.792,    6.719,    1.011
    0,      0,      225,    0,                    6.625,    6.699,    0.989
    0,      0,      225,    1,                    6.776,    6.735,    1.003
    0,      0,      225,    18446744073709551615, 6.758,    6.738,    0.992
    0,      0,      256,    0,                    5.402,    5.462,    0.989
    0,      0,      256,    1,                    5.364,    5.483,    0.978
    0,      0,      256,    18446744073709551615, 5.341,    5.539,    0.964

Rewriting with the VMM API allows memcmpeq-evex to be used with evex512
by including "x86-evex512-vecs.h" at the top.

Complete check passes on x86-64.

* x86: Use VMM API in memcmp-evex-movbe.S and minor changes  (Noah Goldstein, 2022-11-08; 1 file, -133/+175)

The only change to the existing generated code is `tzcnt` -> `bsf` to
save a byte of code size here and there.

Rewriting with the VMM API allows memcmp-evex-movbe to be used with
evex512 by including "x86-evex512-vecs.h" at the top.

Complete check passes on x86-64.

* x86_64: Implement evex512 version of strrchr and wcsrchr  (Sunil K Pandey, 2022-11-03; 5 files, -0/+297)

Changes from v1:
- Use the vec API for registers.
- Replace VPCMP with VPCMPEQ.
- Restructure and remove one unconditional jump.
- Change the page-cross logic to use sall.

This patch implements the following evex512 versions of string
functions. The evex512 version takes up to 30% fewer cycles than evex,
depending on length and alignment.

- strrchr function using 512-bit vectors.
- wcsrchr function using 512-bit vectors.

Code size data:
    strrchr-evex.o       879 bytes
    strrchr-evex512.o    601 bytes  (-32%)
    wcsrchr-evex.o       882 bytes
    wcsrchr-evex512.o    572 bytes  (-35%)

Placeholder function, not used by any processor at the moment.

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>