| Commit message | Author | Age | Files | Lines |
| |
Occurs when `src` has no null-term.
Two cases:
1) The zero-length check does:
```
test %rdx, %rdx
jl L(zero_len)
```
which doesn't actually check for zero (at some point this was a `decq`
and the branch condition never got updated).
The fix is just to make the branch `jle`, i.e.:
```
test %rdx, %rdx
jle L(zero_len)
```
2) The length check in the page-cross case, which decides whether we
should continue, does:
```
cmpq %r8, %rdx
jb L(page_cross_small)
```
which means we will continue searching for null-term if length ends at
the end of a page and there was no null-term in `src`.
The fix is to make the branch:
```
cmpq %r8, %rdx
jbe L(page_cross_small)
```
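A minimal C sketch of the two branch conditions (illustrative only; the
helper names are hypothetical, and treating `%r8` as the bytes remaining
before the page boundary is an assumption, not something stated above):
```
#include <stdint.h>
#include <stdio.h>

/* Old zero-length check (`jl`): only branches when the signed length is
   negative, so a length of exactly 0 falls through and the routine goes
   on to read a `src` with no null-term.  New check (`jle`) exits.  */
static int old_zero_exit(int64_t len) { return len < 0;  }  /* jl  */
static int new_zero_exit(int64_t len) { return len <= 0; }  /* jle */

/* Old page-cross check (`jb`): when the length ends exactly at the page
   boundary it does not take the "small" path and keeps searching past
   the page.  New check (`jbe`) takes the "small" path.  */
static int old_page_exit(uint64_t len, uint64_t remaining) { return len < remaining;  }  /* jb  */
static int new_page_exit(uint64_t len, uint64_t remaining) { return len <= remaining; }  /* jbe */

int main(void)
{
    printf("len == 0:         jl=%d  jle=%d\n", old_zero_exit(0), new_zero_exit(0));
    printf("len == remaining: jb=%d  jbe=%d\n", old_page_exit(32, 32), new_page_exit(32, 32));
    return 0;
}
```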
|
| |
vfprintf is entangled with vfwprintf (of course), __printf_fp,
__printf_fphex, __vstrfmon_l_internal, and the strfrom family of
functions. The latter use the internal snprintf functionality,
so vsnprintf is converted as well.
The simplest conversion is __printf_fphex, followed by
__vstrfmon_l_internal and __printf_fp, and finally
__vfprintf_internal and __vfwprintf_internal. __vsnprintf_internal
and strfrom* are mostly consuming the new interfaces, so they
are comparatively simple.
__printf_fp is a public symbol, so the FILE *-based interface
had to be preserved.
The __printf_fp rewrite does not change the actual binary-to-decimal
conversion algorithm, and digits are still not emitted directly to
the target buffer. However, the staging buffer now uses bytes
instead of wide characters, and one buffer copy is eliminated.
The changes are at least performance-neutral in my testing.
Floating point printing and snprintf improved measurably, so that
this Lua script
```
for i=1,5000000 do
   print(i, i * math.pi)
end
```
runs about 5% faster for me. To preserve fprintf performance for
a simple "%d" format, this commit has some logic changes under
LABEL (unsigned_number) to avoid additional function calls. There
are certainly some very easy performance improvements here: binary,
octal and hexadecimal formatting can easily avoid the temporary work
buffer (the number of digits can be computed ahead-of-time using one
of the __builtin_clz* built-ins). Decimal formatting can use a
specialized version of _itoa_word for base 10.
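As a hedged illustration of the `__builtin_clz*` idea above (a sketch
only, not code from this commit), the digit count for hexadecimal output
can be computed ahead of time like this:
```
#include <stdio.h>

/* Number of hex digits needed for x, computed from the bit width.  */
static unsigned int hex_digit_count (unsigned long x)
{
    if (x == 0)
        return 1;
    unsigned int bits = 8 * sizeof (x) - __builtin_clzl (x);
    return (bits + 3) / 4;   /* one hex digit per 4 bits, rounded up */
}

int main (void)
{
    printf ("%u %u %u\n", hex_digit_count (0), hex_digit_count (0xff),
            hex_digit_count (0xdeadbeefUL));   /* prints: 1 2 8 */
    return 0;
}
```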
The existing (inconsistent) width handling between strfmon and printf
is preserved here. __print_fp_buffer_1 would have to use
__translated_number_width to achieve ISO conformance for printf.
Test expectations in libio/tst-vtables-common.c are adjusted because
the internal staging buffer merges all virtual function calls into
one.
In general, stack buffer usage is greatly reduced, particularly for
unbuffered streams. __printf_fp can still use a large buffer
in binary128 mode for %g, though.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
| |
[BZ #29863]
In the case of INCORRECT usage of `memcmp(a, b, N)` where `a` and `b`
are concurrently modified as `memcmp` runs, there can be a SIGSEGV
in `L(ret_nonzero_vec_end_0)` because the sequential logic
assumes that `(rdx - 32 + rax)` is a positive 32-bit integer.
To be clear, this change does not mean that such usage of `memcmp` is
supported. The program behaviour is undefined (UB) in the
presence of data races, and `memcmp` is incorrect when the values
of `a` and/or `b` are modified concurrently (data race). This UB
may manifest itself as a SIGSEGV. That being said, if we can
allow the idiomatic use cases, like those in yottadb with
opportunistic concurrency control (OCC), to execute without a
SIGSEGV, at no cost to regular use cases, then we can aim to
minimize harm to those existing users.
The fix replaces a 32-bit `addl %edx, %eax` with the 64-bit variant
`addq %rdx, %rax`. The one extra byte of code size from using the
64-bit instruction doesn't contribute to overall code size, as the
next target is aligned and has multiple bytes of `nop` padding
before it. As well, all the logic between the add and `ret` still
fits in the same fetch block, so the cost of this change is
basically zero.
The relevant sequential logic can be seen in the following
pseudo-code:
```
/*
* rsi = a
* rdi = b
* rdx = len - 32
*/
/* cmp a[0:15] and b[0:15]. Since length is known to be [17, 32]
in this case, this check is also assumed to cover a[0:(31 - len)]
and b[0:(31 - len)]. */
movups (%rsi), %xmm0
movups (%rdi), %xmm1
PCMPEQ %xmm0, %xmm1
pmovmskb %xmm1, %eax
subl %ecx, %eax
jnz L(END_NEQ)
/* cmp a[len-16:len-1] and b[len-16:len-1]. */
movups 16(%rsi, %rdx), %xmm0
movups 16(%rdi, %rdx), %xmm1
PCMPEQ %xmm0, %xmm1
pmovmskb %xmm1, %eax
subl %ecx, %eax
jnz L(END_NEQ2)
ret
L(END_NEQ2):
/* Position first mismatch. */
bsfl %eax, %eax
/* The sequential version is able to assume this value is a
positive 32-bit value because the first check included bytes in
range a[0:(31 - len)] and b[0:(31 - len)] so `eax` must be
greater than `31 - len` so the minimum value of `edx` + `eax` is
`(len - 32) + (32 - len) >= 0`. In the concurrent case, however,
`a` or `b` could have been changed, so a mismatch in `eax` less than
or equal to `(31 - len)` is possible (the new low bound is
`(16 - len)`). This can result in a negative 32-bit signed integer,
which when zero extended to 64 bits is a random large value that is
out of bounds. */
addl %edx, %eax
/* Crash here because 32-bit negative number in `eax` zero
extends to out of bounds 64-bit offset. */
movzbl 16(%rdi, %rax), %ecx
movzbl 16(%rsi, %rax), %eax
```
This fix is quite simple: just make the `addl %edx, %eax` 64-bit (i.e.
`addq %rdx, %rax`). This prevents the 32-bit zero extension,
and since `eax` still has a lower bound of `16 - len`, `rdx + rax`
is bounded by `(len - 32) + (16 - len) >= -16`. Since we have a
fixed offset of `16` in the memory access this must be in bounds.
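A small C sketch of the zero-extension issue (illustrative only;
`len = 20` is just an example value in the [17, 32] range described
above):
```
#include <stdint.h>
#include <stdio.h>

int main (void)
{
    int64_t rdx = 20 - 32;   /* len - 32 */
    int64_t rax = 16 - 20;   /* worst-case mismatch index in the concurrent case */

    /* `addl`: the 32-bit result is zero-extended into the 64-bit register,
       turning -16 into a huge positive offset.  */
    uint64_t addl_offset = (uint32_t) ((int32_t) rdx + (int32_t) rax);
    /* `addq`: the result stays a small negative offset (-16), which the
       fixed displacement of 16 in the memory access keeps in bounds.  */
    int64_t addq_offset = rdx + rax;

    printf ("addl-style offset: %#llx\n", (unsigned long long) addl_offset);
    printf ("addq-style offset: %lld\n", (long long) addq_offset);
    return 0;
}
```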
|
| |
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
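A small C illustration of the x32 situation described above (a sketch,
not glibc code): only the low 32 bits of the incoming register are
meaningful, so the assembly must use the 32-bit register name or
zero-extend first (e.g. `movl %edx, %edx`).
```
#include <stdint.h>
#include <stdio.h>

int main (void)
{
    uint64_t reg = 0xdeadbeef00000010ULL;  /* upper 32 bits are garbage, length is 16 */
    uint32_t len = (uint32_t) reg;         /* what the callee may actually use */

    printf ("full register: %#llx\n", (unsigned long long) reg);
    printf ("usable length: %u\n", len);
    return 0;
}
```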
This patch fixes strncpy for x32. Tested on x86-64 and x32. On x86-64,
libc.so is the same with and without the fix.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
|
| |
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
This patch fixes strncat for x32. Tested on x86-64 and x32. On x86-64,
libc.so is the same with and without the fix.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
|
| |
Code is exactly the same for the two, so it is better to maintain only
one version.
All math and mathvec tests pass on x86.
|
| |
1. Remove unnecessary spills.
2. Fix some small nits and missed optimizations.
All math and mathvec tests pass on x86.
|
| |
Just reformat with the style convention used in other x86 assembler
files. This doesn't change libm.so or libmvec.so.
|
| |
```
.section .text.evex512, "ax", @progbits
```
was misspelled as:
```
.section .text.exex512, "ax", @progbits
```
|
| |
Many sse4/avx2/avx512 files were just in .text.
|
| |
Implemented:
wcscat-avx2 (+ 744 bytes)
wcscpy-avx2 (+ 539 bytes)
wcpcpy-avx2 (+ 577 bytes)
wcsncpy-avx2 (+1108 bytes)
wcpncpy-avx2 (+1214 bytes)
wcsncat-avx2 (+1085 bytes)
Performance Changes:
Times are from N = 10 runs of the benchmark suite and are reported
as geometric mean of all ratios of New Implementation / Best Old
Implementation. Best Old Implementation was determined with the
highest ISA implementation.
wcscat-avx2 -> 0.975
wcscpy-avx2 -> 0.591
wcpcpy-avx2 -> 0.698
wcsncpy-avx2 -> 0.730
wcpncpy-avx2 -> 0.711
wcsncat-avx2 -> 0.954
Code Size Changes:
This change increases the size of libc.so by ~5.5kb. For
reference the patch optimizing the normal strcpy family functions
decreases libc.so by ~5.2kb.
Full check passes on x86-64 and build succeeds for all ISA levels w/
and w/o multiarch.
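For reference, a hedged sketch (not the actual benchmark harness) of how
such a number is derived: the geometric mean of the per-benchmark
New / Best Old time ratios. The per-function ratios above are reused
here purely as sample input data.
```
#include <math.h>
#include <stdio.h>

int main (void)
{
    double ratios[] = { 0.975, 0.591, 0.698, 0.730, 0.711, 0.954 };
    int n = sizeof ratios / sizeof ratios[0];
    double log_sum = 0.0;

    for (int i = 0; i < n; i++)
        log_sum += log (ratios[i]);

    /* < 1.0 means the new implementation is faster on average.  */
    printf ("geometric mean: %.3f\n", exp (log_sum / n));
    return 0;
}
```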
|
| |
Implemented:
wcscat-evex (+ 905 bytes)
wcscpy-evex (+ 674 bytes)
wcpcpy-evex (+ 709 bytes)
wcsncpy-evex (+1358 bytes)
wcpncpy-evex (+1467 bytes)
wcsncat-evex (+1213 bytes)
Performance Changes:
Times are from N = 10 runs of the benchmark suite and are reported
as geometric mean of all ratios of New Implementation / Best Old
Implementation. Best Old Implementation was determined with the
highest ISA implementation.
wcscat-evex -> 0.991
wcscpy-evex -> 0.587
wcpcpy-evex -> 0.695
wcsncpy-evex -> 0.719
wcpncpy-evex -> 0.694
wcsncat-evex -> 0.979
Code Size Changes:
This change increases the size of libc.so by ~6.3kb. For
reference the patch optimizing the normal strcpy family functions
decreases libc.so by ~5.7kb.
Full check passes on x86-64 and build succeeds for all ISA levels w/
and w/o multiarch.
|
| |
Optimizations are:
1. Use more overlapping stores to avoid branches (see the sketch after
this list).
2. Reduce how unrolled the aligning copies are (this is more of a
code-size save, it's a negative for some sizes in terms of
perf).
3. For st{r|p}n{cat|cpy} re-order the branches to minimize the
number that are taken.
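A minimal C sketch of the overlapping-stores idea from item 1
(illustrative only, not the glibc code): a copy of 8..16 bytes is done
branchlessly with one 8-byte load/store from each end, where the two
stores may overlap.
```
#include <stdint.h>
#include <string.h>

/* Branchless copy for 8 <= n <= 16.  */
static void copy_8_to_16 (char *dst, const char *src, size_t n)
{
    uint64_t head, tail;
    memcpy (&head, src, 8);          /* first 8 bytes */
    memcpy (&tail, src + n - 8, 8);  /* last 8 bytes, may overlap the first 8 */
    memcpy (dst, &head, 8);
    memcpy (dst + n - 8, &tail, 8);
}
```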
Performance Changes:
Times are from N = 10 runs of the benchmark suite and are
reported as geometric mean of all ratios of
New Implementation / Old Implementation.
strcat-avx2 -> 0.998
strcpy-avx2 -> 0.937
stpcpy-avx2 -> 0.971
strncpy-avx2 -> 0.793
stpncpy-avx2 -> 0.775
strncat-avx2 -> 0.962
Code Size Changes:
function -> Bytes New / Bytes Old -> Ratio
strcat-avx2 -> 685 / 1639 -> 0.418
strcpy-avx2 -> 560 / 903 -> 0.620
stpcpy-avx2 -> 592 / 939 -> 0.630
strncpy-avx2 -> 1176 / 2390 -> 0.492
stpncpy-avx2 -> 1268 / 2438 -> 0.520
strncat-avx2 -> 1042 / 2563 -> 0.407
Notes:
1. Because of the significant difference between the
implementations they are split into three files.
strcpy-avx2.S -> strcpy, stpcpy, strcat
strncpy-avx2.S -> strncpy
strncat-avx2.S -> strncat
I couldn't find a way to merge them without making the
ifdefs incredibly difficult to follow.
Full check passes on x86-64 and build succeeds for all ISA levels w/
and w/o multiarch.
|
| |
Optimizations are:
1. Use more overlapping stores to avoid branches.
2. Reduce how unrolled the aligning copies are (this is more of a
code-size save, it's a negative for some sizes in terms of
perf).
3. Improve the loop a bit (similar to what we do in strlen with
2x vpminu + kortest instead of 3x vpminu + kmov + test).
4. For st{r|p}n{cat|cpy} re-order the branches to minimize the
number that are taken.
Performance Changes:
Times are from N = 10 runs of the benchmark suite and are
reported as geometric mean of all ratios of
New Implementation / Old Implementation.
stpcpy-evex -> 0.922
strcat-evex -> 0.985
strcpy-evex -> 0.880
strncpy-evex -> 0.831
stpncpy-evex -> 0.780
strncat-evex -> 0.958
Code Size Changes:
function -> Bytes New / Bytes Old -> Ratio
strcat-evex -> 819 / 1874 -> 0.437
strcpy-evex -> 700 / 1074 -> 0.652
stpcpy-evex -> 735 / 1094 -> 0.672
strncpy-evex -> 1397 / 2611 -> 0.535
stpncpy-evex -> 1489 / 2691 -> 0.553
strncat-evex -> 1184 / 2832 -> 0.418
Notes:
1. Because of the significant difference between the
implementations they are split into three files.
strcpy-evex.S -> strcpy, stpcpy, strcat
strncpy-evex.S -> strncpy
strncat-evex.S -> strncat
I couldn't find a way to merge them without making the
ifdefs incredibly difficult to follow.
2. All implementations can be made evex512 by including
"x86-evex512-vecs.h" at the top.
3. All implementations have an optional define:
`USE_EVEX_MASKED_STORE`
Setting to one uses evex-masked stores for handling short
strings. This saves code size and branches. It's disabled
for all implementations at the moment as there are some
serious drawbacks to masked stores in certain cases, but
that may be fixed on future architectures.
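As a hedged illustration of note 3 (not the actual implementation;
assumes AVX512-BW/VL and building with `-mavx512bw -mavx512vl`), a short
copy of up to 32 bytes can be handled with one masked load and one
masked store instead of a branch tree:
```
#include <immintrin.h>
#include <stddef.h>

static void copy_short_masked (char *dst, const char *src, size_t n)
{
    /* Mask with the low n bits set (n <= 32).  */
    __mmask32 m = n >= 32 ? (__mmask32) -1
                          : (__mmask32) ((1ULL << n) - 1);
    __m256i v = _mm256_maskz_loadu_epi8 (m, src);
    _mm256_mask_storeu_epi8 (dst, m, v);
}
```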
Full check passes on x86-64 and build succeeds for all ISA levels w/
and w/o multiarch.
|
| |
Changes to generated code are:
1. In a few places use `vpcmpeqb` instead of `vpcmpneq` to save a
byte of code size.
2. Add a branch for length <= (VEC_SIZE * 6) as opposed to doing
the entire block of [VEC_SIZE * 4 + 1, VEC_SIZE * 8] in a
single basic-block (the space to add the extra branch without
changing code size is bought with the above change).
Change (2) has roughly a 20-25% speedup for sizes in [VEC_SIZE * 4 +
1, VEC_SIZE * 6] and negligible to no-cost for [VEC_SIZE * 6 + 1,
VEC_SIZE * 8].
From N=10 runs on Tigerlake:
align1,align2 ,length ,result ,New Time ,Cur Time ,New Time / Old Time
0 ,0 ,129 ,0 ,5.404 ,6.887 ,0.785
0 ,0 ,129 ,1 ,5.308 ,6.826 ,0.778
0 ,0 ,129 ,18446744073709551615 ,5.359 ,6.823 ,0.785
0 ,0 ,161 ,0 ,5.284 ,6.827 ,0.774
0 ,0 ,161 ,1 ,5.317 ,6.745 ,0.788
0 ,0 ,161 ,18446744073709551615 ,5.406 ,6.778 ,0.798
0 ,0 ,193 ,0 ,6.804 ,6.802 ,1.000
0 ,0 ,193 ,1 ,6.950 ,6.754 ,1.029
0 ,0 ,193 ,18446744073709551615 ,6.792 ,6.719 ,1.011
0 ,0 ,225 ,0 ,6.625 ,6.699 ,0.989
0 ,0 ,225 ,1 ,6.776 ,6.735 ,1.003
0 ,0 ,225 ,18446744073709551615 ,6.758 ,6.738 ,0.992
0 ,0 ,256 ,0 ,5.402 ,5.462 ,0.989
0 ,0 ,256 ,1 ,5.364 ,5.483 ,0.978
0 ,0 ,256 ,18446744073709551615 ,5.341 ,5.539 ,0.964
Rewriting with VMM API allows for memcmpeq-evex to be used with
evex512 by including "x86-evex512-vecs.h" at the top.
Complete check passes on x86-64.
|
| |
The only change to the existing generated code is `tzcnt` -> `bsf` to
save a byte of code size here and there.
Rewriting with VMM API allows for memcmp-evex-movbe to be used with
evex512 by including "x86-evex512-vecs.h" at the top.
Complete check passes on x86-64.
|
| |
Changes from v1:
Use vec api for register.
Replace VPCMP with VPCMPEQ
Restructure and remove 1 unconditional jump.
Change page cross logic to use sall.
This patch implements the following evex512 versions of string functions.
The evex512 version takes up to 30% fewer cycles compared to evex,
depending on length and alignment.
- strrchr function using 512 bit vectors.
- wcsrchr function using 512 bit vectors.
Code size data:
strrchr-evex.o 879 byte
strrchr-evex512.o 601 byte (-32%)
wcsrchr-evex.o 882 byte
wcsrchr-evex512.o 572 byte (-35%)
Placeholder function, not used by any processor at the moment.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
|
| |
This makes it more likely that the compiler can compute the strlen
argument in _startup_fatal at compile time, which is required to
avoid a dependency on strlen this early during process startup.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
|
| |
The old exception handling implementation used function interposition
to replace the dynamic loader implementation (no TLS support) with the
libc implementation (TLS support). This results in problems if the
link order between the dynamic loader and libc is reversed (bug 25486).
The new implementation moves the entire implementation of the
exception handling functions back into the dynamic loader, using
THREAD_GETMEM and THREAD_SETMEM for thread-local data support.
This depends on Hurd support for these macros, added in commit
b65a82e4e757c1e6cb7073916 ("hurd: Add THREAD_GET/SETMEM/_NC").
One small obstacle is that the exception handling facilities are used
before the TCB has been set up, so a check is needed if the TCB is
available. If not, a regular global variable is used to store the
exception handling information.
Also rename dl-error.c to dl-catch.c, to avoid confusion with the
dlerror function.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
|
| |
GCC 13 has added more _FloatN and _FloatNx versions of existing
<math.h> and <complex.h> built-in functions, for use in libstdc++-v3.
This breaks the glibc build because of how those functions are defined
as aliases to functions with the same ABI but different types. Add
appropriate -fno-builtin-* options for compiling relevant files, as
already done for the case of long double functions aliasing double
ones and based on the list of files used there.
I fixed some mistakes in that list of double files that I noticed
while implementing this fix, but there may well be more such
(harmless) cases, in this list or the new one (files that don't
actually exist or don't define the named functions as aliases so don't
need the options). I did try to exclude cases where glibc doesn't
define certain functions for _FloatN or _FloatNx types at all from the
new uses of -fno-builtin-* options. As with the options for double
files (see the commit message for commit
49348beafe9ba150c9bd48595b3f372299bddbb0, "Fix build with GCC 10 when
long double = double."), it's deliberate that the options are used
even if GCC currently doesn't have a built-in version of a given
function, thus providing some level of future-proofing against more
such built-in functions being added in future.
Tested with build-many-glibcs.py for aarch64-linux-gnu
powerpc-linux-gnu powerpc64le-linux-gnu x86_64-linux-gnu (compilers
and glibcs builds) with GCC mainline.
|
| |
This patch improves the following functionality:
- Replace VPCMP with VPCMPEQ.
- Replace page cross check logic with sall.
- Remove extra lea from align_more.
- Remove unconditional loop jump.
- Use bsf to check max length in first vector.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
|
| |
This patch implements the following evex512 versions of string functions.
The evex512 version takes up to 30% fewer cycles compared to evex,
depending on length and alignment.
- strchrnul function using 512 bit vectors.
- strchr function using 512 bit vectors.
- wcschr function using 512 bit vectors.
Code size data:
strchrnul-evex.o 599 byte
strchrnul-evex512.o 569 byte (-5%)
strchr-evex.o 639 byte
strchr-evex512.o 595 byte (-7%)
wcschr-evex.o 644 byte
wcschr-evex512.o 607 byte (-6%)
Placeholder function, not used by any processor at the moment.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
|
| |
Generic implementation on top of __bswap_32 always expands
inline to either bswap or movbe depending on -march=*.
Signed-off-by: Cristian Rodríguez <crrodriguez@opensuse.org>
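A sketch of the idea (assumptions: this mirrors the description above,
not the exact glibc sources; `my_htonl` is a hypothetical name):
```
#include <byteswap.h>
#include <stdint.h>
#include <stdio.h>

static inline uint32_t my_htonl (uint32_t x)
{
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    return bswap_32 (x);   /* expands inline to bswap or movbe with -march=* */
#else
    return x;
#endif
}

int main (void)
{
    printf ("%#x\n", my_htonl (0x11223344));   /* 0x44332211 on little-endian */
    return 0;
}
```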
|
| |
`testb` saves a bit of code size if the imm-operand can be encoded
in 1 byte.
Tested on x86-64.
|
| |
`testb` saves a bit of code size if the imm-operand can be encoded
in 1 byte.
Tested on x86-64.
|
| |
`testb` saves a bit of code size if the imm-operand can be encoded
in 1 byte.
Tested on x86-64.
|
| |
`testb` saves a bit of code size if the imm-operand can be encoded
in 1 byte.
Tested on x86-64.
|
| |
Unused at the moment, but evex512 strcmp, strncmp, strcasecmp{l}, and
strncasecmp{l} functions can be added by including strcmp-evex.S with
"x86-evex512-vecs.h" defined.
In addition, save a bit of code size in a few places.
1. tzcnt ... -> bsf ...
2. vpcmp{b|d} $0 ... -> vpcmpeq{b|d}
This saves a touch of code size but has minimal net effect.
Full check passes on x86-64.
|
| |
commit b412213eee0afa3b51dfe92b736dfc7c981309f5
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date: Tue Oct 18 17:44:07 2022 -0700
x86: Optimize strrchr-evex.S and implement with VMM headers
Added `vpcompress{b|d}` to the page-cross logic, which is an
AVX512-VBMI2 instruction. This is not supported on SKX. Since the
page-cross logic is relatively cold and the benefit is minimal
revert the page-cross case back to the old logic which is supported
on SKX.
Tested on x86-64.
|
| |
Optimization is:
1. Cache latest result in "fast path" loop with `vmovdqu` instead of
`kunpckdq`. This helps if there is more than one match.
Code Size Changes:
strrchr-evex.S : +30 bytes (Same number of cache lines)
Net perf changes:
Reported as geometric mean of all improvements / regressions from N=10
runs of the benchtests. Value as New Time / Old Time so < 1.0 is
improvement and 1.0 is regression.
strrchr-evex.S : 0.932 (From cases with higher match frequency)
Full results attached in email.
Full check passes on x86-64.
|
| |
Optimizations are:
1. Use the fact that lzcnt(0) -> VEC_SIZE for memchr to save a branch
in short string case (see the sketch after this list).
2. Save several instructions in len = [VEC_SIZE, 4 * VEC_SIZE] case.
3. Use more code-size efficient instructions.
- tzcnt ... -> bsf ...
- vpcmpb $0 ... -> vpcmpeq ...
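A tiny C sketch of the lzcnt(0) property used in item 1 (illustrative
only; needs LZCNT support, e.g. `-mlzcnt`): the count for an all-zero
match mask is the full operand width, so "no match in this vector"
naturally maps to position == VEC_SIZE without an extra branch.
```
#include <immintrin.h>
#include <stdio.h>

int main (void)
{
    unsigned int match_mask = 0;               /* no byte matched in this vector */
    printf ("%u\n", _lzcnt_u32 (match_mask));  /* prints 32 */
    return 0;
}
```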
Code Size Changes:
memrchr-evex.S : -29 bytes
Net perf changes:
Reported as geometric mean of all improvements / regressions from N=10
runs of the benchtests. Value as New Time / Old Time so < 1.0 is
improvement and 1.0 is regression.
memrchr-evex.S : 0.949 (Mostly from improvements in small strings)
Full results attached in email.
Full check passes on x86-64.
|
| |
Optimizations are:
1. Use the fact that bsf(0) leaves the destination unchanged to save a
branch in short string case.
2. Restructure code so that small strings are given the hot path.
- This is a net-zero on the benchmark suite but in general makes
sense as smaller sizes are far more common.
3. Use more code-size efficient instructions.
- tzcnt ... -> bsf ...
- vpcmpb $0 ... -> vpcmpeq ...
4. Align labels less aggressively, especially if it doesn't save fetch
blocks / causes the basic-block to span extra cache-lines.
The optimizations (especially for point 2) make the strnlen and
strlen code essentially incompatible so split strnlen-evex
to a new file.
Code Size Changes:
strlen-evex.S : -23 bytes
strnlen-evex.S : -167 bytes
Net perf changes:
Reported as geometric mean of all improvements / regressions from N=10
runs of the benchtests. Value as New Time / Old Time so < 1.0 is
improvement and 1.0 is regression.
strlen-evex.S : 0.992 (No real change)
strnlen-evex.S : 0.947
Full results attached in email.
Full check passes on x86-64.
|
| |
Size Optimizations:
1. Condense hot path for better cache-locality.
- This is most impactful for strchrnul, where the logic for strings
with len <= VEC_SIZE or with a match in the first VEC now fits
entirely in the first cache line.
2. Reuse common targets in first 4x VEC and after the loop.
3. Don't align targets so aggressively if it doesn't change the number
of fetch blocks it will require and put more care in avoiding the
case where targets unnecessarily split cache lines.
4. Align the loop better for DSB/LSD
5. Use more code-size efficient instructions.
- tzcnt ... -> bsf ...
- vpcmpb $0 ... -> vpcmpeq ...
6. Align labels less aggressively, especially if it doesn't save fetch
blocks / causes the basic-block to span extra cache-lines.
Code Size Changes:
strchr-evex.S : -63 bytes
strchrnul-evex.S: -48 bytes
Net perf changes:
Reported as geometric mean of all improvements / regressions from N=10
runs of the benchtests. Value as New Time / Old Time so < 1.0 is
improvement and 1.0 is regression.
strchr-evex.S (Fixed) : 0.971
strchr-evex.S (Rand) : 0.932
strchrnul-evex.S : 0.965
Full results attached in email.
Full check passes on x86-64.
|
| |
Optimizations are:
1. Use the fact that tzcnt(0) -> VEC_SIZE for memchr to save a branch
in short string case.
2. Restructure code so that small strings are given the hot path.
- This is a net-zero on the benchmark suite but in general makes
sense as smaller sizes are far more common.
3. Use more code-size efficient instructions.
- tzcnt ... -> bsf ...
- vpcmpb $0 ... -> vpcmpeq ...
4. Align labels less aggressively, especially if it doesn't save fetch
blocks / causes the basic-block to span extra cache-lines.
The optimizations (especially for point 2) make the memchr and
rawmemchr code essentially incompatible so split rawmemchr-evex
to a new file.
Code Size Changes:
memchr-evex.S : -107 bytes
rawmemchr-evex.S : -53 bytes
Net perf changes:
Reported as geometric mean of all improvements / regressions from N=10
runs of the benchtests. Value as New Time / Old Time so < 1.0 is
improvement and 1.0 is regression.
memchr-evex.S : 0.928
rawmemchr-evex.S : 0.986 (Less targets cross cache lines)
Full results attached in email.
Full check passes on x86-64.
|
| |
This patch implements the following evex512 versions of string functions.
The evex512 version takes up to 30% fewer cycles compared to evex,
depending on length and alignment.
- memchr function using 512 bit vectors.
- rawmemchr function using 512 bit vectors.
- wmemchr function using 512 bit vectors.
Code size data:
memchr-evex.o 762 byte
memchr-evex512.o 576 byte (-24%)
rawmemchr-evex.o 461 byte
rawmemchr-evex512.o 412 byte (-11%)
wmemchr-evex.o 794 byte
wmemchr-evex512.o 552 byte (-30%)
Placeholder function, not used by any processor at the moment.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
|
| |
In the future, this will result in a compilation failure if the
macros are unexpectedly undefined (due to header inclusion ordering
or header inclusion missing altogether).
Assembler sources are more difficult to convert. In many cases,
they are hand-optimized for the mangling and no-mangling variants,
which is why they are not converted.
sysdeps/s390/s390-32/__longjmp.c and sysdeps/s390/s390-64/__longjmp.c
are special: These are C sources, but most of the implementation is
in assembler, so the PTR_DEMANGLE macro has to be undefined in some
cases, to match the assembler style.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
| |
This allows us to define a generic no-op version of PTR_MANGLE and
PTR_DEMANGLE. In the future, we can use PTR_MANGLE and PTR_DEMANGLE
unconditionally in C sources, avoiding an unintended loss of hardening
due to missing include files or unlucky header inclusion ordering.
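A hedged sketch of what such a generic no-op definition can look like
(hypothetical, not the actual glibc header):
```
/* Fallback used when the architecture does not define pointer mangling;
   C code can then use PTR_MANGLE/PTR_DEMANGLE unconditionally.  */
#ifndef PTR_MANGLE
# define PTR_MANGLE(var)   ((void) (var))
# define PTR_DEMANGLE(var) ((void) (var))
#endif
```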
In i386 and x86_64, we can avoid a <tls.h> dependency in the C
code by using the computed constant from <tcb-offsets.h>. <sysdep.h>
no longer includes these definitions, so there is no cyclic dependency
anymore when computing the <tcb-offsets.h> constants.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
| |
This way, we can define the pointer guard macros without including
<sysdep.h> on x86-64. Other architectures will not have such an
inclusion dependency, and the implied header file inclusion would
create a porting hazard.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
| |
To avoid duplicating the VMM / GPR / mask insn macros in all incoming
evex512 files, use the macros defined in 'reg-macros.h' and
'{vec}-macros.h'
This commit does not change libc.so
Tested build on x86-64
|
| |
This commit does not change libc.so
Tested build on x86-64
|
| |
Replace %VEC(n) -> %VMM(n)
This commit does not change libc.so
Tested build on x86-64
|
| |
Replace %VEC(n) -> %VMM(n)
This commit does not change libc.so
Tested build on x86-64
|
| |
Replace %VEC(n) -> %VMM(n)
This commit does not change libc.so
Tested build on x86-64
|
| |
1) Copy so that backport will be easier.
2) Make section only define if there is not a previous definition
3) Add `VEC_lo` definition for proper reg-width but in the
ymm/zmm0-15 range.
4) Add macros for accessing GPRs based on VEC_SIZE
This is to make it easier to do things like:
```
vpcmpb %VEC(0), %VEC(1), %k0
kmov{d|q} %k0, %{eax|rax}
test %{eax|rax}
```
It adds macros s.t. any GPR can get the proper width with:
`V{upcase_GPR_name}`
and any mask insn can get the proper width with:
`{upcase_mask_insn_without_postfix}`
This commit does not change libc.so
Tested build on x86-64
|
| |
Besides the option being gcc specific, this approach is still fragile
and not future proof since we do not know if this will be the only
optimization option gcc will add that transforms loops to memset
(or any libcall).
This patch adds a new header, dl-symbol-redir-ifunc.h, that can be
used to redirect the compiler-generated libcalls to the generic
memset implementation if required.
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
|
| |
This was to test loading of shared libraries from platform
subdirectories, but this functionality is going away in the
following commits.
Signed-off-by: Javier Pello <devel@otheo.eu>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
| |
The compiler might transform __stpcpy calls (which are routed to
__builtin_stpcpy as an optimization) to strcpy and x86_64 strcpy
multiarch implementation does not build any working symbol due to
ISA_SHOULD_BUILD not being evaluated for IS_IN(rtld).
Checked on x86_64-linux-gnu.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
|
| |
The AVX2 strrchr and wcsrchr implementation uses the 'blsmsk'
instruction which belongs to the BMI1 CPU feature and the 'shrx'
instruction, which belongs to the BMI2 CPU feature.
Fixes: df7e295d18ff ("x86: Optimize {str|wcs}rchr-avx2")
Partially resolves: BZ #29611
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
|
| |
The AVX2 memrchr implementation uses the 'shlxl' instruction, which
belongs to the BMI2 CPU feature and uses the 'lzcnt' instruction, which
belongs to the LZCNT CPU feature.
Fixes: af5306a735eb ("x86: Optimize memrchr-avx2.S")
Partially resolves: BZ #29611
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
|