| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
| |
This commit doesn't affect libc.so.6; it's just housekeeping to prepare
for adding explicit ISA level support.
Tested build on x86_64 and x86_32 with/without multiarch.
|
|
|
|
|
|
|
| |
This commit doesn't affect libc.so.6; it's just housekeeping to prepare
for adding explicit ISA level support.
Tested build on x86_64 and x86_32 with/without multiarch.
|
|
|
|
|
|
|
| |
This commit doesn't affect libc.so.6; it's just housekeeping to prepare
for adding explicit ISA level support.
Tested build on x86_64 and x86_32 with/without multiarch.
|
|
|
|
|
|
|
| |
This commit doesn't affect libc.so.6; it's just housekeeping to prepare
for adding explicit ISA level support.
Tested build on x86_64 and x86_32 with/without multiarch.
|
|
|
|
|
|
|
| |
This commit doesn't affect libc.so.6; it's just housekeeping to prepare
for adding explicit ISA level support.
Tested build on x86_64 and x86_32 with/without multiarch.
|
|
|
|
|
|
|
| |
This commit doesn't affect libc.so.6; it's just housekeeping to prepare
for adding explicit ISA level support.
Tested build on x86_64 and x86_32 with/without multiarch.
|
|
|
|
|
|
|
| |
This commit doesn't affect libc.so.6; it's just housekeeping to prepare
for adding explicit ISA level support.
Tested build on x86_64 and x86_32 with/without multiarch.
|
|
|
|
|
|
|
|
|
|
|
|
| |
This commit doesn't affect libc.so.6; it's just housekeeping to prepare
for adding explicit ISA level support.
Because strcmp-sse2.S implements so many functions (more from
avx2/evex/sse42), add a new file 'strcmp-naming.h' to assist in
getting the correct symbol name for all the functions across
multiarch/non-multiarch builds.
Tested build on x86_64 and x86_32 with/without multiarch.
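A naming helper like this usually boils down to a few conditional
defines; the sketch below is purely illustrative (the macro name and
conditions are assumptions, not the actual contents of strcmp-naming.h):

    /* Hypothetical sketch: pick the symbol that strcmp-sse2.S should
       define depending on the kind of build.  MULTIARCH_BUILD is an
       assumed placeholder condition.  */
    #if IS_IN (libc) && defined MULTIARCH_BUILD
    /* multiarch build: the ifunc selector dispatches to __strcmp_sse2.  */
    # define STRCMP __strcmp_sse2
    #else
    /* non-multiarch / rtld build: the file provides strcmp directly.  */
    # define STRCMP strcmp
    #endif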
|
|
|
|
|
|
| |
The previous macro name can be confusing given that both
`__strcasecmp_l_nonascii` and `__strcasecmp_nonascii` are
functions and we use the `_l` version.
|
|
|
|
|
|
|
|
|
|
|
|
| |
The intrinsics are not available before GCC 7, and using standard
operators generates code of equivalent or better quality.
Removed:
_cvtmask64_u64
_kshiftri_mask64
_kand_mask64
Geometric Mean of 5 Runs of Full Benchmark Suite New / Old: 0.958
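As a standalone illustration of the rewrite (not code from the patch;
the intrinsic variant requires AVX512BW, e.g. -mavx512bw):

    #include <immintrin.h>

    /* Both helpers AND two 64-bit masks, shift right by 8 and return
       the result as a plain integer.  */
    unsigned long long
    with_intrinsics (__mmask64 a, __mmask64 b)      /* needs GCC 7+ */
    {
      __mmask64 k = _kand_mask64 (a, b);
      k = _kshiftri_mask64 (k, 8);
      return _cvtmask64_u64 (k);
    }

    unsigned long long
    with_operators (unsigned long long a, unsigned long long b)
    {
      /* __mmask64 is just a 64-bit integer, so & and >> generate
         equivalent or better code and work with older compilers.  */
      return (a & b) >> 8;
    }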
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
These functions all have optimized versions:
__strncat_sse2_unaligned, __strncpy_sse2_unaligned, and
__stpncpy_sse2_unaligned, which are faster than their respective
generic implementations. Since the sse2 versions can run on baseline
x86_64, use them as the baseline implementations and remove the
generic implementations.
Geometric mean of N=20 runs of the entire benchmark suite on:
11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz (Tigerlake)
__strncat_sse2_unaligned / __strncat_generic: 0.944
__strncpy_sse2_unaligned / __strncpy_generic: 0.726
__stpncpy_sse2_unaligned / __stpncpy_generic: 0.650
Tested build with and without multiarch and full check with multiarch.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Remove redundant strcspn-generic, strpbrk-generic and strspn-generic
from sysdep_routines in sysdeps/x86_64/multiarch/Makefile added by
commit c69f960b017b2cdf39335739009526a72fb20379
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date: Sun Jul 3 21:28:07 2022 -0700
x86: Add support for building str{c|p}{brk|spn} with explicit ISA level
since they have been added to sysdep_routines in sysdeps/x86_64/Makefile.
|
|
|
|
|
| |
Don't mark symbols as hidden in strcmp-avx2.S, strcmp-evex.S and
strcmp-sse42.S since they are marked as hidden in the IFUNC selectors.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
1. Refactor files so that all implementations are in the multiarch
directory
- Moved the implementation portion of memcmp sse2 from memcmp.S to
multiarch/memcmp-sse2.S
- The non-multiarch file now only includes one of the
implementations in the multiarch directory based on the compiled
ISA level (only used for non-multiarch builds. Otherwise we go
through the ifunc selector).
2. Add ISA level build guards to different implementations.
- E.g. memcmp-avx2-movbe.S, which is ISA level 3, will only be built
if the compiled ISA level is <= 3. Otherwise there is no reason to
include it, as we will always use one of the ISA level 4
implementations (memcmp-evex-movbe.S).
3. Add new multiarch/rtld-{w}memcmp{eq}.S that just include the
non-multiarch {w}memcmp{eq}.S which will in turn select the best
implementation based on the compiled ISA level.
4. Refactor the ifunc selector and ifunc implementation list to use
the ISA level aware wrapper macros that allow functions below the
compiled ISA level (with a guaranteed replacement) to be skipped.
Tested with and without multiarch on x86_64 for ISA levels:
{generic, x86-64-v2, x86-64-v3, x86-64-v4}
And m32 with and without multiarch.
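The build guard in item 2 is roughly of the following shape (a sketch;
the exact macro spelling provided by <isa-level.h> is an assumption
here):

    /* At the top of an ISA level 3 file such as memcmp-avx2-movbe.S:
       only emit the code when a level 3 version can still be selected,
       i.e. when the compiled ISA level is <= 3.  */
    #include <isa-level.h>

    #if ISA_SHOULD_BUILD (3)
      /* ... AVX2 implementation ... */
    #endif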
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
1. Refactor files so that all implementations are in the multiarch
directory
- Moved the implementation portion of memset sse2 from memset.S to
multiarch/memset-sse2.S
- The non-multiarch file now only includes one of the
implementations in the multiarch directory based on the compiled
ISA level (only used for non-multiarch builds. Otherwise we go
through the ifunc selector).
2. Add ISA level build guards to different implementations.
- E.g. memset-avx2-unaligned-erms.S, which is ISA level 3, will
only be built if the compiled ISA level is <= 3. Otherwise there
is no reason to include it, as we will always use one of the ISA
level 4 implementations (memset-evex-unaligned-erms.S).
3. Add new multiarch/rtld-memset.S that just includes the
non-multiarch memset.S, which will in turn select the best
implementation based on the compiled ISA level.
4. Refactor the ifunc selector and ifunc implementation list to use
the ISA level aware wrapper macros that allow functions below the
compiled ISA level (with a guaranteed replacement) to be skipped.
Tested with and without multiarch on x86_64 for ISA levels:
{generic, x86-64-v2, x86-64-v3, x86-64-v4}
And m32 with and without multiarch.
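Item 3 amounts to a one-line wrapper, along these lines (a sketch of
the idea rather than the verbatim file):

    /* multiarch/rtld-memset.S (sketch): the rtld build reuses the
       non-multiarch memset.S, which statically picks the best
       implementation for the compiled ISA level.  */
    #include "../memset.S"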
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
1. Refactor files so that all implementations are in the multiarch
directory
- Moved the implementation portion of memmove sse2 from memmove.S
to multiarch/memmove-sse2.S
- The non-multiarch file now only includes one of the
implementations in the multiarch directory based on the compiled
ISA level (only used for non-multiarch builds. Otherwise we go
through the ifunc selector).
2. Add ISA level build guards to different implementations.
- E.g. memmove-avx2-unaligned-erms.S, which is ISA level 3, will
only be built if the compiled ISA level is <= 3. Otherwise there
is no reason to include it, as we will always use one of the ISA
level 4 implementations (memmove-evex-unaligned-erms.S).
3. Add new multiarch/rtld-memmove.S that just includes the
non-multiarch memmove.S, which will in turn select the best
implementation based on the compiled ISA level.
4. Refactor the ifunc selector and ifunc implementation list to use
the ISA level aware wrapper macros that allow functions below the
compiled ISA level (with a guaranteed replacement) to be skipped.
Tested with and without multiarch on x86_64 for ISA levels:
{generic, x86-64-v2, x86-64-v3, x86-64-v4}
And m32 with and without multiarch.
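The static selection in item 1 works conceptually like this
(simplified sketch; the real file funnels through a shared
default-implementation helper header rather than open-coding the chain):

    /* Sketch of the non-multiarch dispatch: include exactly one
       implementation, chosen at build time from the compiled ISA
       level.  */
    #if MINIMUM_X86_ISA_LEVEL >= 4
    # include "multiarch/memmove-evex-unaligned-erms.S"
    #elif MINIMUM_X86_ISA_LEVEL == 3
    # include "multiarch/memmove-avx-unaligned-erms.S"
    #else
    # include "multiarch/memmove-sse2-unaligned-erms.S"
    #endif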
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The changes for these functions are different from the others because
the best implementation (sse4_2) requires the generic implementation
to be built as well, to serve as a fallback.
Changes are:
1. Add non-multiarch functions for str{c|p}{brk|spn}.c to statically
select the best implementation based on the configured ISA build
level.
2. Add stubs for str{c|p}{brk|spn}-generic and varshift.c in the
sysdeps/x86_64 directory so that the sse4 implementation will
have all of its dependencies for the non-multiarch / rtld build
when ISA level >= 2.
3. Add new multiarch/rtld-strcspn.c that just includes the
non-multiarch strcspn.c, which will in turn select the best
implementation based on the compiled ISA level.
4. Refactor the ifunc selector and ifunc implementation list to use
the ISA level aware wrapper macros that allow functions below the
compiled ISA level (with a guaranteed replacement) to be skipped.
Tested with and without multiarch on x86_64 for ISA levels:
{generic, x86-64-v2, x86-64-v3, x86-64-v4}
And m32 with and without multiarch.
|
|
|
|
|
| |
Just for clarity's sake, and so that if a future implementation is
added we remember to add the check.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This was missing, so for the multiarch build rtld-strncmp-sse4_2.os
was being built and exported symbols:
build/glibc/string/rtld-strncmp-sse4_2.os:
0000000000000000 T __strncmp_sse42
Introduced in:
commit 11ffcacb64a939c10cfc713746b8ec88837f5c4a
Author: H.J. Lu <hjl.tools@gmail.com>
Date: Wed Jun 21 12:10:50 2017 -0700
x86-64: Implement strcmp family IFUNC selectors in C
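The missing guard is presumably of this general form (shown as a
sketch, not the exact diff):

    /* Only build the SSE4.2 implementation for libc proper, so the
       rtld object rtld-strncmp-sse4_2.os contains no code and exports
       no __strncmp_sse42 symbol.  */
    #if IS_IN (libc)
      /* ... SSE4.2 strncmp implementation ... */
    #endif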
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This was missing, so for the multiarch build rtld-strcspn-sse4.os
was being built and exported symbols:
build/glibc/string/rtld-strcspn-sse4.os:
U ___m128i_shift_right
U __strcspn_generic
0000000000000000 T __strcspn_sse42
U strlen
build/glibc/string/rtld-varshift.os:
0000000000000000 R ___m128i_shift_right
Introduced in:
commit 06e51c8f3de38761f8855700841bc49cf495c8c0
Author: H.J. Lu <hongjiu.lu@intel.com>
Date: Fri Jul 3 02:48:56 2009 -0700
Add SSE4.2 support for strcspn, strpbrk, and strspn on x86-64.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This was missing, so for the multiarch build rtld-memmove-ssse3.os
was being built and exported symbols:
>$ nm string/rtld-memmove-ssse3.os
U __GI___chk_fail
0000000000000020 T __memcpy_chk_ssse3
0000000000000040 T __memcpy_ssse3
0000000000000020 T __memmove_chk_ssse3
0000000000000040 T __memmove_ssse3
0000000000000000 T __mempcpy_chk_ssse3
0000000000000010 T __mempcpy_ssse3
U __x86_shared_cache_size_half
Introduced after 2.35 in:
commit 26b2478322db94edc9e0e8f577b2f71d291e5acb
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date: Thu Apr 14 11:47:40 2022 -0500
x86: Reduce code size of mem{move|pcpy|cpy}-ssse3
|
|
|
|
|
|
|
| |
Properly indent X86_IFUNC_IMPL_ADD_VN arguments for memchr, rawmemchr
and wmemchr.
Co-authored-by: H.J. Lu <hjl.tools@gmail.com>
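For reference, the convention is that continuation arguments line up
under the first argument; the feature tests below are illustrative
assumptions, not the exact entries in the file:

    X86_IFUNC_IMPL_ADD_V4 (array, i, memchr,
                           (CPU_FEATURE_USABLE (AVX512VL)
                            && CPU_FEATURE_USABLE (AVX512BW)),
                           __memchr_evex)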
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
1. Remove sse2 instructions when using the avx512 or avx version.
2. Fixup some format nits in how the address offsets were aligned.
3. Use more space efficient instructions in the conditional AVX
restoral.
- vpcmpeqq -> vpcmpeqb
- cmp imm32, r; jz -> inc r; jz
4. Use `rep movsb` instead of `rep movsq`. The former is guaranteed to
be fast with the ERMS flag, the latter is not. The latter also
wastes an instruction on size setup.
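A standalone illustration of point 4 (not the patch itself): with
ERMS, `rep movsb` takes the byte count directly, whereas a `rep movsq`
path first has to scale the count down to quadwords.

    #include <stddef.h>

    /* Forward copy using rep movsb; the byte count goes straight into
       %rcx, no size setup needed.  */
    static void *
    copy_erms (void *dst, const void *src, size_t n)
    {
      void *ret = dst;
      __asm__ volatile ("rep movsb"
                        : "+D" (dst), "+S" (src), "+c" (n)
                        :
                        : "memory");
      return ret;
    }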
|
|
|
|
|
|
| |
The primary memmove_{impl}_unaligned_erms implementations don't
interact with this function. Putting them in the same file both
wastes space and unnecessarily bloats a hot code section.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Implementation-wise:
1. Remove the VZEROUPPER as memset_{impl}_unaligned_erms does not
use the L(stosb) label that was previously defined.
2. Don't give the hotpath (fallthrough) to zero size.
Code positioning-wise:
Move memset_{chk}_erms to its own file. Leaving it in between the
memset_{impl}_unaligned implementations both adds unnecessary complexity to the
file and wastes space in a relatively hot cache section.
|
|
|
|
| |
This was simply missing and meant we weren't testing it properly.
|
|
|
|
|
|
|
| |
When glibc is built with x86-64 ISA level v3, SSE run-time resolvers
aren't used. For x86-64 ISA level v4 build, both SSE and AVX resolvers
are unused. Check the minimum x86-64 ISA level to exclude the unused
run-time resolvers.
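Conceptually the exclusion looks like this (a sketch following the
ifunc header conventions; the specific declaration shown is an
assumption):

    #include <isa-level.h>

    /* Only declare/emit the SSE resolver when the baseline is below
       x86-64-v3; a v3 or v4 build can never select it.  */
    #if MINIMUM_X86_ISA_LEVEL < 3
    extern __typeof (REDIRECT_NAME) OPTIMIZE (sse2) attribute_hidden;
    #endif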
|
|
|
|
|
|
|
|
|
| |
Add a third argument to the X86_ISA_CPU_FEATURES_ARCH_P macro so the
runtime CPU_FEATURES_ARCH_P check can be inverted if
MINIMUM_X86_ISA_LEVEL is not high enough to make the check a
compile-time constant.
Use this new macro to correct the backwards check in ifunc-evex.h.
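Usage-wise the third argument is either left empty or set to `!`;
roughly (illustrative, with the feature names treated as assumptions):

    /* Empty third argument: ordinary runtime check.  */
    if (X86_ISA_CPU_FEATURES_ARCH_P (cpu_features,
                                     AVX_Fast_Unaligned_Load, ))
      { /* ... */ }

    /* `!` third argument: inverted runtime check, as needed for the
       Prefer_No_VZEROUPPER test in ifunc-evex.h.  */
    if (X86_ISA_CPU_FEATURES_ARCH_P (cpu_features,
                                     Prefer_No_VZEROUPPER, !))
      { /* ... */ }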
|
|
|
|
| |
This is in accordance with other files in the multiarch directory.
|
|
|
|
|
|
|
|
|
|
|
|
| |
The memcmp-sse4 was removed in:
commit 7cbc03d03091d5664060924789afe46d30a5477e
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date: Fri Apr 15 12:28:00 2022 -0500
x86: Remove memcmp-sse4.S
so this file does nothing.
|
|
|
|
|
| |
This was previously missing, but the two implementations shouldn't end
up in the sse2 (generic) text section.
|
|
|
|
|
|
|
|
|
| |
The function was tuned around 64-byte entry alignment and performs
better for all sizes with it.
As well, different code paths were explicitly written to touch the
minimum number of cache lines, i.e. sizes <= 32 touch only the entry
cache line.
|
|
|
|
|
|
|
|
|
|
| |
The sanity tests were meant to ensure that the default implementation
was only being built without multiarch, with the exception of the
multiarch/rtld-*.S files.
The code used IS_IN (rtld) to check whether the build was for a
multiarch/rtld-*.S file, which is incorrect, as IS_IN (rtld) is set for
the non-multiarch build as well.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Most of these don't really matter, as there was no dirty upper state,
but we should generally avoid stray SSE when it's not needed.
The one case that really matters is in svml_d_tanh4_core_avx2.S:
blendvps %xmm0, %xmm8, %xmm7
where there was a dirty upper state.
Tested on x86_64-linux
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
1. Refactor files so that all implementations are in the multiarch
directory.
- Essentially moved sse2 {raw|w}memchr.S implementation to
multiarch/{raw|w}memchr-sse2.S
- The non-multiarch {raw|w}memchr.S file now only includes one of
the implementations in the multiarch directory based on the
compiled ISA level (only used for non-multiarch builds.
Otherwise we go through the ifunc selector).
2. Add ISA level build guards to different implementations.
- E.g. memchr-avx2.S, which is ISA level 3, will only be built if
the compiled ISA level is <= 3. Otherwise there is no reason to include
it, as we will always use one of the ISA level 4
implementations (memchr-evex{-rtm}.S).
3. Add new multiarch/rtld-{raw}memchr.S that just include the
non-multiarch {raw}memchr.S which will in turn select the best
implementation based on the compiled ISA level.
4. Refactor the ifunc selector and ifunc implementation list to use
the ISA level aware wrapper macros that allow functions below the
compiled ISA level (with a guaranteed replacement) to be skipped.
- Guaranteed replacement essentially means that for any ISA level
build there must be a function that the baseline of the ISA
supports. So for {raw|w}memchr.S, since there is no ISA level 2
function, the ISA level 2 build still includes the ISA level
1 (sse2) function. Once we reach the ISA level 3 build, however,
{raw|w}memchr-avx2{-rtm}.S will always be sufficient, so the ISA
level 1 implementation ({raw|w}memchr-sse2.S) will not be built.
Tested with and without multiarch on x86_64 for ISA levels:
{generic, x86-64-v2, x86-64-v3, x86-64-v4}
And m32 with and without multiarch.
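One way to picture the "guaranteed replacement" rule from item 4 (an
illustrative re-implementation, not the actual definition in glibc's
ISA macro header): an ISA level 3 entry simply compiles away once the
build's minimum level guarantees a level 4 implementation.

    #if MINIMUM_X86_ISA_LEVEL >= 4
    /* A v4 build always has memchr-evex available, so level 3 entries
       (memchr-avx2*) can be dropped from the implementation list.  */
    # define X86_IFUNC_IMPL_ADD_V3(array, i, func, test, impl)
    #else
    # define X86_IFUNC_IMPL_ADD_V3(array, i, func, test, impl) \
        IFUNC_IMPL_ADD (array, i, func, test, impl)
    #endif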
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
1. Factor out some of the ISA level defines in isa-level.c to
standalone header isa-level.h
2. Add new headers with ISA level dependent macros for handling
ifuncs.
Note: this patch does not change any code.
Tested with and without multiarch on x86_64 for ISA levels:
{generic, x86-64-v2, x86-64-v3, x86-64-v4}
And m32 with and without multiarch.
|
|
|
|
|
|
|
|
|
|
| |
No functions are changed. It just renames generic implementations from
'{func}_sse2' to '{func}_generic'. This is just because the suffix
"_sse2" was overloaded: it was used both for files that had
hand-optimized sse2 assembly implementations and for files that just
redirected back to the generic implementation.
Full xcheck passed on x86_64.
|
|
|
|
|
|
|
|
|
|
| |
The RTLD_BOOTSTRAP branch is used to relocate ld.so itself. It only
needs to handle RELATIVE, GLOB_DAT, and JUMP_SLOT. RELATIVE has been
handled (by _ELF_DYNAMIC_DO_RELOC due to DT_RELACOUNT, or RELR), so the
switch statement only needs to handle GLOB_DAT and JUMP_SLOT.
We can drop these `#if[n]def RTLD_BOOTSTRAP` and add a large
`# ifndef RTLD_BOOTSTRAP` instead.
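In outline, the bootstrap path of elf_machine_rela then reduces to
something like this (a simplified sketch, not the actual glibc code):

    #ifdef RTLD_BOOTSTRAP
      /* RELATIVE relocations were already applied via DT_RELACOUNT or
         RELR, so ld.so's self-relocation only sees these two types.  */
      switch (r_type)
        {
        case R_X86_64_GLOB_DAT:
        case R_X86_64_JUMP_SLOT:
          *reloc_addr = sym_map->l_addr + sym->st_value;
          break;
        }
    #else
      /* Full switch handling every relocation type that shared objects
         and executables may use.  */
    #endif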
|
|
|
|
|
|
|
|
|
|
|
|
| |
1. Fix incorrect lower-bound threshold in L(large_memcpy_2x).
It was previously using `__x86_rep_movsb_threshold` when it should
have been using `__x86_shared_non_temporal_threshold`.
2. Avoid reloading __x86_shared_non_temporal_threshold before
the L(large_memcpy_4x) bounds check.
3. Document the second bounds check for L(large_memcpy_4x)
more clearly.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If an executable has copy relocations for extern protected data, that
can only work if the library containing the definition is built with
assumptions (a) the compiler emits GOT-generating relocations (b) the
linker produces R_*_GLOB_DAT instead of R_*_RELATIVE. Otherwise the
library uses its own definition directly and the executable accesses a
stale copy. Note: the GOT relocations defeat the purpose of protected
visibility as an optimization, but allow rtld to make the executable and
library use the same copy when copy relocations are present, though it
turns out this never worked perfectly.
ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA has strange semantics when both
a.so and b.so define protected var and the executable copy relocates
var: b.so accesses its own copy even with GLOB_DAT. The behavior change
is from commit 62da1e3b00b51383ffa7efc89d8addda0502e107 (x86) and then
copied to nios2 (ae5eae7cfc9c4a8297ff82ec6b794faca1976ecc) and arc
(0e7d930c4c11de896fe807f67fa1eb756c9c1e05).
Without ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA, b.so accesses the copy
relocated data like a.so.
There is now a warning for copy relocation on protected symbol since
commit 7374c02b683b7110b853a32496a619410364d70b. It's extremely
unlikely anyone relies on the ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA
behavior, so let's remove it: this removes a check in the symbol lookup
code.
|
|
|
|
|
|
|
|
|
| |
This has been missing since the ifuncs were added.
The performance of SSE4.2 is preferable to SSE2.
Measured on Tigerlake with N = 20 runs.
Geometric Mean of all benchmarks SSE4.2 / SSE2: 0.906
|
|
|
|
|
|
|
|
|
|
|
|
| |
Add a proper bounds check to __libc_ifunc_impl_list. This makes MAX_IFUNC
redundant and fixes several targets that will write outside the array.
To avoid unnecessary large diffs, pass the maximum in the argument 'i' to
IFUNC_IMPL_ADD - 'max' can be used in new ifunc definitions and existing
ones can be updated if desired.
Passes buildmanyglibc.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Optimizations are:
1. Reduce code size (-112 bytes).
2. Remove redundant move instructions.
3. Slightly improve instruction selection/scheduling where
possible.
4. Prefer registers which get short instruction encoding.
5. Reduce rodata size (-4k+ rodata is shared with avx2).
Result is roughly a 15-16% speedup:
Function, New Time, Old Time, New / Old
_ZGVbN4v_tanhf, 3.158, 3.749, 0.842
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Optimizations are:
1. Reduce code size (-81 bytes).
2. Remove redundant move instructions.
3. Slightly improve instruction selection/scheduling where
possible.
4. Prefer registers which get short instruction encoding.
5. Reduce rodata size (-32 bytes).
Result is roughly a 17-18% speedup:
Function, New Time, Old Time, New / Old
_ZGVdN8v_tanhf, 1.977, 2.402, 0.823
|
|
|
|
|
|
|
|
|
|
| |
tanhf-avx2 and tanhf-sse4 use the same data tables so we can save
over 4kb by using a shared data table. This does increase the memory
footprint of the sse4 version (as now all the targets are 32 bytes
instead of 16), but generally it seems worth the code size savings.
NB: This patch doesn't do anything itself; it is setup for future
patches.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Optimizations are:
1. Reduce code size (-67 bytes).
2. Remove redundant move instructions.
3. Slightly improve instruction selection/scheduling where
possible.
4. Reduce rodata usage (-448 bytes).
Result is roughly a 14% speedup:
Function, New Time, Old Time, New / Old
_ZGVeN16v_tanhf, 0.649, 0.752, 0.863
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Improvements are:
1. Reduce code size (-62 bytes).
2. Remove redundant move instructions.
3. Slightly improve instruction selection/scheduling where
possible.
4. Prefer registers which get short instruction encoding.
5. Reduce rodata usage (-16 bytes).
The throughput improvement is not significant as the port 0 bottleneck
is unavoidable.
Function, New Time, Old Time, New / Old
_ZGVbN4v_atanhf, 8.821, 8.903, 0.991
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Improvements are:
1. Reduce code size (-60 bytes).
2. Remove redundant move instructions.
3. Slightly improve instruction selection/scheduling where
possible.
4. Prefer registers which get short instruction encoding.
5. Shrink rodata usage (-32 bytes).
The throughput improvement is not that significant (3-5%) as the
port 0 bottleneck is unavoidable.
Function, New Time, Old Time, New / Old
_ZGVdN8v_atanhf, 2.799, 2.923, 0.958
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Improvements are:
1. Reduce code size (-64 bytes).
2. Remove redundant move instructions.
3. Slightly improve instruction selection/scheduling where
possible.
4. Reduce rodata size ([-128, -188] bytes).
The throughput improvement is not significant as the port 0 bottleneck
is unavoidable.
Function, New Time, Old Time, New / Old
_ZGVeN16v_atanhf, 1.39, 1.408, 0.987
|
|
|
|
| |
This ensures the load will never split a cache line.
|