| Commit message |
|
In the x86-64 clone wrapper, align the child stack to 16 bytes, as
required by the x86-64 psABI.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
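A minimal sketch of the alignment in C terms (the actual change is in
the assembly wrapper; the helper name is illustrative):

  #include <stdint.h>

  /* Round a prospective child-stack pointer down to a 16-byte
     boundary, as the x86-64 psABI requires at function entry.
     Rounding is downward because the stack grows down.  */
  static inline void *
  align_child_stack (void *stack_top)
  {
    return (void *) ((uintptr_t) stack_top & ~(uintptr_t) 15);
  }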
|
git mv -f sysdeps/unix/sysv/linux/createthread.c nptl/createthread.c
No functional change.
|
This patch optimizes the performance of memset for A64FX [1], which
implements ARMv8-A SVE and has a 64 KB L1 cache per core and an 8 MB
L2 cache per NUMA node.
The optimization makes use of the scalable vector registers with
several techniques, such as loop unrolling, memory access alignment,
cache zero fill, and prefetch.
The SVE assembler code for memset is implemented as
vector-length-agnostic code, so in principle it can run on any SoC
that supports the ARMv8-A SVE standard.
We confirmed that all test cases pass by running 'make check' and
'make xcheck', not only on A64FX but also on ThunderX2. We also
confirmed with 'make bench' that the 512-bit SVE vector
implementation is roughly 4 times faster than the 128-bit Advanced
SIMD one and 8 times faster than the 64-bit scalar one.
[1] https://github.com/fujitsu/A64FX
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Reviewed-by: Szabolcs Nagy <Szabolcs.Nagy@arm.com>
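A minimal vector-length-agnostic sketch of the idea using SVE ACLE
intrinsics, not the commit's hand-tuned assembly (which adds
unrolling, alignment handling, zero fill and prefetch on top; the
function name is illustrative):

  #include <arm_sve.h>
  #include <stddef.h>
  #include <stdint.h>

  void *
  sve_memset (void *dst, int c, size_t n)
  {
    uint8_t *p = dst;
    svuint8_t v = svdup_n_u8 ((uint8_t) c);
    for (uint64_t i = 0; i < n; i += svcntb ())
      {
        /* The predicate masks off bytes past n, so no scalar tail
           loop is needed; svcntb () is the vector length in bytes.  */
        svbool_t pg = svwhilelt_b8_u64 (i, n);
        svst1_u8 (pg, p + i, v);
      }
    return dst;
  }

The same loop runs unmodified at any hardware vector length, which is
what makes such an implementation portable across SVE SoCs.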
|
This patch optimizes the performance of memcpy/memmove for A64FX [1],
which implements ARMv8-A SVE and has a 64 KB L1 cache per core and an
8 MB L2 cache per NUMA node.
The optimization makes use of the scalable vector registers with
several techniques, such as loop unrolling, memory access alignment,
cache zero fill, and software pipelining.
The SVE assembler code for memcpy/memmove is implemented as
vector-length-agnostic code, so in principle it can run on any SoC
that supports the ARMv8-A SVE standard.
We confirmed that all test cases pass by running 'make check' and
'make xcheck', not only on A64FX but also on ThunderX2. We also
confirmed with 'make bench' that the 512-bit SVE vector
implementation is roughly 4 times faster than the 128-bit Advanced
SIMD one and 8 times faster than the 64-bit scalar one.
[1] https://github.com/fujitsu/A64FX
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Reviewed-by: Szabolcs Nagy <Szabolcs.Nagy@arm.com>
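The same vector-length-agnostic structure, sketched for a forward
copy with ACLE intrinsics (again illustrative, not the commit's
unrolled and software-pipelined assembly):

  #include <arm_sve.h>
  #include <stddef.h>
  #include <stdint.h>

  void *
  sve_memcpy (void *restrict dst, const void *restrict src, size_t n)
  {
    uint8_t *d = dst;
    const uint8_t *s = src;
    for (uint64_t i = 0; i < n; i += svcntb ())
      {
        svbool_t pg = svwhilelt_b8_u64 (i, n);   /* mask past n */
        svst1_u8 (pg, d + i, svld1_u8 (pg, s + i));
      }
    return dst;
  }

memmove additionally has to choose a backward loop when the buffers
overlap with dst > src; the sketch only shows the forward direction.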
|
This patch adds a test helper script that changes the SVE vector
length for child processes. The script can be used as a test wrapper
for 'make check'.
Usage examples:
~/build$ make check subdirs=string \
test-wrapper='~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16'
~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16 \
make test t=string/test-memcpy
~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 32 \
./debugglibc.sh string/test-memmove
~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 64 \
./testrun.sh string/test-memset
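Presumably the script works via the PR_SVE_SET_VL prctl, which is the
interface Linux provides for this; a hedged C equivalent of such a
wrapper (the 'vlrun' framing and all names are illustrative):

  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/prctl.h>
  #include <unistd.h>

  #ifndef PR_SVE_SET_VL
  # define PR_SVE_SET_VL 50
  # define PR_SVE_VL_INHERIT (1 << 17)
  #endif

  /* vlrun VL command [args...]: set the SVE vector length (in
     bytes) so that it is inherited across execve, then run the
     wrapped test command.  */
  int
  main (int argc, char **argv)
  {
    if (argc < 3)
      {
        fprintf (stderr, "usage: %s VL command [args...]\n", argv[0]);
        return 2;
      }
    long vl = atol (argv[1]);
    if (prctl (PR_SVE_SET_VL, vl | PR_SVE_VL_INHERIT, 0, 0, 0) == -1)
      {
        perror ("PR_SVE_SET_VL");
        return 2;
      }
    execvp (argv[2], &argv[2]);
    perror ("execvp");
    return 2;
  }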
|
This patch defines the BTI_C and BTI_J macros conditionally for
performance.
If HAVE_AARCH64_BTI is true, BTI_C and BTI_J are defined as HINT
instructions for ARMv8.5 BTI (Branch Target Identification).
If HAVE_AARCH64_BTI is false, both BTI_C and BTI_J are defined as
NOP.
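A sketch of what the conditional definitions can look like in an
assembler-visible header ('bti c' and 'bti j' are the HINT #34 and
HINT #36 encodings, which older cores execute as NOPs):

  #if HAVE_AARCH64_BTI
  # define BTI_C hint 34
  # define BTI_J hint 36
  #else
  # define BTI_C nop
  # define BTI_J nop
  #endif

Using the raw HINT numbers keeps the code assemblable even with
toolchains that predate the BTI mnemonics.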
|
This patch checks whether the assembler supports '-march=armv8.2-a+sve'
to generate SVE code, and defines the HAVE_AARCH64_SVE_ASM macro
accordingly.
|
Since the variable expands to nothing under Linux, it is no longer
necessary to clutter the makefiles with it.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
Keep installing libpthread.a, so that -lpthread works.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
When using scv for templated ASM syscalls, the current code interprets
any negative return value as an error, but the only valid error codes
are in the range -4095..-1 according to the ABI.
This commit also fixes the 'signal.gen.test' strace test, where the
issue was first identified.
Reviewed-by: Matheus Castanho <msc@linux.ibm.com>
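A hedged model of the range check the ABI implies (mirroring the
usual kernel-syscall convention; the helper name is illustrative):

  #include <stdbool.h>

  /* Only values in [-4095, -1] denote errors; large "negative"
     results such as addresses returned by mmap must not be misread
     as failures.  */
  static inline bool
  syscall_ret_is_error (long ret)
  {
    return (unsigned long) ret >= (unsigned long) -4095L;
  }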
|
1. Replace

     if ((((uintptr_t) &_d) & (__alignof (double) - 1)) != 0)

   which may be optimized out by the compiler, with

     int
     __attribute__ ((weak, noclone, noinline))
     is_aligned (void *p, int align)
     {
       return (((uintptr_t) p) & (align - 1)) != 0;
     }

2. Add TEST_STACK_ALIGN_INIT to TEST_STACK_ALIGN.
3. Add a common TEST_STACK_ALIGN_INIT to check 16-byte stack alignment
   for both i386 and x86-64.
4. Update powerpc to use TEST_STACK_ALIGN_INIT.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
|
This patch changes the condition for the copy 4x VEC case so that a
length exactly equal to 4 * VEC_SIZE uses the 4x VEC path instead of
the 8x VEC path.
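A hedged C model of the new dispatch (the real change is branch logic
in the .S files; VEC_SIZE stands in for the selected implementation's
vector width):

  #include <stddef.h>

  enum copy_path { COPY_4X_VEC, COPY_8X_VEC };

  static enum copy_path
  choose_copy_path (size_t len, size_t vec_size)
  {
    /* The comparison becomes inclusive, so len == 4 * vec_size now
       takes the cheaper 4x path.  */
    return len <= 4 * vec_size ? COPY_4X_VEC : COPY_8X_VEC;
  }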
Results For Skylake memcpy-avx2-erms
size, al1 , al2 , Cur T , New T , Win , New / Cur
128 , 0 , 0 , 9.137 , 6.873 , New , 75.22
128 , 7 , 0 , 12.933 , 7.732 , New , 59.79
128 , 0 , 7 , 11.852 , 6.76 , New , 57.04
128 , 7 , 7 , 12.587 , 6.808 , New , 54.09
Results For Icelake memcpy-evex-erms
size, al1 , al2 , Cur T , New T , Win , New / Cur
128 , 0 , 0 , 9.963 , 5.416 , New , 54.36
128 , 7 , 0 , 16.467 , 8.061 , New , 48.95
128 , 0 , 7 , 14.388 , 7.644 , New , 53.13
128 , 7 , 7 , 14.546 , 7.642 , New , 52.54
Results For Tigerlake memcpy-evex-erms
size, al1 , al2 , Cur T , New T , Win , New / Cur
128 , 0 , 0 , 8.979 , 4.95 , New , 55.13
128 , 7 , 0 , 14.245 , 7.122 , New , 50.0
128 , 0 , 7 , 12.668 , 6.675 , New , 52.69
128 , 7 , 7 , 13.042 , 6.802 , New , 52.15
Results For Skylake memmove-avx2-erms
size, al1 , al2 , Cur T , New T , Win , New / Cur
128 , 0 , 32 , 6.181 , 5.691 , New , 92.07
128 , 32 , 0 , 6.165 , 5.752 , New , 93.3
128 , 0 , 7 , 13.923 , 9.37 , New , 67.3
128 , 7 , 0 , 12.049 , 10.182 , New , 84.5
Results For Icelake memmove-evex-erms
size, al1 , al2 , Cur T , New T , Win , New / Cur
128 , 0 , 32 , 5.479 , 4.889 , New , 89.23
128 , 32 , 0 , 5.127 , 4.911 , New , 95.79
128 , 0 , 7 , 18.885 , 13.547 , New , 71.73
128 , 7 , 0 , 15.565 , 14.436 , New , 92.75
Results For Tigerlake memmove-evex-erms
size, al1 , al2 , Cur T , New T , Win , New / Cur
128 , 0 , 32 , 5.275 , 4.815 , New , 91.28
128 , 32 , 0 , 5.376 , 4.565 , New , 84.91
128 , 0 , 7 , 19.426 , 14.273 , New , 73.47
128 , 7 , 0 , 15.924 , 14.951 , New , 93.89
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
|
Only the placeholder compatibility symbols are left now.
The __errno_location symbol was removed (moved) using
scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
The symbols were moved using scripts/move-symbol-to-libc.py.
The libpthread placeholder symbols need some changes because some
symbol versions have gone away completely. But
__errno_location@@GLIBC_2.0 still exists, so the GLIBC_2.0 version
is still there.
The internal __pthread_create symbol now points to the correct
function, so the sysdeps/nptl/thrd_create.c override is no longer
necessary.
There was an issue with how the hidden alias of
pthread_getattr_default_np was defined, so this commit cleans up that
aspect and removes the GLIBC_PRIVATE export altogether.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
Use the __nptl_tls_static_size_for_stack inline function instead,
and the GLRO (dl_tls_static_align) value directly.
The computation of GLRO (dl_tls_static_align) in
_dl_determine_tlsoffset ensures that the alignment is at least
TLS_TCB_ALIGN, which is at least STACK_ALIGN (see allocate_stack).
Therefore, the additional rounding-up step is removed.
Also move the initialization of the default stack size from
__pthread_initialize_minimal_internal to __pthread_early_init.
This introduces an extra system call during single-threaded startup,
but this simplifies the initialization sequence. No locking is
needed around the writes to __default_pthread_attr because the
process is single-threaded at this point.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
No bug. This commit makes a few small improvements to
memset-vec-unaligned-erms.S:
1) Only align to 64 bytes instead of 128. Either alignment performs
   equally well in a loop, and 128 just increases the odds of needing
   an extra iteration, which can be significant overhead for small
   values.
2) Align some targets and the loop.
3) Remove an ALU instruction from the alignment process.
4) Reorder the last 4x VEC so that they are stored after the loop.
5) Move the check for length <= 8x VEC to before the alignment
   process.
test-memset and test-wmemset are both passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
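A small model of point 1) (illustrative helper, not the .S code):

  #include <stdint.h>

  /* Round the store pointer up to a 64-byte boundary; aligning to
     128 instead would double the odds that a given size needs one
     extra loop iteration.  */
  static inline unsigned char *
  align_up_64 (unsigned char *p)
  {
    return (unsigned char *) (((uintptr_t) p + 63) & ~(uintptr_t) 63);
  }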
|
This macro is always defined on Linux.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
|
When compiled with GCC 11.1 and -march=z14 -O3 build flags, running
ld.so (or any dynamically linked program) prints:
Fatal glibc error: CPU lacks VXE support (z14 or later required)
Co-Authored-By: Stefan Liebler <stli@linux.ibm.com>
Reviewed-by: Stefan Liebler <stli@linux.ibm.com>
|
When built with GCC 11.1 and -mcpu=power9, ld.so prints this error
message when running on POWER8:
Fatal glibc error: CPU lacks ISA 3.00 support (POWER9 or later required)
|
Reviewed-by: Stefan Liebler <stli@linux.ibm.com>
|
No bug. This commit optimizes memcmp-evex.S. The optimizations include
adding a new vec compare path for small sizes, reorganizing the entry
control flow, removing some unnecissary ALU instructions from the main
loop, and most importantly replacing the heavy use of vpcmp + kand
logic with vpxor + vptern. test-memcmp and test-wmemcmp are both
passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
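A hedged intrinsics model of the vpxor + vptern trick (the commit
itself is assembly; the function name and layout are illustrative):

  #include <immintrin.h>

  /* XOR makes mismatching bytes nonzero; one ternary-logic OR
     (immediate 0xfe computes a | b | c) folds three compare
     results, so a single mask test covers three VECs instead of
     three vpcmp results combined with kand.  */
  static int
  any_mismatch_3vec (const char *s1, const char *s2)
  {
    __m512i a = _mm512_xor_si512 (_mm512_loadu_si512 (s1),
                                  _mm512_loadu_si512 (s2));
    __m512i b = _mm512_xor_si512 (_mm512_loadu_si512 (s1 + 64),
                                  _mm512_loadu_si512 (s2 + 64));
    __m512i c = _mm512_xor_si512 (_mm512_loadu_si512 (s1 + 128),
                                  _mm512_loadu_si512 (s2 + 128));
    __m512i acc = _mm512_ternarylogic_epi32 (a, b, c, 0xfe);
    return _mm512_test_epi8_mask (acc, acc) != 0;
  }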
|
No bug. This commit optimizes memcmp-avx2.S. The optimizations include
adding a new vec compare path for small sizes, reorganizing the entry
control flow, and removing some unnecessary ALU instructions from the
main loop. test-memcmp and test-wmemcmp are both passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
|
The tst-timespec_getres test (e5ac7bd679de5) triggers an issue on
32-bit architectures on Linux kernels older than 5.1, where the
fallback syscall is used.
Checked on powerpc-linux-gnu.
|
ISO C2X adds a timespec_getres function alongside the C11
timespec_get, with functionality similar to that of POSIX clock_getres
(including allowing a NULL pointer to be passed to the function).
Implement this function for glibc, similarly to the implementation of
timespec_get.
This includes a basic test like that of timespec_get, but no
documentation in the manual, given that TIME_UTC and timespec_get
aren't documented in the manual at all. The handling of 64-bit time
follows that in timespec_get; people maintaining patch series for
64-bit time will need to update them accordingly (to export
__timespec_getres64, redirect calls in time.h and run the test for
_TIME_BITS=64).
Tested for x86_64 and x86, and (previous version; only testcase
differs) with build-many-glibcs.py.
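A short usage sketch of the new interface:

  #include <stdio.h>
  #include <time.h>

  int
  main (void)
  {
    struct timespec res;
    /* Returns its base argument (here TIME_UTC) on success and 0 on
       failure; passing a NULL result pointer is also allowed.  */
    if (timespec_getres (&res, TIME_UTC) == TIME_UTC)
      printf ("TIME_UTC resolution: %lld.%09ld s\n",
              (long long) res.tv_sec, res.tv_nsec);
    return 0;
  }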
|
Reuse code for optimized strlen to implement a faster version of rawmemchr.
This takes advantage of the same benefits provided by the strlen implementation,
but needs some extra steps. __strlen_power10 code should be unchanged after this
change.
rawmemchr returns a pointer to the char found, while strlen returns only the
length, so we have to take that into account when preparing the return value.
To quickly check 64B, the loop in __strlen_power10 merges the whole
block into 16B using unsigned minimum vector operations (vminub) and
checks whether there are any \0 bytes in the resulting vector. The
same code is used by rawmemchr if the char c is 0. However, this
approach does not work when c != 0: we first need to subtract c from
each byte so that the value we are looking for is converted to 0;
then taking the minimum and checking for nulls works again.
The new code branches after it has compared ~256 bytes and chooses which of the
two strategies above will be used in the main loop, based on the char c. This
extra branch adds some overhead (~5%) for length ~256, but is quickly amortized
by the faster loop for larger sizes.
Compared to __rawmemchr_power9, this version is ~20% faster for length < 256.
Because of the optimized main loop, the improvement becomes ~35% for c != 0
and ~50% for c = 0 for strings longer than 256.
Reviewed-by: Lucas A. M. Magalhaes <lamm@linux.ibm.com>
Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
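A scalar model of the subtract-then-minimum trick described above
(the real code does this on 16-byte vectors with vminub):

  #include <stddef.h>
  #include <stdint.h>

  /* Subtracting c from every byte (mod 256) turns matches into 0,
     so the existing "unsigned minimum, then look for a zero byte"
     machinery works for any c, not just c == 0.  */
  static int
  block_contains (const uint8_t *p, size_t n, uint8_t c)
  {
    uint8_t min = 0xff;
    for (size_t i = 0; i < n; i++)
      {
        uint8_t t = (uint8_t) (p[i] - c);   /* 0 iff p[i] == c */
        if (t < min)
          min = t;
      }
    return min == 0;
  }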
|
The symbol was moved using scripts/move-symbol-to-libc.py.
The GLIBC_2.11 version is now empty, so add a placeholder symbol.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
The symbol was moved using scripts/move-symbol-to-libc.py.
The GLIBC_2.3.4 version is now empty, so add a placeholder symbol.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
The symbol was moved using scripts/move-symbol-to-libc.py.
Add __libpthread_version_placeholder@@GLIBC_2.12 for the targets
that need it.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
The symbol was moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
The symbol was moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
The symbol was moved using scripts/move-symbol-to-libc.py.
__libpthread_version_placeholder@@GLIBC_2.2 is needed by this change;
the Versions entry for GLIBC_2.2 in libpthread had leftover symbols
due to an error in a previous conflict resolution. The condition
for the placeholder symbol is complicated because some architectures
have earlier symbols at the GLIBC_2.2 symbol versions, so the
placeholder is not required there (yet).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
The symbol was moved using scripts/move-symbol-to-libc.py.
A new placeholder symbol __libpthread_version_placeholder@GLIBC_2.18
is needed to keep the GLIBC_2.18 symbol version in libpthread.
The __pthread_getattr_default_np@@GLIBC_PRIVATE export is used
from pthread_create.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
This helps to clarify that the caching of these fields in libpthread
(in __static_tls_size, __static_tls_align_m1) is unnecessary.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
All users have been converted to the __rtld_static_init mechanism.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
The generic __rtld_static_init code handles GLRO (dl_pagesize),
GLRO (dl_auxv), GLRO (dl_hwcap), GLRO (dl_hwcap2).
GLRO (dl_cache_line_size) is handled in an __rtld_static_init_arch
override.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
The generic __rtld_static_init code handles GLRO (dl_pagesize).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
The generic __rtld_static_init code handles GLRO (dl_pagesize).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
The generic __rtld_static_init code handles GLRO (dl_pagesize)
and GLRO (dl_clktck).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
The generic __rtld_static_init code handles GLRO (dl_pagesize).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
The generic __rtld_static_init code handles GLRO (dl_pagesize).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
After static dlopen, a copy of ld.so is loaded into the inner
namespace, but that copy is not initialized at all. Some
architectures run into serious problems as a result, which is why the
_dl_var_init mechanism was invented. With libpthread moving into
libc and parts into ld.so, more architectures are impacted, so it
makes sense to switch to a generic mechanism that performs the
partial initialization.
As a result, getauxval now works after static dlopen (bug 20802).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
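An illustration of the now-working case (libfoo.so is hypothetical;
the point is that getauxval needs the auxiliary vector in the inner
ld.so copy, which the generic initialization now provides):

  #include <sys/auxv.h>

  /* Compiled into a shared object that a statically linked program
     loads with dlopen: before the fix for bug 20802, getauxval here
     returned 0 because GLRO (dl_auxv) in the inner namespace was
     never set up.  */
  unsigned long
  foo_pagesize (void)
  {
    return getauxval (AT_PAGESZ);
  }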
|
The initialization of the report_events TCB field is now performed
in __tls_init_tp instead of __pthread_initialize_minimal_internal
(in libpthread).
The events interface is difficult to test because GDB stopped using it
in 2015. The td_thr_get_info change to ignore lookup issues is enough
to support GDB with this change.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
The __libc_single_threaded symbol was accidentally added to this file
in commit 706ad1e7af37be1d25fc2359bda006d31fe0d11b.
|
The error paths of __check_native would leave the socket FD open on
return, resulting in an FD leak. Rework the function's exit paths so
that the FD is always closed on return.
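The shape of the fix, sketched (not the actual __check_native body;
do_request stands in for the netlink exchange):

  #include <linux/netlink.h>
  #include <sys/socket.h>
  #include <unistd.h>

  static int
  probe_native (int (*do_request) (int fd))
  {
    int result = -1;
    int fd = socket (AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
    if (fd < 0)
      return -1;
    if (do_request (fd) < 0)
      goto out;        /* error paths no longer return directly */
    result = 0;
  out:
    close (fd);        /* every path releases the descriptor */
    return result;
  }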
|
The symbols were moved using scripts/move-symbol-to-libc.py,
in one commit due to their dependency on the internal
__concurrency_level variable.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
The symbol was moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
__pthread_unregister_cancel_restore to libc
The symbols were moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
The symbols were moved using scripts/move-symbol-to-libc.py.
Also clean up some unwinder-linking leftovers in the same spot
in nptl/pthreadP.h.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>