| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The z13/vector-optimized wcsncmp implementation segfaults if n=1
and there is only one character (equal in both strings) before
the page end. It loads and compares that one character but fails
to check n again, and the subsequent load faults.
This patch removes the extra load and compare of the first character
and just starts with the loop, which uses vector-load-to-block-boundary.
This code path also checks n.
With this patch, both tests pass:
- the simplified one mentioned in the bugzilla 31934
- the full one in Florian Weimer's patch:
"manual: Document a GNU extension for strncmp/wcsncmp"
(https://patchwork.sourceware.org/project/glibc/patch/874j9eml6y.fsf@oldenburg.str.redhat.com/):
On s390x-linux-gnu (z16), the new wcsncmp test fails due to bug 31934.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 9b7651410375ec8848a1944992d663d514db4ba7)
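A minimal reproducer for the failure mode described above might look like the
following sketch (it mirrors the scenario from the description, not the actual
test from bug 31934; the layout is illustrative):
```
#include <assert.h>
#include <sys/mman.h>
#include <unistd.h>
#include <wchar.h>

int
main (void)
{
  size_t pagesize = sysconf (_SC_PAGESIZE);
  /* Two adjacent pages; the second one is made inaccessible.  */
  char *p = mmap (NULL, 2 * pagesize, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  assert (p != MAP_FAILED);
  assert (mprotect (p + pagesize, pagesize, PROT_NONE) == 0);

  /* One wide character, equal in both strings, right before the
     protected page.  */
  wchar_t *s1 = (wchar_t *) (p + pagesize) - 1;
  *s1 = L'a';
  wchar_t s2[] = L"a";

  /* With n == 1 the comparison must stop after the first character;
     the buggy implementation loaded past the page end here.  */
  return wcsncmp (s1, s2, 1) != 0;
}
```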
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
On Fedora 40/x86-64, the linker enables --enable-new-dtags by default, which
generates DT_RUNPATH instead of DT_RPATH. Unlike DT_RPATH, DT_RUNPATH
only applies to DT_NEEDED entries in the executable and doesn't apply
to DT_NEEDED entries in shared libraries which are loaded via DT_NEEDED
entries in the executable. Some glibc tests have libstdc++.so.6 in
DT_NEEDED, which has libm.so.6 in DT_NEEDED. When DT_RUNPATH is generated,
/lib64/libm.so.6 is loaded for such tests. If the newly built glibc is
older than glibc 2.36, these tests fail with
assert/tst-assert-c++: /export/build/gnu/tools-build/glibc-gitlab-release/build-x86_64-linux/libc.so.6: version `GLIBC_2.36' not found (required by /lib64/libm.so.6)
assert/tst-assert-c++: /export/build/gnu/tools-build/glibc-gitlab-release/build-x86_64-linux/libc.so.6: version `GLIBC_ABI_DT_RELR' not found (required by /lib64/libm.so.6)
Pass -Wl,--disable-new-dtags to the linker when building glibc tests with
--enable-hardcoded-path-in-tests. This fixes BZ #31719.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
(cherry picked from commit 2dcaf70643710e22f92a351e36e3cff8b48c60dc)
|
|
|
|
| |
(cherry picked from commit 9cc9d61ee12f2f8620d8e0ea3c42af02bf07fe1e)
|
|
|
|
|
|
|
|
|
|
|
|
| |
Using int may give false results for future dates (timeouts after the
year 2038).
Fixes commit c04a21e050d64a1193a6daab872bca2528bda44b ("CVE-2024-33601,
CVE-2024-33602: nscd: netgroup: Use two buffers in addgetnetgrentX
(bug 31680)").
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 4bbca1a44691a6e9adcee5c6798a707b626bc331)
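For reference, the arithmetic behind the overflow concern (a standalone
illustration, not nscd code): absolute timeouts are seconds since the epoch,
and a 32-bit int can only represent instants up to early 2038.
```
#include <limits.h>
#include <stdio.h>
#include <time.h>

int
main (void)
{
  /* INT_MAX seconds since the epoch is 2038-01-19; any timeout beyond
     that no longer fits in a 32-bit int.  */
  time_t limit = INT_MAX;
  printf ("last timeout representable in int: %s", ctime (&limit));

  time_t later = (time_t) INT_MAX + 86400;  /* one day past the limit */
  printf ("time_t %lld truncated to int: %d\n",
          (long long) later, (int) later);  /* implementation-defined */
  return 0;
}
```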
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
CVE-2024-33601, CVE-2024-33602: nscd: netgroup: Use two buffers in
addgetnetgrentX (bug 31680)
This avoids potential memory corruption when the underlying NSS
callback function does not use the buffer space to store all strings
(e.g., for constant strings).
Instead of custom buffer management, two scratch buffers are used.
This increases stack usage somewhat.
Scratch buffer allocation failure is handled by returning -1
(an invalid timeout value) instead of terminating the process.
This fixes bug 31679.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
(cherry picked from commit c04a21e050d64a1193a6daab872bca2528bda44b)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
(bug 31678)
The addgetnetgrentX call in addinnetgrX may have failed to produce
a result, so the result variable in addinnetgrX can be NULL.
Use db->negtimeout as the fallback value if there is no result data;
the timeout is also overwritten below.
Also avoid sending a second not-found response. (The client
disconnects after receiving the first response, so the data stream did
not go out of sync even without this fix.) It is still beneficial to
add the negative response to the mapping, so that the client can get
it from there in the future, instead of going through the socket.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
(cherry picked from commit b048a482f088e53144d26a61c390bed0210f49f2)
|
|
|
|
|
|
|
|
|
|
| |
addgetnetgrentX (bug 31678)
If we failed to add a not-found response to the cache, the dataset
pointer can be null, resulting in a null pointer dereference.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
(cherry picked from commit 7835b00dbce53c3c87bbbb1754a95fb5e58187aa)
|
|
|
|
|
|
|
|
| |
Using alloca matches what other caches do. The request length is
bounded by MAXKEYLEN.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 87801a8fd06db1d654eea3e4f7626ff476a9bdaa)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
(CVE-2024-2961)
ISO-2022-CN-EXT uses escape sequences to indicate character set changes
(as specified by RFC 1922). While the SOdesignation has the expected
bounds checks, neither SS2designation nor SS3designation has them,
allowing a write overflow of 1, 2, or 3 bytes with fixed values:
'$+I', '$+J', '$+K', '$+L', '$+M', or '$*H'.
Checked on aarch64-linux-gnu.
Co-authored-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit f9dc609e06b1136bb0408be9605ce7973a767ada)
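The class of check that was missing can be sketched as follows (a generic
bounds-check pattern with hypothetical names, not the actual
iso-2022-cn-ext.c change):
```
#include <stddef.h>
#include <string.h>

/* Before emitting a multi-byte designation escape sequence, verify
   that the output buffer has room for all of it; otherwise report
   "output full" instead of writing past the end.  */
static int
emit_designation (unsigned char **outptr, unsigned char *outend,
                  const unsigned char *seq, size_t len)
{
  if ((size_t) (outend - *outptr) < len)
    return -1;                 /* caller must provide more output space */
  memcpy (*outptr, seq, len);
  *outptr += len;
  return 0;
}
```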
|
|
|
|
|
|
|
| |
Since __memcpy_simd is the fastest memcpy on almost all cores, replace
the generic memcpy with it.
(cherry picked from commit 91ac82d0c61076aa55ac08f6c7b58c5c28dd2f59)
|
|
|
|
|
|
|
|
|
| |
Use shrn for narrowing the mask which simplifies code and speeds up small
strings. Unroll the first search loop to improve performance on large
strings.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
(cherry picked from commit 55599d480437dcf129b41b95be32b48f2a9e5da9)
|
|
|
|
|
|
|
|
|
| |
Optimize strnlen using the shrn instruction and improve the main loop.
Small strings are around 10% faster, large strings are 40% faster on
modern CPUs.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
(cherry picked from commit ad098893ba3c3344a5f2f6ab1627c47204afdb47)
|
|
|
|
|
|
|
|
| |
Optimize strlen by unrolling the main loop. Large strings are 64% faster on
modern CPUs.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
(cherry picked from commit 03c8ce5000198947a4dd7b2c14e5131738fda62b)
|
|
|
|
|
|
|
| |
Unroll the main loop. Large strings are around 20% faster on modern CPUs.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
(cherry picked from commit 349e48c01e85bd96006860084e76d322e6ca02f1)
|
|
|
|
|
|
|
| |
Unroll the main loop, which improves performance slightly.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
(cherry picked from commit 09ebd8549b2ce5a3a6c0c7c5f3e62227faf50a99)
|
|
|
|
|
|
|
|
| |
Simplify calculation of the mask using shrn. Unroll the main loop.
Small strings are 20% faster on modern CPUs.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
(cherry picked from commit 51541a229740801882490177fa178e49264b13fb)
|
|
|
|
|
|
|
|
| |
Use shrn for the mask, merge tst+bne into cbnz, and tweak code alignment.
Performance improves slightly as a result.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
(cherry picked from commit 1bbb1a2022e126f21810d3d0ebe0a975d5243e43)
|
|
|
|
|
|
|
| |
Optimize the main loop - large strings are 43% faster on modern CPUs.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
(cherry picked from commit 00776241776e67fc666b896c1e85770f4f3ec1e1)
|
|
|
|
|
|
|
| |
Optimize the main loop - large strings are 40% faster on modern CPUs.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
(cherry picked from commit ce758d4f063820c2bc743e12797d7454c66be718)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We found that string functions were using AND+ADDP
to find the nibble/syndrome mask, but there is an easier
option: `SHRN dst.8b, src.8h, 4` (shift
right every 2 bytes by 4 and narrow to 1 byte), which has the
same latency as ADDP on all SIMD ARMv8 targets. There
are also possible gaps for memcmp, but that's for
another patch.
We see 10-20% savings for small- to mid-size cases (<=128),
which are the primary cases for general workloads.
(cherry picked from commit 3c9980698988ef64072f1fac339b180f52792faf)
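The idea can be sketched in C with NEON intrinsics (illustrative only; the
actual changes are in the hand-written assembly): each byte of a 16-byte
compare result contributes one nibble of a 64-bit syndrome, which can then be
scanned with a single count-trailing-zeros.
```
#include <arm_neon.h>
#include <stdint.h>

/* Find the index of the first byte equal to C in a 16-byte chunk,
   or -1 if there is none.  Uses `shrn` (via vshrn_n_u16) to narrow
   the 0x00/0xff compare mask into a 64-bit nibble syndrome.  */
static inline int
first_match (uint8x16_t chunk, uint8_t c)
{
  uint8x16_t eq = vceqq_u8 (chunk, vdupq_n_u8 (c));
  /* Shift each 16-bit lane right by 4 and narrow to 8 bits: every
     input byte ends up as one nibble of the syndrome.  */
  uint8x8_t narrowed = vshrn_n_u16 (vreinterpretq_u16_u8 (eq), 4);
  uint64_t syndrome = vget_lane_u64 (vreinterpret_u64_u8 (narrowed), 0);
  if (syndrome == 0)
    return -1;
  return __builtin_ctzll (syndrome) >> 2;  /* 4 syndrome bits per byte */
}
```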
|
|
|
|
|
|
|
|
|
| |
Rewrite memcmp to improve performance. On small and medium inputs performance
is 10-20% better. Large inputs use a SIMD loop processing 64 bytes per
iteration, which is 30-50% faster depending on the size.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
(cherry picked from commit b51eb35c572b015641f03e3682c303f7631279b7)
|
|
|
|
|
|
|
|
|
| |
Optimize strnlen by avoiding UMINV which is slow on most cores. On Neoverse N1
large strings are 1.8x faster than the current version, and bench-strnlen is
50% faster overall. This version is MTE compatible.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
(cherry picked from commit 252cad02d4c63540501b9b8c988cb91248563224)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
DELOUSE was added to asm code to make them compatible with non-LP64
ABIs, but it is an unfortunate name and the code was not compatible
with ABIs where pointer and size_t are different. Glibc currently
only supports the LP64 ABI so these macros are not really needed or
tested, but for now the name is changed to be more meaningful instead
of removing them completely.
Some DELOUSE macros were dropped: clone, strlen and strnlen used it
unnecessarily.
The out of tree ILP32 patches are currently not maintained and will
likely need a rework to rebase them on top of the time64 changes.
(cherry picked from commit 45b1e17e9150dbd9ac2d578579063fbfa8e1b327)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The ffsll function randomly regresses by ~20%, depending on how the code
gets aligned in memory. The ffsll function code size is 17 bytes. Since the
default function alignment is 16 bytes, it can be loaded at a 16-, 32-, 48-
or 64-byte-aligned address. When the function is loaded at a 16-, 32- or
64-byte-aligned address, the entire code fits in a single 64-byte cache
line. When it is loaded at a 48-byte-aligned address, the code is split
across two cache lines, hence the random regression.
Reducing the ffsll function size from 17 bytes to 12 bytes ensures that it
always fits in a single 64-byte cache line.
This patch fixes the random performance regression of ffsll.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 9d94997b5f9445afd4f2bccc5fa60ff7c4361ec1)
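The cache-line arithmetic behind this can be checked with a few lines
(assuming 64-byte cache lines and 16-byte function alignment, as in the
description):
```
#include <stdio.h>

int
main (void)
{
  unsigned int line = 64;                 /* cache line size in bytes */
  unsigned int sizes[] = { 17, 12 };      /* old and new ffsll code size */

  for (int i = 0; i < 2; i++)
    for (unsigned int start = 0; start < line; start += 16)
      {
        unsigned int end = start + sizes[i] - 1;
        printf ("size %2u at offset %2u: %s\n", sizes[i], start,
                start / line == end / line
                ? "one cache line" : "straddles two cache lines");
      }
  return 0;
}
```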
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The:
```
if (shared_per_thread > 0 && threads > 0)
shared_per_thread /= threads;
```
Code was accidentally moved inside the else scope. This doesn't
match how it was previously (before af992e7abd).
This patch fixes that by putting the division after the `else` block.
(cherry picked from commit 084fb31bc2c5f95ae0b9e6df4d3cf0ff43471ede)
|
|
|
|
|
|
|
|
|
|
|
| |
On some machines we end up with incomplete cache information. This can
make the new calculation of `sizeof(total-L3)/custom-divisor` end up
lower than intended (and lower than the prior value). So reintroduce
the old bound as a lower bound to avoid potentially regressing code
where we don't have complete information to make the decision.
Reviewed-by: DJ Delorie <dj@redhat.com>
(cherry picked from commit 8b9a0af8ca012217bf90d1dc0694f85b49ae09da)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
After:
```
commit af992e7abdc9049714da76cae1e5e18bc4838fb8
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date: Wed Jun 7 13:18:01 2023 -0500
x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
```
split `shared` (cumulative cache size) from `shared_per_thread` (cache
size per socket), `shared_per_thread` *can* be slightly off from
the previous calculation.
Previously we added `core` even if `threads_l2` was invalid, and only
used `threads_l2` to divide `core` if it was present. The changed
version only included `core` if `threads_l2` was valid.
This change restores the old behavior if `threads_l2` is invalid by
adding the entire value of `core`.
Reviewed-by: DJ Delorie <dj@redhat.com>
(cherry picked from commit 47f747217811db35854ea06741be3685e8bbd44d)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
ncores_per_socket'. This patch updates that value to roughly
'sizeof_L3 / 4'.
The original value (specifically dividing the `ncores_per_socket`) was
done to limit the amount of other threads' data a `memcpy`/`memset`
could evict.
Dividing by 'ncores_per_socket', however, leads to exceedingly low
non-temporal thresholds and leads to using non-temporal stores in
cases where REP MOVSB is multiple times faster.
Furthermore, non-temporal stores are written directly to main memory,
so using them at a size much smaller than L3 can place soon-to-be-accessed
data much further away than it otherwise could be. As well,
modern machines are able to detect streaming patterns (especially if
REP MOVSB is used) and provide LRU hints to the memory subsystem. This
in effect caps the total amount of eviction at 1/cache_associativity,
far below meaningfully thrashing the entire cache.
As best I can tell, the benchmarks that led to this small threshold
were done comparing non-temporal stores versus standard cacheable
stores. A better comparison (linked below) is to REP MOVSB, which,
on the measured systems, is nearly 2x faster than non-temporal stores
at the low-end of the previous threshold, and within 10% for over
100MB copies (well past even the current threshold). In cases with a
low number of threads competing for bandwidth, REP MOVSB is ~2x faster
up to `sizeof_L3`.
The divisor of `4` is a somewhat arbitrary value. From benchmarks it
seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
such as Broadwell prefer something closer to `8`. This patch is meant
to be followed up by another one to make the divisor cpu-specific, but
in the meantime (and for easier backporting), this patch settles on
`4` as a middle-ground.
Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
stores were done using:
https://github.com/goldsteinn/memcpy-nt-benchmarks
Sheets results (also available in pdf on the github):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
Reviewed-by: DJ Delorie <dj@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit af992e7abdc9049714da76cae1e5e18bc4838fb8)
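As a concrete illustration of the two formulas (the machine parameters below
are made up for the example):
```
#include <stdio.h>

int
main (void)
{
  /* Hypothetical machine: 32 MiB shared L3, 16 cores per socket.  */
  unsigned long long l3 = 32ULL << 20;
  unsigned long long ncores_per_socket = 16;

  /* Old heuristic: roughly 3/4 * sizeof_L3 / ncores_per_socket.  */
  unsigned long long old_threshold = l3 * 3 / 4 / ncores_per_socket;
  /* New heuristic: roughly sizeof_L3 / 4.  */
  unsigned long long new_threshold = l3 / 4;

  printf ("old non_temporal_threshold: %llu KiB\n", old_threshold >> 10);
  printf ("new non_temporal_threshold: %llu KiB\n", new_threshold >> 10);
  return 0;
}
```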
|
|
|
|
|
|
|
|
|
|
| |
The signal handler installed in the ELF constructor cannot easily
be removed again (because the program may have changed handlers
in the meantime). Mark the object as NODELETE so that the registered
handler function is never unloaded.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 23ee92deea4c99d0e6a5f48fa7b942909b123ec5)
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The previous implementation adjusted the length (rsi) to match the
byte count (eax), but since there is no bound on the length, this can
cause an overflow.
The fix is to convert the byte count (eax) to a length by dividing by
sizeof (wchar_t) before the comparison.
The full check passes on x86-64, and the build succeeds with and
without multiarch.
(cherry picked from commit b0969fa53a28b4ab2159806bf6c99a98999502ee)
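In C terms, the safe direction is to scale the byte offset down rather than
scaling the unbounded length up (a sketch with hypothetical names, not the
assembly):
```
#include <stddef.h>

/* Clamp a result expressed as a byte offset against a length given in
   wide characters.  Dividing the bounded byte count cannot overflow,
   unlike multiplying an arbitrary maxlen by sizeof (wchar_t).  */
static size_t
clamp_result (size_t byte_offset, size_t maxlen)
{
  size_t chars = byte_offset / sizeof (wchar_t);
  return chars < maxlen ? chars : maxlen;
}
```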
|
|
|
|
|
|
|
|
| |
The sunrpc function svcunix_create suffers from a stack-based buffer
overflow with overlong pathname arguments.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
(cherry picked from commit f545ad4928fa1f27a3075265182b38a4f939a5f7)
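The underlying pattern is copying a caller-supplied path into the fixed-size
sun_path field of struct sockaddr_un; a sketch of the kind of length check
required (illustrative, not the sunrpc patch itself):
```
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Reject overlong paths instead of overflowing the on-stack
   sockaddr_un.  */
static int
fill_sockaddr_un (struct sockaddr_un *sa, const char *path)
{
  if (strlen (path) >= sizeof (sa->sun_path))
    return -1;
  memset (sa, 0, sizeof (*sa));
  sa->sun_family = AF_UNIX;
  strcpy (sa->sun_path, path);
  return 0;
}
```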
|
|
|
|
|
|
|
|
| |
This is helpful for testing compat symbols in cases where _ISOMAC
is activated implicitly due to -DMODULE_NAME=testsuite and cannot
be disabled easily.
(cherry picked from commit 36f6e408845c8c539128f3fb9cb132bf1845a2c8)
|
|
|
|
|
| |
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
(cherry picked from commit ef972a4c50014a16132b5c75571cfb6b30bef136)
|
|
|
|
|
|
|
|
| |
Processing an overlong pathname in the sunrpc clnt_create function
results in a stack-based buffer overflow.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
(cherry picked from commit 226b46770c82899b555986583294b049c6ec9b40)
|
|
|
|
|
| |
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
(cherry picked from commit e368b12f6c16b6888dda99ba641e999b9c9643c8)
|
|
|
|
|
|
| |
BZ #26923 now has a CVE entry, so add a NEWS entry for it.
(cherry picked from commit 38a9e93cb1c58e3c899d638480e6d6e42af8e6fc)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously, the UCS4 conversion routines limited the number of
characters examined to the minimum of the number of characters in the
input and the number of characters in the output. This is not the
correct behavior when __GCONV_IGNORE_ERRORS is set, as we do not consume
an output character when we skip a code unit. Instead, track the input
and output pointers and terminate the loop when either reaches its
limit.
This resolves assertion failures when resetting the input buffer in a step of
iconv, which assumes that the input will be fully consumed given sufficient
output space.
(cherry picked from commit 228edd356f03bf62dcf2b1335f25d43c602ee68d)
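The loop structure described above can be sketched as follows (standalone
illustration; the real code lives in the gconv modules):
```
#include <stddef.h>
#include <stdint.h>

/* Track input and output positions independently and stop when either
   reaches its limit, so that skipping an invalid character consumes
   input without consuming output space.  */
static size_t
copy_valid_ucs4 (const uint32_t *in, size_t n_in,
                 uint32_t *out, size_t n_out, int ignore_errors)
{
  size_t i = 0, o = 0;
  while (i < n_in && o < n_out)
    {
      uint32_t ch = in[i];
      int invalid = ch > 0x10FFFF || (ch >= 0xD800 && ch <= 0xDFFF);
      if (invalid && ignore_errors)
        {
          i++;                  /* consume input, produce no output */
          continue;
        }
      out[o++] = ch;
      i++;
    }
  return o;
}
```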
|
|
|
|
|
| |
Add a NEWS entry for the fix that was backported by commit
27e892f6608e9d0da71884bb1422a735f6062850.
|
|
|
|
| |
(cherry picked from commit 24eb3be5db5befefe4bcf0f438bf6629a9c3a608)
|
|
|
|
| |
(cherry picked from commit d7f4f3f5fb1275f0b3d9f4e1b3d9d7b75a5a9e26)
|
|
|
|
| |
(cherry picked from commit 18b640c57094236e6c991ba16f87467085a1d55a)
|
|
|
|
|
|
| |
The fix was backported by commit ff75390ef59823193351ae77584c397c503b7b58
("Use __pthread_attr_copy in mq_notify (bug 27896)")
after glibc 2.32 release.
|
|
|
|
|
|
| |
The fix was backported by commit 050022910be1d1f5c11cd5168f1685ad4f9580d2
("iconv: Accept redundant shift sequences in IBM1364 [BZ #26224]")
after glibc 2.32 release.
|
|
|
|
|
|
|
| |
Add NEWS entries to the list of bugs that were fixed after glibc 2.32
release: 24973, 25399, 26383, 26690, 26798, 26831, 26926, 26988, 27024,
27068, 27256, 27398, 27462, 27471, 27476, 27511, 27609, 27655, 27896,
28011, 28033, 28064, 28213, 29304, and 29611.
|
|
|
|
| |
(cherry picked from commit 4d3a77c73594c3704992f8d5b779c8be053cff35)
|
|
|
|
| |
(cherry picked from commit fdb724f9032ff73310be0e51549f494a3eaa7495)
|
|
|
|
| |
This patch fixes BZ #29611.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Since strchr-avx2.S, as updated by
commit 1f745ecc2109890886b161d4791e1406fdfc29b8
Author: noah <goldstein.w.n@gmail.com>
Date: Wed Feb 3 00:38:59 2021 -0500
x86-64: Refactor and improve performance of strchr-avx2.S
uses sarx:
c4 e2 72 f7 c0 sarx %ecx,%eax,%eax
for strchr-avx2 family functions, require BMI2 in ifunc-impl-list.c and
ifunc-avx2.h.
This fixes BZ #29611.
(cherry picked from commit 83c5b368226c34a2f0a5287df40fc290b2b34359)
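Conceptually, the selector must gate the AVX2 variant on BMI2 as well; a rough
sketch using the GCC builtin rather than glibc's internal cpu-features
interface (function names are placeholders):
```
#include <string.h>

typedef char *(*strchr_fn) (const char *, int);

/* Stand-ins for the optimized implementations.  */
static char *strchr_avx2_impl (const char *s, int c) { return strchr (s, c); }
static char *strchr_sse2_impl (const char *s, int c) { return strchr (s, c); }

/* Only pick the AVX2 variant when BMI2 (needed for sarx) is usable.  */
static strchr_fn
select_strchr (void)
{
  if (__builtin_cpu_supports ("avx2") && __builtin_cpu_supports ("bmi2"))
    return strchr_avx2_impl;
  return strchr_sse2_impl;
}
```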
|
|
|
|
|
|
|
|
|
|
|
|
| |
libc_map is never reset to NULL, neither during dlclose nor on a
dlopen call which reuses the namespace structure. As a result, if a
namespace is reused, its libc is not initialized properly. The most
visible result is a crash in the <ctype.h> functions.
To prevent similar bugs on namespace reuse from surfacing,
unconditionally initialize the chosen namespace to zero using memset.
(cherry picked from commit d0e357ff45a75553dee3b17ed7d303bfa544f6fe)
|
|
|
|
|
|
|
|
|
|
| |
On success, mq_receive() and mq_timedreceive() return the number of
bytes in the received message, so the check needs to be whether the
returned value is larger than 0.
Checked on i686-linux-gnu.
(cherry picked from commit 71d87d85bf54f6522813aec97c19bdd24997341e)
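The check described above, in isolation (a sketch, not the test itself):
```
#include <mqueue.h>
#include <stdio.h>
#include <sys/types.h>

/* mq_receive returns the number of bytes in the received message, or
   -1 on error, so "a message arrived" means a value greater than 0.  */
static int
receive_one (mqd_t q, char *buf, size_t len)
{
  ssize_t n = mq_receive (q, buf, len, NULL);
  if (n > 0)
    return 0;          /* received a message of n bytes */
  if (n == -1)
    perror ("mq_receive");
  return -1;
}
```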
|