path: root/sysdeps/x86_64
* x86/cet: Sync with Linux kernel 6.6 shadow stack interface  (H.J. Lu, 2024-01-01, 1 file, -1/+1)

  Sync with the Linux kernel 6.6 shadow stack interface. Since only x86-64 is
  supported, the i386 shadow stack code is unchanged and CET shouldn't be
  enabled for i386.

  1. When the shadow stack base in the TCB is unset, the default shadow stack
     is in use. Use the current shadow stack pointer as the marker for the
     default shadow stack; it is used to identify whether the current shadow
     stack is the same as the target shadow stack when switching ucontexts.
     If so, INCSSP is used to unwind the shadow stack; otherwise a shadow
     stack restore token is used.
  2. Allocate shadow stacks with the map_shadow_stack syscall. Since there is
     no function to explicitly release a ucontext, there is no place to
     release shadow stacks allocated by map_shadow_stack in the ucontext
     functions; such shadow stacks are leaked.
  3. Rename the arch_prctl CET commands to ARCH_SHSTK_XXX.
  4. Rewrite the CET control functions against the current kernel shadow
     stack interface. Since CET is no longer enabled by the kernel, a
     separate patch will enable shadow stack during startup.
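  As an editor's illustration of point 2, here is a hedged, minimal sketch of
  allocating an alternate shadow stack with the Linux 6.6 map_shadow_stack
  syscall. It assumes headers recent enough to define SYS_map_shadow_stack
  and SHADOW_STACK_SET_TOKEN; the size handling and wrapper name are
  illustrative, not glibc's actual ucontext code:
  ```
  #include <unistd.h>
  #include <sys/syscall.h>   /* SYS_map_shadow_stack, with recent headers */
  #include <asm/mman.h>      /* SHADOW_STACK_SET_TOKEN, Linux >= 6.6 */

  static void *
  alloc_shadow_stack (size_t size)
  {
    /* Ask the kernel to place a restore token at the top so the new stack
       can later be switched to with RSTORSSP.  */
    long ssp = syscall (SYS_map_shadow_stack, 0, size, SHADOW_STACK_SET_TOKEN);
    return ssp == -1 ? NULL : (void *) ssp;
  }
  ```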
* x86-64: Fix the tcb field load for x32 [BZ #31185]  (H.J. Lu, 2023-12-22, 1 file, -2/+2)

  _dl_tlsdesc_undefweak and _dl_tlsdesc_dynamic access the thread pointer via
  the tcb field in the TCB:

	_dl_tlsdesc_undefweak:
		_CET_ENDBR
		movq	8(%rax), %rax
		subq	%fs:0, %rax
		ret

	_dl_tlsdesc_dynamic:
		...
		subq	%fs:0, %rax
		movq	-8(%rsp), %rdi
		ret

  Since the tcb field in the TCB is a pointer, %fs:0 is a 32-bit location on
  x32, not 64-bit.  It should use "sub %fs:0, %RAX_LP" instead.  Since
  _dl_tlsdesc_undefweak returns ptrdiff_t and _dl_make_tlsdesc_dynamic
  returns void *, RAX_LP is appropriate here for both x32 and x86-64.

  This fixes BZ #31185.
* x86-64: Fix the dtv field load for x32 [BZ #31184]  (H.J. Lu, 2023-12-22, 1 file, -1/+1)

  On x32, I got

	FAIL: elf/tst-tlsgap

	$ gdb elf/tst-tlsgap
	...
	open tst-tlsgap-mod1.so

	Thread 2 "tst-tlsgap" received signal SIGSEGV, Segmentation fault.
	[Switching to LWP 2268754]
	_dl_tlsdesc_dynamic () at ../sysdeps/x86_64/dl-tlsdesc.S:108
	108             movq    (%rsi), %rax
	(gdb) p/x $rsi
	$4 = 0xf7dbf9005655fb18
	(gdb)

  This is caused by

	_dl_tlsdesc_dynamic:
		_CET_ENDBR
		/* Preserve call-clobbered registers that we modify.
		   We need two scratch regs anyway.  */
		movq	%rsi, -16(%rsp)
		movq	%fs:DTV_OFFSET, %rsi

  Since the dtv field in the TCB is a pointer, %fs:DTV_OFFSET is a 32-bit
  location on x32, not 64-bit.  Load the dtv field into RSI_LP instead of
  rsi.

  This fixes BZ #31184.
* x86: Do not raise floating-point exception traps on fesetexceptflag (BZ 30990)  (Bruno Haible, 2023-12-19, 1 file, -10/+14)

  According to ISO C23 (7.6.4.4), fesetexcept is supposed to set
  floating-point exception flags without raising a trap (unlike
  feraiseexcept, which is supposed to raise a trap if feenableexcept was
  called with the appropriate argument).

  The flags can be set in the 387 unit or in the SSE unit.  When we need to
  clear a flag, we need to do so in both units, due to the way fetestexcept
  is implemented.  When we need to set a flag, it is sufficient to do it in
  the SSE unit, because that is guaranteed not to trap.  However, on i386
  CPUs that have only a 387 unit, set the flags in the 387, as long as this
  cannot trap.

  Co-authored-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
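  As a hedged user-level sketch of the contract this fix restores (not the
  glibc test case itself): fesetexcept must only set the flag, even when the
  exception has been unmasked with feenableexcept, whereas feraiseexcept may
  trap in that situation:
  ```
  #define _GNU_SOURCE           /* for feenableexcept */
  #include <fenv.h>
  #include <stdio.h>

  int
  main (void)
  {
    feenableexcept (FE_OVERFLOW);   /* unmask the overflow trap */
    fesetexcept (FE_OVERFLOW);      /* sets the flag only; must not raise SIGFPE */
    printf ("overflow flag set: %d\n", fetestexcept (FE_OVERFLOW) != 0);
    /* feraiseexcept (FE_OVERFLOW) here, by contrast, would deliver the trap.  */
    return 0;
  }
  ```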
* x86: Unify the 'strlen-evex' and 'strlen-evex512' implementations  (Matthew Sterrett, 2023-12-18, 5 files, -472/+439)

  This commit uses a common implementation, 'strlen-evex-base.S', for both
  'strlen-evex' and 'strlen-evex512'.  The motivation is to reduce the number
  of implementations to maintain; this incidentally gives a small performance
  improvement.

  All tests pass on x86.

  Benchmarks were taken on SKX:
  https://www.intel.com/content/www/us/en/products/sku/123613/intel-core-i97900x-xseries-processor-13-75m-cache-up-to-4-30-ghz/specifications.html

  Geometric mean for strlen-evex512 over all benchmarks (N=10) was (new/old) 0.939.
  Geometric mean for wcslen-evex512 over all benchmarks (N=10) was (new/old) 0.965.

  Code size changes:
	strlen-evex512.S : +24 bytes
	wcslen-evex512.S : +54 bytes

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* x86: Only align destination to 1x VEC_SIZE in memset 4x loop  (Noah Goldstein, 2023-11-28, 1 file, -1/+1)

  The current code aligns to 2x VEC_SIZE.  Aligning to 2x has no effect on
  performance other than potentially resulting in an additional iteration of
  the loop.  1x maintains aligned stores (the only reason to align in this
  case) and doesn't incur any unnecessary loop iterations.

  Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>
* elf: Remove LD_PROFILE for static binaries  (Adhemerval Zanella, 2023-11-21, 2 files, -30/+36)

  _dl_non_dynamic_init does not parse LD_PROFILE, so profiling is not enabled
  for dlopen'ed objects.  Since dlopen is deprecated for static binaries, it
  is better to remove the support.  It also allows trimming the profiling
  support out of libc.a.

  Checked on x86_64-linux-gnu.

  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* x86: Use dl-symbol-redir-ifunc.h on cpu-tunables  (Adhemerval Zanella, 2023-11-21, 2 files, -28/+15)

  dl-symbol-redir-ifunc.h redirects compiler-generated libcalls to
  arch-specific memory implementations, to avoid ifunc calls where they are
  not yet possible.  memcmp-isa-default-impl.h aims to fix the same issue by
  calling a specific memcmp implementation directly.  Using the memcmp symbol
  directly allows the compiler to inline the calls (especially because
  _dl_tunable_set_hwcaps uses constant values), generating better code.

  Checked on x86_64-linux-gnu.

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
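  The redirection technique itself works by aliasing the libcall symbol to
  one concrete, non-ifunc implementation at the assembly level.  A hedged
  schematic follows; the alias target __memset_generic is an assumption for
  illustration, not necessarily the variant the real header picks:
  ```
  /* Schematic only, not the literal glibc header.  Redirect compiler-emitted
     memset libcalls to one concrete implementation so that no ifunc-resolved
     PLT call is needed this early in startup.  */
  asm ("memset = __memset_generic");
  ```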
* elf: Ignore GLIBC_TUNABLES for setuid/setgid binaries  (Adhemerval Zanella, 2023-11-21, 1 file, -1/+0)

  The tunable privilege levels were a retrofit to try to keep the malloc
  tunable environment variables' behavior unchanged across security
  boundaries.  However, CVE-2023-4911 shows how tricky tunable parsing can be
  in a security-sensitive environment.  Beyond parsing, the malloc tunables
  essentially change some semantics of setuid/setgid processes.  Although it
  is not a direct security issue, allowing users to change setuid/setgid
  semantics is not good security practice, and it requires extra code and
  analysis to check whether each tunable is safe to use across all security
  boundaries.

  It also means that security opt-in features, like aarch64 MTE, would need
  to be explicitly enabled by an administrator with a wrapper script or with
  a possible future system-wide tunable setting.

  Co-authored-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
  Reviewed-by: DJ Delorie <dj@redhat.com>
* x86: Fix unchecked AVX512-VBMI2 usage in strrchr-evex-base.S  (Noah Goldstein, 2023-11-15, 1 file, -24/+51)

  strrchr-evex-base used `vpcompress{b|d}` in the page-cross logic but was
  missing the CPU_FEATURE checks for VBMI2 in the ifunc/ifunc-impl-list.

  The fix is either to add those checks or to change the logic so it does not
  use `vpcompress{b|d}`.  The latter is chosen here so that the strrchr-evex
  implementation remains usable on SKX.

  The new implementation is a bit slower, but this is a cold path so it's
  probably okay.
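  For illustration, a hedged, self-contained analogue of the kind of guard
  that was missing: dispatch to a VBMI2-using code path only when the CPU
  reports the feature.  It uses GCC's __builtin_cpu_supports for the check
  (glibc's real ifunc selectors use their internal CPU_FEATURE_USABLE
  machinery), and the "evex" variant here is a stand-in, not glibc's code:
  ```
  #include <string.h>

  static char *
  strrchr_evex_standin (const char *s, int c)
  {
    /* Placeholder for the vpcompressb/vpcompressd-based implementation.  */
    return strrchr (s, c);
  }

  char *
  strrchr_dispatch (const char *s, int c)
  {
    if (__builtin_cpu_supports ("avx512vbmi2"))   /* needs a recent GCC */
      return strrchr_evex_standin (s, c);
    return strrchr (s, c);
  }
  ```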
* x86: Prepare `strrchr-evex` and `strrchr-evex512` for AVX10  (Noah Goldstein, 2023-10-06, 3 files, -569/+293)

  This commit refactors `strrchr-evex` and `strrchr-evex512` to use a common
  implementation: `strrchr-evex-base.S`.

  The motivation is that `strrchr-evex` needed to be refactored to not use
  64-bit masked registers in preparation for AVX10.  Once vec-width masked
  register combining was removed, the EVEX and EVEX512 implementations could
  easily be implemented in the same file without any major overhead.

  The net result is performance improvements (measured on TGL) for both
  `strrchr-evex` and `strrchr-evex512`.  Note, though, that there are some
  regressions in the test suite, and many of the cases that make up the total
  geomean of improvement/regression across bench-strrchr may be cold.  The
  point of the performance measurement is to show there are no major
  regressions; the primary motivation is preparation for AVX10.

  Benchmarks were taken on TGL:
  https://www.intel.com/content/www/us/en/products/sku/213799/intel-core-i711850h-processor-24m-cache-up-to-4-80-ghz/specifications.html

  EVEX geometric_mean(N=5) of all benchmarks New / Original   : 0.74
  EVEX512 geometric_mean(N=5) of all benchmarks New / Original : 0.87

  Full check passes on x86.
* hurd: Drop REG_GSFS and REG_ESDS from x86_64's ucontext  (Samuel Thibault, 2023-09-28, 1 file, -5/+1)

  These are useless on x86_64, and __NGREG was actually wrong with them.
* elf: Fix slow TLS access after dlopen [BZ #19924]  (Szabolcs Nagy, 2023-09-01, 1 file, -2/+2)

  In short: __tls_get_addr checks the global generation counter and, if the
  current dtv is older, _dl_update_slotinfo updates the dtv up to the
  generation of the accessed module.  So if the global generation is newer
  than the generation of the module, __tls_get_addr keeps hitting the slow
  dtv update path.  The dtv update path includes a number of checks to see
  whether any update is needed, and this already causes a measurable TLS
  access slowdown after dlopen.

  It may be possible to detect an up-to-date dtv faster.  But if there are
  many modules loaded (> TLS_SLOTINFO_SURPLUS) then this requires at least
  walking the slotinfo list.

  This patch instead updates the dtv to the global generation, so after a
  dlopen the TLS access slow path is only hit once.  The modules with a
  larger generation than the accessed one were not necessarily synchronized
  before, so additional synchronization is needed.

  This patch uses acquire/release synchronization when accessing the
  generation counter.

  Note: in the x86_64 version of dl-tls.c the generation is only loaded once,
  since a relaxed-MO load is not faster than an acquire-MO load.

  I have not benchmarked this.  Tested by Adhemerval Zanella on aarch64,
  powerpc, sparc and x86, who reported that it fixes the performance issue of
  bug 19924.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
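  To make the fast/slow split concrete, here is a hedged, self-contained
  analogue of the generation check described above.  The names, types and
  helper are illustrative, not glibc's internal ones:
  ```
  #include <stdatomic.h>
  #include <stddef.h>

  extern atomic_size_t global_tls_generation;     /* bumped on dlopen/dlclose */
  extern _Thread_local size_t dtv_generation;     /* generation of this dtv */
  extern _Thread_local void **dtv;                /* per-module TLS blocks */
  extern void *update_dtv_and_get_addr (size_t module, size_t offset);

  void *
  tls_get_addr_sketch (size_t module, size_t offset)
  {
    /* Acquire load pairs with the release store made when a new module's
       TLS is registered.  */
    size_t gen = atomic_load_explicit (&global_tls_generation,
                                       memory_order_acquire);
    if (dtv_generation != gen)
      /* Slow path: update the dtv all the way to GEN, so the next access
         from this thread takes the fast path again.  */
      return update_dtv_and_get_addr (module, offset);

    return (char *) dtv[module] + offset;
  }
  ```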
* x86_64: Add log1p with FMA  (H.J. Lu, 2023-08-21, 3 files, -0/+35)

  On Skylake, it changes log1p bench performance by:

            Before   After    Improvement
      max   63.349   58.347     8%
      min    4.448    5.651   -30%
      mean  12.0674  10.336    14%

  The minimum code path is

    if (hx < 0x3FDA827A)				/* x < 0.41422  */
      {
	if (__glibc_unlikely (ax >= 0x3ff00000))	/* x <= -1.0  */
	  {
	    ...
	  }
	if (__glibc_unlikely (ax < 0x3e200000))		/* |x| < 2**-29  */
	  {
	    math_force_eval (two54 + x);		/* raise inexact  */
	    if (ax < 0x3c900000)			/* |x| < 2**-54  */
	      {
		...
	      }
	    else
	      return x - x * x * 0.5;

  FMA and non-FMA code sequences look similar.  The non-FMA version is
  slightly faster.

  Since log1p is called by asinh and atanh, it improves asinh performance by:

            Before   After    Improvement
      max   75.645   63.135    16%
      min   10.074   10.071     0%
      mean  15.9483  14.9089    6%

  and improves atanh performance by:

            Before   After    Improvement
      max   91.768   75.081    18%
      min   15.548   13.883    10%
      mean  18.3713  16.8011    8%
* x86_64: Add expm1 with FMA  (H.J. Lu, 2023-08-14, 3 files, -0/+48)

  On Skylake, it improves expm1 bench performance by:

            Before   After    Improvement
      max   70.204   68.054     3%
      min   20.709   16.2      22%
      mean  22.1221  16.7367   24%

  NB: Add

    extern long double __expm1l (long double);
    extern long double __expm1f128 (long double);

  for __typeof (__expm1l) and __typeof (__expm1f128) when __expm1 is defined,
  since __expm1 may be expanded in their declarations, which causes the build
  failure.
* x86_64: Add log2 with FMA  (H.J. Lu, 2023-08-11, 3 files, -0/+48)

  On Skylake, it improves log2 bench performance by:

            Before   After    Improvement
      max   208.779  63.827    69%
      min     9.977   6.55     34%
      mean   10.366   6.8191   34%
* x86_64: Sort fpu/multiarch/Makefile  (H.J. Lu, 2023-08-10, 1 file, -20/+74)

  Sort Makefile variables using scripts/sort-makefile-lines.py.  No code
  generation changes observed in libm.  No regressions on x86_64.
* x86_64: Fix build with --disable-multiarch (BZ 30721)  (Adhemerval Zanella, 2023-08-10, 3 files, -1/+5)

  With multiarch disabled, the default memmove implementation provides the
  fortify routines for memcpy, mempcpy, and memmove.  However, it does not
  provide the internal hidden definitions used when building with fortify
  enabled.  memset has a similar issue.

  Checked on x86_64-linux-gnu, building with different options: default and
  --disable-multi-arch plus default, --disable-default-pie,
  --enable-fortify-source={2,3}, and --enable-fortify-source={2,3} with
  --disable-default-pie.

  Tested-by: Andreas K. Huettel <dilfridge@gentoo.org>
  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* Update x86_64 libm-test-ulps (x32 ABI)  (Andreas K. Hüttel, 2023-07-19, 1 file, -13/+14)

  Based on feedback by Mike Gilbert <floppym@gentoo.org>:

	Linux-6.1.38-dist x86_64 AMD Phenom-tm- II X6 1055T Processor
	-march=amdfam10

  failures occur for the x32 ABI.

  Signed-off-by: Andreas K. Hüttel <dilfridge@gentoo.org>
* configure: Use autoconf 2.71  (Siddhesh Poyarekar, 2023-07-17, 2 files, -21/+27)

  Bump the autoconf requirement to 2.71 to allow regenerating configure on
  more recent distributions.  autoconf 2.71 has been in Fedora since F36 and
  is the current version in Debian stable (bookworm).  It appears to be
  current in Gentoo as well.

  All sysdeps configure and preconfigure scripts have also been regenerated;
  all changes are trivial transformations that do not affect functionality.

  Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* wchar: Avoid PLT entries with _FORTIFY_SOURCE  (Frédéric Bérat, 2023-07-05, 1 file, -0/+4)

  The change is meant to avoid unwanted PLT entries for the wmemset and
  wcrtomb routines when _FORTIFY_SOURCE is set.  On top of that, ensure that
  the *_chk routines have their hidden builtin definitions available.

  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* string: Ensure *_chk routines have their hidden builtin definition available  (Frédéric Bérat, 2023-07-05, 8 files, -0/+20)

  If libc_hidden_builtin_{def,proto} isn't properly set for the *_chk
  routines, there are unwanted PLT entries in libc.so.

  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
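  As a hedged schematic of the pattern the commit completes (not a standalone
  program, and the file locations are illustrative): the internal header
  declares the hidden alias and the definition site emits it, so intra-libc
  callers bind locally instead of through the PLT:
  ```
  /* In the internal header (e.g. include/string.h): declare the hidden
     alias so calls from inside libc.so bind to it directly.  */
  libc_hidden_builtin_proto (__memcpy_chk)

  /* At the function's definition (C or assembly wrapper): emit the hidden
     alias so the reference above resolves without a PLT entry.  */
  libc_hidden_builtin_def (__memcpy_chk)
  ```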
* ld.so: Always use MAP_COPY to map the first segment [BZ #30452]  (H.J. Lu, 2023-06-30, 3 files, -0/+9)

  The first segment in a shared library may be read-only, not executable.  To
  support LD_PREFER_MAP_32BIT_EXEC on such shared libraries, we also check
  MAP_DENYWRITE to decide whether MAP_32BIT should be passed to mmap.
  Normally the first segment is mapped with MAP_COPY, which is defined as
  (MAP_PRIVATE | MAP_DENYWRITE).  But if the segment alignment is greater
  than the page size, MAP_COPY isn't used, so that enough space can be
  allocated to ensure the segment is properly aligned.  Map the first segment
  with MAP_COPY in this case as well, to fix BZ #30452.
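  A hedged sketch of the flag decision described above; the helper name and
  structure are illustrative, not ld.so's actual code:
  ```
  #define _GNU_SOURCE
  #include <sys/mman.h>

  #ifndef MAP_COPY
  # define MAP_COPY (MAP_PRIVATE | MAP_DENYWRITE)
  #endif

  /* The first segment is normally mapped with MAP_COPY, so the
     MAP_DENYWRITE bit (rather than PROT_EXEC) is what the
     LD_PREFER_MAP_32BIT_EXEC path keys on when adding MAP_32BIT.  */
  static int
  add_map_32bit_if_requested (int mmap_flags, int prefer_map_32bit_exec)
  {
    if (prefer_map_32bit_exec && (mmap_flags & MAP_DENYWRITE) != 0)
      mmap_flags |= MAP_32BIT;
    return mmap_flags;
  }
  ```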
* x86: Make dl-cache.h and readelflib.c not Linux-specific  (Sergey Bugaev, 2023-06-26, 1 file, -0/+51)

  These files could be useful to any port that wants to use ld.so.cache.

  Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* sysdeps/{i386, x86_64}/mempcpy_chk.S: Fix linknamespace for __mempcpy_chk  (Frederic Berat, 2023-06-22, 1 file, -1/+1)

  On i386 and x86_64, for libc.a specifically, __mempcpy_chk calls mempcpy,
  which leads POSIX routines to call the non-POSIX mempcpy indirectly.  This
  makes the linknamespace test fail when glibc is built with
  _FORTIFY_SOURCE=3.

  Since calling mempcpy doesn't bring any benefit for libc.a, directly call
  __mempcpy instead.

  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
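  For reference, a hedged C sketch of the same shape of fix (the actual
  change is in the i386/x86_64 assembly wrappers; this mirrors the spirit of
  glibc's C fallback, with the declarations spelled out for illustration):
  ```
  #include <stddef.h>

  extern void *__mempcpy (void *dest, const void *src, size_t n);
  extern void __chk_fail (void) __attribute__ ((__noreturn__));

  void *
  __mempcpy_chk (void *dest, const void *src, size_t len, size_t destlen)
  {
    if (destlen < len)
      __chk_fail ();                 /* fortify failure: destination too small */

    /* Call the internal __mempcpy name, not the public mempcpy, so no
       non-POSIX symbol is referenced from POSIX code in libc.a.  */
    return __mempcpy (dest, src, len);
  }
  ```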
* x86-64: Use YMM registers in memcmpeq-evex.S  (H.J. Lu, 2023-06-01, 1 file, -1/+1)

  Since an assembly source file with the -evex suffix should use YMM
  registers, not ZMM registers, include x86-evex256-vecs.h by default to use
  YMM registers in memcmpeq-evex.S.

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* Fix misspellings in sysdeps/x86_64 -- BZ 25337  (Paul Pluzhnikov, 2023-05-23, 37 files, -105/+105)

  Applying this commit results in a bit-identical rebuild of libc.so.6,
  math/libm.so.6, elf/ld-linux-x86-64.so.2 and mathvec/libmvec.so.1.

  Reviewed-by: Florian Weimer <fweimer@redhat.com>
* Fix misspellings in sysdeps/x86_64/fpu/multiarch -- BZ 25337  (Paul Pluzhnikov, 2023-05-23, 112 files, -169/+169)

  Applying this commit results in a bit-identical rebuild of
  mathvec/libmvec.so.1 (which is the only binary that gets rebuilt).

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* Enable libmvec support for AArch64  (Joe Ramsay, 2023-05-03, 3 files, -104/+54)

  This patch enables libmvec on AArch64.  The proposed change mainly
  implements the build infrastructure to add the new routines to the ABI,
  tests and benchmarks.

  I have demonstrated how this all fits together by adding implementations
  for vector cos, in both single and double precision, targeting both
  Advanced SIMD and SVE.  The implementations of the routines themselves are
  just loops over the scalar routine from libm for now, as we are more
  concerned with getting the plumbing right at this point.  We plan to
  contribute vector routines from the Arm Optimized Routines repo that are
  compliant with the requirements described in the libmvec wiki.

  Building libmvec requires at least GCC 10, for the SVE ACLE.  To avoid
  raising the minimum GCC version by such a big jump, we allow users to
  disable libmvec if their compiler is too old.

  Note that at this point users have to manually call the vector math
  functions.  This seems to be acceptable to some downstream users.

  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
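  As a hedged illustration of the "loop over the scalar routine" placeholder
  mentioned above (the vector-ABI symbol name and the use of NEON intrinsics
  are assumptions for this sketch; the actual glibc sources may be structured
  differently):
  ```
  #include <arm_neon.h>
  #include <math.h>

  /* Advanced SIMD double-precision cos: apply the scalar libm routine to
     each lane of the two-element vector.  */
  float64x2_t
  _ZGVnN2v_cos (float64x2_t x)
  {
    float64x2_t r = vdupq_n_f64 (0.0);
    r = vsetq_lane_f64 (cos (vgetq_lane_f64 (x, 0)), r, 0);
    r = vsetq_lane_f64 (cos (vgetq_lane_f64 (x, 1)), r, 1);
    return r;
  }
  ```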
* nptl: Move tst-x86-64-tls-1 to nptl-only tests  (Samuel Thibault, 2023-05-01, 3 files, -2/+4)

  It is essentially nptl-only.
* hurd: Implement prefer_map_32bit_exec tunable  (Sergey Bugaev, 2023-04-24, 5 files, -0/+119)

  This makes the prefer_map_32bit_exec tunable no longer Linux-specific.

  Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
  Message-Id: <20230423215526.346009-4-bugaevc@gmail.com>
* hurd: Add sys/ucontext.h and sigcontext.h for x86_64  (Sergey Bugaev, 2023-04-10, 1 file, -0/+157)

  This is based on the Linux port's version, but laid out to match Mach's
  struct i386_thread_state, much like the i386 version does.

  Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
* x86_64: Fix asm constraints in feraiseexcept (bug 30305)  (Florian Weimer, 2023-04-03, 1 file, -2/+2)

  The divss instruction clobbers its first argument, and the constraints need
  to reflect that.  Fortunately, with GCC 12, the generated code does not
  actually change, so there is no externally visible bug.

  Suggested-by: Jakub Jelinek <jakub@redhat.com>
  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* x86_64: Add rtld-stpncpy & rtld-strncpy  (Sergey Bugaev, 2023-04-03, 2 files, -0/+36)

  Just like the other existing rtld-str* files, this provides rtld with
  usable versions of stpncpy and strncpy.

  Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
  Message-Id: <20230319151017.531737-22-bugaevc@gmail.com>
* htl: Add tcb-offsets.sym for x86_64  (Sergey Bugaev, 2023-04-03, 2 files, -0/+28)

  The source code is the same as sysdeps/i386/htl/tcb-offsets.sym, but of
  course the produced tcb-offsets.h will be different.

  Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
  Message-Id: <20230319151017.531737-21-bugaevc@gmail.com>
* Remove --enable-tunables configure option  (Adhemerval Zanella Netto, 2023-03-29, 1 file, -2/+0)

  And make tunables always supported.  The configure option was added in
  glibc 2.25 and some features require it (such as the hwcap mask, huge pages
  support, and lock elision tuning).  It also simplifies the build
  permutations.

  Changes from v1:
  * Remove the glibc.rtld.dynamic_sort changes; they are orthogonal and need
    more discussion.
  * Clean up more code.

  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* benchtests: Move libmvec benchtest inputs to benchtests directory  (Joe Ramsay, 2023-03-27, 53 files, -213201/+1)

  This allows other targets to use the same inputs for their own libmvec
  microbenchmarks without having to duplicate them in their own subdirectory.

  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
* htl: Add pthreadtypes-arch.h for x86_64  (Sergey Bugaev, 2023-02-27, 1 file, -0/+36)

  Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
  Message-Id: <20230221211932.296459-5-bugaevc@gmail.com>
* x86_64: Update libm test ulps  (H.J. Lu, 2023-02-27, 1 file, -0/+1)

  Update libm test ulps for

	commit 3efbf11fdf15ed991d2c41743921c524a867e145
	Author: Paul Zimmermann <Paul.Zimmermann@inria.fr>
	Date:   Tue Feb 14 11:24:59 2023 +0100

	    update auto-libm-test-out-hypot

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* hurd, htl: Add some x86_64-specific code  (Sergey Bugaev, 2023-02-12, 1 file, -0/+29)

  Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
  Message-Id: <20230212111044.610942-12-bugaevc@gmail.com>
* htl: Generalize i386 pt-machdep.h to x86  (Samuel Thibault, 2023-02-12, 1 file, -0/+1)
* string: Add libc_hidden_proto for memrchr  (Adhemerval Zanella, 2023-02-08, 2 files, -0/+2)

  Although the static linker can optimize it to a local call, this follows
  the internal scheme of providing hidden protos and definitions.

  Reviewed-by: Carlos Eduardo Seo <carlos.seo@linaro.org>
* string: Add libc_hidden_proto for strchrnul  (Adhemerval Zanella, 2023-02-08, 2 files, -0/+5)

  Although the static linker can optimize it to a local call, this follows
  the internal scheme of providing hidden protos and definitions.

  Reviewed-by: Carlos Eduardo Seo <carlos.seo@linaro.org>
* Parameterize op_t from memcopy.h  (Adhemerval Zanella, 2023-02-06, 1 file, -0/+24)

  This moves the op_t definition out to a specific header, adds the
  'may_alias' attribute, and cleans up its duplicated definitions.

  Checked with a build, and with make check with run-built-tests=no, for all
  major Linux ABIs.

  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
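  A hedged sketch of the resulting definition (the actual glibc header may
  differ in spelling and location):
  ```
  /* Word-sized operand type for the generic string/memory routines; the
     may_alias attribute lets them load and store through op_t * without
     strict-aliasing violations.  */
  typedef unsigned long int __attribute__ ((__may_alias__)) op_t;
  #define OPSIZ (sizeof (op_t))
  ```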
* x86: Fix strncat-avx2.S reading past length [BZ #30065]  (Noah Goldstein, 2023-01-31, 1 file, -2/+2)

  Occurs when `src` has no null-term.

  Two cases:

  1) The zero-length check is doing:
     ```
     test	%rdx, %rdx
     jl	L(zero_len)
     ```
     which doesn't actually check for zero (it was at some point `decq` and
     the branch never got updated).  The fix is just to make the branch
     `jle`, i.e.:
     ```
     test	%rdx, %rdx
     jle	L(zero_len)
     ```

  2) The length check in the page-cross case, checking whether we should
     continue, is doing:
     ```
     cmpq	%r8, %rdx
     jb	L(page_cross_small)
     ```
     which means we will continue searching for the null-term if the length
     ends at the end of a page and there was no null-term in `src`.  The fix
     is to make the branch:
     ```
     cmpq	%r8, %rdx
     jbe	L(page_cross_small)
     ```
* Update copyright dates with scripts/update-copyrights  (Joseph Myers, 2023-01-06, 1168 files, -1168/+1168)
* stdio-common: Convert vfprintf and related functions to buffers  (Florian Weimer, 2022-12-19, 1 file, -18/+3)

  vfprintf is entangled with vfwprintf (of course), __printf_fp,
  __printf_fphex, __vstrfmon_l_internal, and the strfrom family of functions.
  The latter use the internal snprintf functionality, so vsnprintf is
  converted as well.

  The simplest conversion is __printf_fphex, followed by
  __vstrfmon_l_internal and __printf_fp, and finally __vfprintf_internal and
  __vfwprintf_internal.  __vsnprintf_internal and strfrom* mostly consume the
  new interfaces, so they are comparatively simple.

  __printf_fp is a public symbol, so the FILE *-based interface had to be
  preserved.

  The __printf_fp rewrite does not change the actual binary-to-decimal
  conversion algorithm, and digits are still not emitted directly to the
  target buffer.  However, the staging buffer now uses bytes instead of wide
  characters, and one buffer copy is eliminated.

  The changes are at least performance-neutral in my testing.  Floating-point
  printing and snprintf improved measurably, so that this Lua script

	for i=1,5000000 do
	    print(i, i * math.pi)
	end

  runs about 5% faster for me.  To preserve fprintf performance for a simple
  "%d" format, this commit has some logic changes under LABEL (unsigned_number)
  to avoid additional function calls.

  There are certainly some very easy performance improvements here: binary,
  octal and hexadecimal formatting can easily avoid the temporary work buffer
  (the number of digits can be computed ahead of time using one of the
  __builtin_clz* built-ins).  Decimal formatting can use a specialized
  version of _itoa_word for base 10.

  The existing (inconsistent) width handling between strfmon and printf is
  preserved here.  __print_fp_buffer_1 would have to use
  __translated_number_width to achieve ISO conformance for printf.

  Test expectations in libio/tst-vtables-common.c are adjusted because the
  internal staging buffer merges all virtual function calls into one.

  In general, stack buffer usage is greatly reduced, particularly for
  unbuffered input streams.  __printf_fp can still use a large buffer in
  binary128 mode for %g, though.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* x86: Prevent SIGSEGV in memcmp-sse2 when data is concurrently modified [BZ #29863]  (Noah Goldstein, 2022-12-15, 1 file, -1/+11)

  In the case of INCORRECT usage of `memcmp(a, b, N)` where `a` and `b` are
  concurrently modified as `memcmp` runs, there can be a SIGSEGV in
  `L(ret_nonzero_vec_end_0)` because the sequential logic assumes that
  `(rdx - 32 + rax)` is a positive 32-bit integer.

  To be clear, this change does not mean such usage of `memcmp` is supported.
  The program behaviour is undefined (UB) in the presence of data races, and
  `memcmp` is incorrect when the values of `a` and/or `b` are modified
  concurrently (data race).  This UB may manifest itself as a SIGSEGV.  That
  being said, if we can allow the idiomatic use cases, like those in yottadb
  with opportunistic concurrency control (OCC), to execute without a SIGSEGV,
  at no cost to regular use cases, then we can aim to minimize harm to those
  existing users.

  The fix replaces a 32-bit `addl %edx, %eax` with the 64-bit variant
  `addq %rdx, %rax`.  The one extra byte of code size from using the 64-bit
  instruction doesn't contribute to overall code size, as the next target is
  aligned and has multiple bytes of `nop` padding before it.  As well, all
  the logic between the add and `ret` still fits in the same fetch block, so
  the cost of this change is basically zero.

  The relevant sequential logic can be seen in the following pseudo-code:
  ```
  /*
   * rsi = a
   * rdi = b
   * rdx = len - 32
   */
  /* cmp a[0:15] and b[0:15].  Since length is known to be [17, 32] in this
     case, this check is also assumed to cover a[0:(31 - len)] and
     b[0:(31 - len)].  */
  movups	(%rsi), %xmm0
  movups	(%rdi), %xmm1
  PCMPEQ	%xmm0, %xmm1
  pmovmskb	%xmm1, %eax
  subl	%ecx, %eax
  jnz	L(END_NEQ)

  /* cmp a[len-16:len-1] and b[len-16:len-1].  */
  movups	16(%rsi, %rdx), %xmm0
  movups	16(%rdi, %rdx), %xmm1
  PCMPEQ	%xmm0, %xmm1
  pmovmskb	%xmm1, %eax
  subl	%ecx, %eax
  jnz	L(END_NEQ2)
  ret

  L(END2):
  /* Position first mismatch.  */
  bsfl	%eax, %eax

  /* The sequential version is able to assume this value is a positive
     32-bit value because the first check included bytes in range
     a[0:(31 - len)] and b[0:(31 - len)], so `eax` must be greater than
     `31 - len`, so the minimum value of `edx` + `eax` is
     `(len - 32) + (32 - len) >= 0`.  In the concurrent case, however, `a`
     or `b` could have been changed, so a mismatch in `eax` less than or
     equal to `(31 - len)` is possible (the new low bound is `(16 - len)`).
     This can result in a negative 32-bit signed integer, which when zero
     extended to 64 bits is a random large value that is out of bounds.  */
  addl	%edx, %eax

  /* Crash here because the 32-bit negative number in `eax` zero extends
     to an out-of-bounds 64-bit offset.  */
  movzbl	16(%rdi, %rax), %ecx
  movzbl	16(%rsi, %rax), %eax
  ```

  This fix is quite simple: just make the `addl %edx, %eax` 64-bit (i.e.
  `addq %rdx, %rax`).  This prevents the 32-bit zero extension and, since
  `eax` still has a lower bound of `16 - len`, `rdx + rax` is bounded by
  `(len - 32) + (16 - len) >= -16`.  Since we have a fixed offset of `16` in
  the memory access, this must be in bounds.
* x86-64 strncpy: Properly handle the length parameter [BZ# 29839]  (H.J. Lu, 2022-12-02, 2 files, -0/+8)

  On x32, the size_t parameter may be passed in the lower 32 bits of a 64-bit
  register with non-zero upper 32 bits.  The string/memory functions written
  in assembly can only use the lower 32 bits of a 64-bit register as the
  length, or must clear the upper 32 bits before using the full 64-bit
  register for the length.

  This patch fixes strncpy for x32.  Tested on x86-64 and x32.  On x86-64,
  libc.so is the same with and without the fix.

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
* x86-64 strncat: Properly handle the length parameter [BZ# 24097]  (H.J. Lu, 2022-12-02, 5 files, -1/+73)

  On x32, the size_t parameter may be passed in the lower 32 bits of a 64-bit
  register with non-zero upper 32 bits.  The string/memory functions written
  in assembly can only use the lower 32 bits of a 64-bit register as the
  length, or must clear the upper 32 bits before using the full 64-bit
  register for the length.

  This patch fixes strncat for x32.  Tested on x86-64 and x32.  On x86-64,
  libc.so is the same with and without the fix.

  Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>