path: root/sysdeps/aarch64

* aarch64: Make cpu-features definitions not Linux-specific (Sergey Bugaev, 2024-01-04, 2 files, -0/+110)
  These describe generic AArch64 CPU features, and are not tied to a kernel-specific way of determining them. We can share them between the Linux and Hurd AArch64 ports.
  Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
  Message-ID: <20240103171502.1358371-13-bugaevc@gmail.com>

* aarch64: Add longjmp test for SME (Szabolcs Nagy, 2024-01-02, 2 files, -0/+283)
  Includes test for setcontext too. The test directly checks after longjmp if ZA got disabled and the ZA contents got saved following the lazy saving scheme. It does not use ACLE code to verify that gcc can interoperate with glibc.
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

* aarch64: Add longjmp support for SME (Szabolcs Nagy, 2024-01-02, 1 file, -0/+22)
  For the ZA lazy saving scheme to work, longjmp has to call __libc_arm_za_disable. In ld.so we assume ZA is not used, so longjmp does not need special support there.
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

* aarch64: Add SME runtime support (Szabolcs Nagy, 2024-01-02, 3 files, -3/+129)
  The runtime support routines for the call ABI of the Scalable Matrix Extension (SME) are mostly in libgcc. Since libc.so cannot depend on libgcc_s.so, have an implementation of __arm_za_disable in libc for libc-internal use in longjmp and similar APIs.
  __libc_arm_za_disable follows the same PCS rules as __arm_za_disable, but it's a hidden symbol so it does not need variant PCS marking. It uses __libc_fatal instead of abort because it can print a message and works in ld.so too. But for now we don't need SME routines in ld.so.
  To check the SME HWCAP in asm, we need the _dl_hwcap2 member offset in _rtld_global_ro in the shared libc.so, while in libc.a the _dl_hwcap2 object is accessed.
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

* Update copyright dates with scripts/update-copyrights (Paul Eggert, 2024-01-01, 223 files, -223/+223)

* aarch64: Add SIMD attributes to math functions with vector versions (Joe Ramsay, 2023-12-20, 2 files, -0/+113)
  Added annotations for autovec by GCC and GFortran; this enables GCC >= 9 to autovectorise math calls at -Ofast.
  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
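
  A minimal sketch of what the annotation enables, assuming GCC's "simd" function attribute (which the glibc math-vector headers build on); my_cos and apply_cos are illustrative names, not glibc symbols:

      /* Declaring the scalar prototype with the "simd" attribute tells GCC
         that vector variants following the AArch64 vector ABI exist.  */
      __attribute__ ((__simd__ ("notinbranch"))) double my_cos (double);

      void
      apply_cos (double *restrict out, const double *restrict in, int n)
      {
        /* At -Ofast, GCC >= 9 may vectorise this loop and call a vector
           variant of the annotated function instead of the scalar one.  */
        for (int i = 0; i < n; i++)
          out[i] = my_cos (in[i]);
      }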

* aarch64: Add half-width versions of AdvSIMD f32 libmvec routines (Joe Ramsay, 2023-12-20, 18 files, -14/+108)
  Compilers may emit calls to 'half-width' routines (two-lane single-precision variants). These have been added in the form of wrappers around the full-width versions, where the low half of the vector is simply duplicated. This will perform poorly when one lane triggers the special-case handler, as there will be a redundant call to the scalar version; however, this is expected to be rare at -Ofast.
  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
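
  The wrapper scheme can be sketched with ACLE intrinsics (an illustration only; the expf names below follow the AArch64 vector function ABI, but this is not the glibc source):

      #include <arm_neon.h>

      /* Full-width (four-lane) single-precision routine provided by libmvec.  */
      float32x4_t _ZGVnN4v_expf (float32x4_t);

      /* Half-width (two-lane) entry point: duplicate the low half, call the
         full-width routine, then keep only the low half of the result.  */
      float32x2_t
      _ZGVnN2v_expf (float32x2_t x)
      {
        float32x4_t wide = vcombine_f32 (x, x);
        return vget_low_f32 (_ZGVnN4v_expf (wide));
      }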

* aarch64: correct CFI in rawmemchr (bug 31113) (Andreas Schwab, 2023-12-05, 1 file, -1/+1)
  The .cfi_return_column directive changes the return column for the whole FDE range. But the actual intent is to tell the unwinder that the value in x30 (lr) now resides in x15 after the move, and that is expressed by the .cfi_register directive.

* aarch64: fix tested ifunc variants (Szabolcs Nagy, 2023-12-04, 1 file, -3/+3)
  Don't test a64fx string functions when BTI is enabled since they are not BTI compatible.

* aarch64: Improve special-case handling in AdvSIMD double-precision libmvec routines (Joe Ramsay, 2023-11-29, 1 file, -1/+7)
  Avoids emitting many saves/restores of vector registers and reduces the amount of code generated around the scalar fallback.

* aarch64: Fix libmvec benchmarks (Joe Ramsay, 2023-11-22, 2 files, -49/+81)
  These were broken by the new atan2 functions, as they were only set up for univariate functions. Arity is now detected from the input file; this revealed a mistake that the double-precision inputs were being used for both single- and double-precision routines, which is now remedied.

* elf: Remove LD_PROFILE for static binaries (Adhemerval Zanella, 2023-11-21, 2 files, -2/+4)
  The _dl_non_dynamic_init does not parse LD_PROFILE, so profiling is not enabled for dlopen objects. Since dlopen is deprecated for static objects, it is better to remove the support. It also allows trimming profiling support from libc.a. Checked on x86_64-linux-gnu.
  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>

* aarch64: Add vector implementations of expm1 routines (Joe Ramsay, 2023-11-20, 12 files, -0/+458)
  May discard sign of 0; auto tests for -0 and -0x1p-10000 updated accordingly.
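
  For reference, the sign-of-zero caveat concerns cases like the one below: the scalar routine preserves the sign bit, while the vector variants are permitted to return +0.0 (a standalone illustration, not one of the glibc auto tests):

      #include <math.h>
      #include <stdio.h>

      int
      main (void)
      {
        double r = expm1 (-0.0);
        /* Scalar expm1 returns -0.0 here, so this prints "-0 signbit=1".  */
        printf ("%g signbit=%d\n", r, signbit (r) != 0);
        return 0;
      }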

* AArch64: Remove Falkor memcpy (Wilco Dijkstra, 2023-11-13, 5 files, -324/+0)
  The latest implementations of memcpy are actually faster than the Falkor implementations [1], so remove the falkor/phecda ifuncs for memcpy and the now unused IS_FALKOR/IS_PHECDA defines.
  [1] https://sourceware.org/pipermail/libc-alpha/2022-December/144227.html
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

* AArch64: Add memset_zva64 (Wilco Dijkstra, 2023-11-13, 6 files, -68/+38)
  Add a specialized memset for the common ZVA size of 64 to avoid the overhead of reading the ZVA size. Since the code is identical to __memset_falkor, remove the latter.
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

* AArch64: Cleanup emag memset (Wilco Dijkstra, 2023-11-13, 4 files, -197/+90)
  Cleanup emag memset: merge the memset_base64.S file, remove the unused ZVA code (since it is disabled on emag).
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

* aarch64: Add vector implementations of log1p routines (Joe Ramsay, 2023-11-10, 12 files, -0/+496)
  May discard sign of zero.

* aarch64: Add vector implementations of atan2 routines (Joe Ramsay, 2023-11-10, 14 files, -0/+531)

* aarch64: Add vector implementations of atan routines (Joe Ramsay, 2023-11-10, 12 files, -0/+403)

* aarch64: Add vector implementations of acos routines (Joe Ramsay, 2023-11-10, 12 files, -1/+436)

* aarch64: Add vector implementations of asin routines (Joe Ramsay, 2023-11-10, 12 files, -1/+403)

* AArch64: Cleanup ifuncs (Wilco Dijkstra, 2023-11-01, 18 files, -125/+41)
  Cleanup ifuncs. Remove uses of libc_hidden_builtin_def, use ENTRY rather than ENTRY_ALIGN, remove unnecessary defines and conditional compilation. Rename strlen_mte to strlen_generic. Remove rtld-memset.
  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>

* AArch64: Add support for MOPS memcpy/memmove/memset (Wilco Dijkstra, 2023-10-24, 9 files, -1/+137)
  Add support for MOPS in cpu_features and INIT_ARCH. Add ifuncs using MOPS for memcpy, memmove and memset (use .inst for now so it works with all binutils versions without needing complex configure and conditional compilation).
  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>

* aarch64: Add vector implementations of exp10 routines (Joe Ramsay, 2023-10-23, 12 files, -0/+524)
  Double-precision routines either reuse the exp table (AdvSIMD) or use the SVE FEXPA instruction.

* aarch64: Add vector implementations of log10 routines (Joe Ramsay, 2023-10-23, 14 files, -1/+580)
  A table is also added, which is shared between AdvSIMD and SVE log10.

* aarch64: Add vector implementations of log2 routines (Joe Ramsay, 2023-10-23, 14 files, -1/+545)
  A table is also added, which is shared between AdvSIMD and SVE log2.

* aarch64: Add vector implementations of exp2 routines (Joe Ramsay, 2023-10-23, 12 files, -0/+459)
  Some routines reuse table from v_exp_data.c.

* aarch64: Add vector implementations of tan routines (Joe Ramsay, 2023-10-23, 18 files, -1/+1244)
  This includes some utility headers for evaluating polynomials using various schemes.

* aarch64: Optimise vecmath logs (Joe Ramsay, 2023-10-05, 7 files, -215/+226)
  * Transpose table layout for improved memory access
  * Use half-vector special comparisons for AdvSIMD
  * Improve register use near special-case branches
    - Due to the presence of a function call, return value would get mov-d out of x0 in order to facilitate PCS. By moving the final computation after the branch this can be avoided.
  Also change SVE routines to use overloaded intrinsics for readability.
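
  The table transposition can be pictured like this (an illustrative sketch; the struct and field names are invented, not the actual AOR layout):

      /* Before: parallel arrays - a lookup at index i reads from two widely
         separated locations, typically two different cache lines.  */
      struct log_data_split
      {
        double invc[128];
        double logc[128];
      };

      /* After (transposed): the constants for one index are adjacent, so a
         single 16-byte load fetches both.  */
      struct log_data_entry
      {
        double invc, logc;
      };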

* aarch64: Cosmetic change in SVE exp routines (Joe Ramsay, 2023-10-05, 2 files, -47/+44)
  Use overloaded intrinsics for readability. Codegen does not change; however, while we're bringing the routines up-to-date with recent improvements to other routines in AOR it is worth copying this change over as well.

* aarch64: Optimize SVE cos & cosf (Joe Ramsay, 2023-10-05, 2 files, -53/+47)
  Saves a mov by ensuring the return value does not need to be moved out of the way before the special-case branch. Also change to use overloaded intrinsics.

* aarch64: Improve vecmath sin routines (Joe Ramsay, 2023-10-05, 3 files, -73/+87)
  * Update ULP comment reflecting a new observed max in [-pi/2, pi/2]
  * Use the same polynomial in AdvSIMD and SVE, rather than FTRIG instructions
  * Improve register use near special-case branch
  Also use overloaded intrinsics for SVE.

* AArch64: Remove -0.0 check from vector sin (Wilco Dijkstra, 2023-09-26, 2 files, -12/+2)
  Remove the unnecessary extra checks for sin (-0.0) from vector sin/sinf, improving performance. Passes regress.
  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>

* configure: Use autoconf 2.71 (Siddhesh Poyarekar, 2023-07-17, 1 file, -91/+111)
  Bump autoconf requirement to 2.71 to allow regenerating configure on more recent distributions. autoconf 2.71 has been in Fedora since F36 and is the current version in Debian stable (bookworm). It appears to be current in Gentoo as well.
  All sysdeps configure and preconfigure scripts have also been regenerated; all changes are trivial transformations that do not affect functionality.
  Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
  Reviewed-by: Carlos O'Donell <carlos@redhat.com>

* aarch64: Add vector implementations of exp routines (Joe Ramsay, 2023-06-30, 14 files, -1/+593)
  Optimised implementations for single and double precision, Advanced SIMD and SVE, copied from Arm Optimized Routines.
  As previously, data tables are used via a barrier to prevent overly aggressive constant inlining. Special-case handlers are marked NOINLINE to avoid incurring the penalty of switching call standards unnecessarily.
  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>

* aarch64: Add vector implementations of log routines (Joe Ramsay, 2023-06-30, 14 files, -1/+559)
  Optimised implementations for single and double precision, Advanced SIMD and SVE, copied from Arm Optimized Routines. Log lookup table added as HIDDEN symbol to allow it to be shared between AdvSIMD and SVE variants.
  As previously, data tables are used via a barrier to prevent overly aggressive constant inlining. Special-case handlers are marked NOINLINE to avoid incurring the penalty of switching call standards unnecessarily.
  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>

* aarch64: Add vector implementations of sin routines (Joe Ramsay, 2023-06-30, 12 files, -6/+426)
  Optimised implementations for single and double precision, Advanced SIMD and SVE, copied from Arm Optimized Routines.
  As previously, data tables are used via a barrier to prevent overly aggressive constant inlining. Special-case handlers are marked NOINLINE to avoid incurring the penalty of switching call standards unnecessarily.
  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>

* aarch64: Add vector implementations of cos routines (Joe Ramsay, 2023-06-30, 10 files, -117/+608)
  Replace the loop-over-scalar placeholder routines with optimised implementations from Arm Optimized Routines (AOR). Also add some headers containing utilities for aarch64 libmvec routines, and update libm-test-ulps.
  Data tables for new routines are used via a pointer with a barrier on it, in order to prevent overly aggressive constant inlining in GCC. This allows a single adrp, combined with offset loads, to be used for every constant in the table.
  Special-case handlers are marked NOINLINE in order to confine the save/restore overhead of switching from vector to normal calling standard. This way we only incur the extra memory access in the exceptional cases. NOINLINE definitions have been moved to math_private.h in order to reduce duplication.
  AOR exposes a config option, WANT_SIMD_EXCEPT, to enable selective masking (and later fixing up) of invalid lanes, in order to trigger fp exceptions correctly (AdvSIMD only). This is tested and maintained in AOR; however, it is configured off at source level here for performance reasons. We keep the WANT_SIMD_EXCEPT blocks in routine sources to greatly simplify the upstreaming process from AOR to glibc.
  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
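
  The data-table barrier mentioned above is roughly the following idiom (a sketch modelled on the ptr_barrier helper used in the Arm Optimized Routines sources; the table contents are placeholders):

      /* Empty asm that hides the pointer's value from the optimiser, so GCC
         keeps one base address (a single adrp) and loads every constant at an
         offset from it instead of materialising each constant separately.  */
      #define ptr_barrier(ptr)                   \
        ({ __typeof (ptr) __ptr = (ptr);         \
           __asm__ ("" : "+r" (__ptr));          \
           __ptr; })

      static const struct data
      {
        double poly[4];
      } data = { .poly = { 0x1p0, 0x1p-1, 0x1p-2, 0x1p-3 } };  /* placeholders */

      static inline double
      eval_poly (double x)
      {
        const struct data *d = ptr_barrier (&data);
        double p = d->poly[3];
        for (int i = 2; i >= 0; i--)
          p = p * x + d->poly[i];
        return p;
      }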

* Fix a few more typos I missed in previous round -- BZ 25337 (Paul Pluzhnikov, 2023-06-02, 1 file, -1/+1)

* Fix misspellings in sysdeps/ -- BZ 25337 (Paul Pluzhnikov, 2023-05-30, 2 files, -5/+5)

* aarch64: More configure checks for libmvec (Szabolcs Nagy, 2023-05-05, 2 files, -6/+48)
  Check assembler and linker support too, not just SVE ACLE in the compiler, since variant PCS requires at least binutils 2.32.1.

* aarch64: SVE ACLE configure test cleanups (Szabolcs Nagy, 2023-05-05, 2 files, -16/+27)
  Use more idiomatic configure test for better autoconf cache and logs.

* aarch64: fix SVE ACLE check for bootstrap glibc builds (Szabolcs Nagy, 2023-05-04, 2 files, -2/+2)
  arm_sve.h depends on stdint.h but that relies on libc headers unless compiled in freestanding mode. Without this change a bootstrap glibc build (that uses a compiler without installed libc headers) failed with

    checking for availability of SVE ACLE... In file included from [...]/arm_sve.h:28,
                     from conftest.c:1:
    [...]/stdint.h:9:16: fatal error: stdint.h: No such file or directory
        9 | # include_next <stdint.h>
          |                ^~~~~~~~~~
    compilation terminated.
    configure: error: mathvec is enabled but compiler does not have SVE ACLE.
    [...]

* Enable libmvec support for AArch64 (Joe Ramsay, 2023-05-03, 25 files, -0/+910)
  This patch enables libmvec on AArch64. The proposed change is mainly implementing build infrastructure to add the new routines to ABI, tests and benchmarks. I have demonstrated how this all fits together by adding implementations for vector cos, in both single and double precision, targeting both Advanced SIMD and SVE.
  The implementations of the routines themselves are just loops over the scalar routine from libm for now, as we are more concerned with getting the plumbing right at this point. We plan to contribute vector routines from the Arm Optimized Routines repo that are compliant with requirements described in the libmvec wiki.
  Building libmvec requires minimum GCC 10 for SVE ACLE. To avoid raising the minimum GCC by such a big jump, we allow users to disable libmvec if their compiler is too old.
  Note that at this point users have to manually call the vector math functions. This seems to be acceptable to some downstream users.
  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
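
  A loop-over-scalar placeholder of the kind described above is essentially the following (a sketch; _ZGVnN2v_cos is the AdvSIMD double-precision name from the vector function ABI, but the body is illustrative rather than the glibc one):

      #include <arm_neon.h>
      #include <math.h>

      /* Two-lane double-precision cos: apply the scalar libm routine to each
         lane.  Correct but slow; later replaced by optimised routines.  */
      float64x2_t
      _ZGVnN2v_cos (float64x2_t x)
      {
        float64x2_t r = x;
        r = vsetq_lane_f64 (cos (vgetq_lane_f64 (x, 0)), r, 0);
        r = vsetq_lane_f64 (cos (vgetq_lane_f64 (x, 1)), r, 1);
        return r;
      }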

* aarch64: update libm test ulps (Szabolcs Nagy, 2023-02-24, 1 file, -0/+1)

* AArch64: Fix HP_TIMING_DIFF computation [BZ# 29329] (Jun Tang, 2023-02-22, 1 file, -1/+1)
  Fix the computation to allow for cntfrq_el0 being larger than 1GHz. Assume cntfrq_el0 is a multiple of 1MHz to increase the maximum interval (1024 seconds at 1GHz).
  Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>

* string: Remove string_private.h (Adhemerval Zanella, 2023-02-17, 1 file, -20/+0)
  Now that _STRING_ARCH_unaligned is not used anymore.
  Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>

* string: Add libc_hidden_proto for memrchr (Adhemerval Zanella, 2023-02-08, 1 file, -0/+1)
  Although the static linker can optimize it to a local call, it follows the internal scheme to provide hidden proto and definitions.
  Reviewed-by: Carlos Eduardo Seo <carlos.seo@linaro.org>

* string: Add libc_hidden_proto for strchrnul (Adhemerval Zanella, 2023-02-08, 1 file, -0/+1)
  Although the static linker can optimize it to a local call, it follows the internal scheme to provide hidden proto and definitions.
  Reviewed-by: Carlos Eduardo Seo <carlos.seo@linaro.org>

* AArch64: Improve SVE memcpy and memmove (Wilco Dijkstra, 2023-02-06, 1 file, -20/+14)
  Improve SVE memcpy by copying 2 vectors if the size is small enough. This improves performance of random memcpy by ~9% on Neoverse V1, and 33-64 byte copies are ~16% faster.
  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
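
  The small-size path can be sketched with SVE ACLE intrinsics (an illustration of the predicated two-load/two-store pattern only; the real glibc routine is hand-written assembly and differs in detail):

      #include <arm_sve.h>
      #include <stddef.h>
      #include <stdint.h>

      /* Copy n <= 2 * VL bytes without a loop: two predicated loads and
         stores, where the second predicate is all-false when n fits in one
         vector, so no extra memory is touched.  */
      static void
      small_copy (uint8_t *dst, const uint8_t *src, size_t n)
      {
        uint64_t vl = svcntb ();
        svbool_t p0 = svwhilelt_b8_u64 (0, n);
        svbool_t p1 = svwhilelt_b8_u64 (vl, n);
        svst1_u8 (p0, dst, svld1_u8 (p0, src));
        svst1_u8 (p1, dst + vl, svld1_u8 (p1, src + vl));
      }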