path: root/sysdeps/aarch64
Format of each entry: commit subject (Author, Date, files changed, -deleted/+added lines), followed by the commit message.
* aarch64: MTE compatible strncmp (Alex Butler, 2020-06-23, 1 file, -99/+145)
  Add support for MTE to strncmp. Regression tested with xcheck and benchmarked with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1. The existing implementation assumes that any access to the pages in which the string resides is safe. This assumption is not true when MTE is enabled. This patch updates the algorithm to ensure that accesses remain within the bounds of an MTE tag (16-byte chunks) and improves overall performance.
  Co-authored-by: Branislav Rankov <branislav.rankov@arm.com>
  Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
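  Note: a hedged C sketch of the MTE-safety idea shared by this series (illustrative only; the real routines implement it in AArch64 assembly with SIMD registers). Aligning the scan pointer down to 16 bytes keeps every load inside the current tag granule:

      #include <stdint.h>
      #include <string.h>

      /* Illustrative MTE-safe scan: every load is confined to one
         16-byte granule, so no access crosses into memory that may
         carry a different tag.  */
      static size_t
      mte_safe_strlen (const char *s)
      {
        const char *p = (const char *) ((uintptr_t) s & ~(uintptr_t) 15);
        size_t skip = (size_t) (s - p);  /* offset of s within its granule */

        for (;;)
          {
            /* memchr stands in for the vectorised compare used in
               the assembly version.  */
            const char *nul = memchr (p + skip, '\0', 16 - skip);
            if (nul != NULL)
              return (size_t) (nul - s);
            p += 16;
            skip = 0;
          }
      }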
* aarch64: MTE compatible strcmp (Alex Butler, 2020-06-23, 1 file, -109/+125)
  Add support for MTE to strcmp. Regression tested with xcheck and benchmarked with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1. The existing implementation assumes that any access to the pages in which the string resides is safe. This assumption is not true when MTE is enabled. This patch updates the algorithm to ensure that accesses remain within the bounds of an MTE tag (16-byte chunks) and improves overall performance.
  Co-authored-by: Branislav Rankov <branislav.rankov@arm.com>
  Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
* aarch64: MTE compatible strrchr (Alex Butler, 2020-06-23, 1 file, -114/+91)
  Add support for MTE to strrchr. Regression tested with xcheck and benchmarked with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1. The existing implementation assumes that any access to the pages in which the string resides is safe. This assumption is not true when MTE is enabled. This patch updates the algorithm to ensure that accesses remain within the bounds of an MTE tag (16-byte chunks) and improves overall performance.
  Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
* aarch64: MTE compatible memrchr (Alex Butler, 2020-06-23, 1 file, -116/+83)
  Add support for MTE to memrchr. Regression tested with xcheck and benchmarked with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1. The existing implementation assumes that any access to the pages in which the string resides is safe. This assumption is not true when MTE is enabled. This patch updates the algorithm to ensure that accesses remain within the bounds of an MTE tag (16-byte chunks) and improves overall performance.
  Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
* aarch64: MTE compatible memchr (Alex Butler, 2020-06-23, 1 file, -107/+80)
  Add support for MTE to memchr. Regression tested with xcheck and benchmarked with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1. The existing implementation assumes that any access to the pages in which the string resides is safe. This assumption is not true when MTE is enabled. This patch updates the algorithm to ensure that accesses remain within the bounds of an MTE tag (16-byte chunks) and improves overall performance.
  Co-authored-by: Gabor Kertesz <gabor.kertesz@arm.com>
* aarch64: MTE compatible strcpy (Alex Butler, 2020-06-23, 1 file, -263/+122)
  Add support for MTE to strcpy. Regression tested with xcheck and benchmarked with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1. The existing implementation assumes that any access to the pages in which the string resides is safe. This assumption is not true when MTE is enabled. This patch updates the algorithm to ensure that accesses remain within the bounds of an MTE tag (16-byte chunks) and improves overall performance.
  Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
* aarch64: Remove fpu Makefile (Adhemerval Zanella, 2020-06-22, 1 file, -14/+0)
  The -fno-math-errno flag is already added by default, and the minimum GCC required to build glibc (6.2) makes -ffinite-math-only superfluous. Checked on aarch64-linux-gnu.
* aarch64: Use math-use-builtins for ceil{f} (Adhemerval Zanella, 2020-06-22, 2 files, -58/+0)
  The define is already set in math-use-builtins-ceil.h; this patch just removes the implementations (this was missed in c9feb1be93). Checked on aarch64-linux-gnu.
* math: Decompose math-use-builtins.h (Adhemerval Zanella, 2020-06-22, 9 files, -71/+32)
  Each symbol's definition is moved to a separate file covering all the symbol type definitions (float, double, long double, and float128). This lets an architecture declare support per symbol without the boilerplate of copying the default values. Checked with a build on the affected ABIs.
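  Note: a hedged sketch of the per-symbol header shape this produces (the USE_<FUNC>_BUILTIN macro naming follows these headers; defaults vary by architecture and GCC version, and my_ceil is a stand-in for glibc's internal __ceil):

      /* math-use-builtins-ceil.h (illustrative generic default).  An
         architecture whose GCC expands these builtins inline simply
         provides its own copy with the values flipped to 1.  */
      #define USE_CEIL_BUILTIN 0
      #define USE_CEILF_BUILTIN 0
      #define USE_CEILL_BUILTIN 0
      #define USE_CEILF128_BUILTIN 0

      /* Consumer side, e.g. a generic ceil implementation: */
      double
      my_ceil (double x)
      {
      #if USE_CEIL_BUILTIN
        return __builtin_ceil (x);
      #else
        /* Illustrative fallback; the real generic code manipulates the
           IEEE bit representation and handles NaN/overflow properly.  */
        double t = (double) (long long) x;   /* truncate toward zero */
        return (t < x) ? t + 1.0 : t;
      #endif
      }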
* aarch64: MTE compatible strlen (Andrea Corallo, 2020-06-09, 1 file, -183/+56)
  Introduce an Arm MTE compatible strlen implementation. The existing implementation assumes that any access to the pages in which the string resides is safe. This assumption is not true when MTE is enabled. This patch updates the algorithm to ensure that accesses remain within the bounds of an MTE tag (16-byte chunks) and improves overall performance on modern cores. On cores with a less efficient Advanced SIMD implementation, such as the Cortex-A53, it can be slower. Benchmarked on Cortex-A72, Cortex-A53, Neoverse N1.
  Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
* aarch64: MTE compatible strchr (Andrea Corallo, 2020-06-09, 1 file, -91/+71)
  Introduce an Arm MTE compatible strchr implementation. The existing implementation assumes that any access to the pages in which the string resides is safe. This assumption is not true when MTE is enabled. This patch updates the algorithm to ensure that accesses remain within the bounds of an MTE tag (16-byte chunks) and improves overall performance. Benchmarked on Cortex-A72, Cortex-A53, Neoverse N1.
  Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
* aarch64: MTE compatible strchrnul (Andrea Corallo, 2020-06-09, 1 file, -85/+51)
  Introduce an Arm MTE compatible strchrnul implementation. The existing implementation assumes that any access to the pages in which the string resides is safe. This assumption is not true when MTE is enabled. This patch updates the algorithm to ensure that accesses remain within the bounds of an MTE tag (16-byte chunks) and improves overall performance. Benchmarked on Cortex-A72, Cortex-A53, Neoverse N1.
  Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
* AArch64: Merge Falkor memcpy and memmove implementations (Krzysztof Koch, 2020-06-08, 3 files, -245/+144)
  Falkor's memcpy and memmove share some implementation details, so the two routines are moved into a single source file for code reuse. The two routines now share code for small and medium copies (up to and including 128 bytes). Large copies in memcpy do not handle overlap correctly, so the loops for moving/copying more than 128 bytes stay separate for memcpy and memmove.
  To increase code reuse, a number of small modifications were made:
  1. The old memcpy implementation copied the first 16 bytes as soon as the size was determined to be greater than 32 bytes. For the memcpy code to also work when copying small/medium overlapping data, the first load and store were moved to the large-copy case.
  2. The medium memcpy case no longer assumes that 16 bytes were already copied and uses 8 registers to copy up to 128 bytes.
  3. The small case for memmove was enlarged to that of memcpy, i.e. less than or equal to 32 bytes.
  4. The medium case for memmove was enlarged to that of memcpy, i.e. less than or equal to 128 bytes.
  Other changes include:
  1. Improve alignment of existing loop bodies.
  2. 'Delouse' memmove and memcpy input arguments: make sure that the upper 32 bits of input registers are zeroed if unused.
  3. Do one more iteration in the memmove loops and reduce the number of copies made from the start/end of the buffer, depending on the direction of the memmove loop.
  Benchmarking: the results from bench-memcpy-random.out show that memmove_falkor is now about 5% faster than memcpy_falkor_old, while memmove_falkor_old was more than 15% slower. The memcpy implementation remained largely unmodified, so there is no significant performance change. The reason for such a significant memmove performance gain is the increase of the upper bound on the small copy case to 32 bytes and of the upper bound on the medium copy case to 128 bytes.
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
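  Note: the overlap handling that forces separate large-copy loops comes down to copy direction. A minimal C sketch of the rule (illustrative; the real routines use unrolled SIMD/GPR loops):

      #include <stddef.h>
      #include <stdint.h>

      /* Illustrative: a forward loop corrupts data when dst overlaps
         the source at a higher address, so memmove must pick a
         direction; memcpy's forward-only bulk loop cannot be shared.  */
      static void *
      sketch_memmove (void *dst, const void *src, size_t n)
      {
        unsigned char *d = dst;
        const unsigned char *s = src;

        if ((uintptr_t) d - (uintptr_t) s >= n)  /* no forward overlap */
          for (size_t i = 0; i < n; i++)         /* copy low to high */
            d[i] = s[i];
        else
          for (size_t i = n; i-- > 0; )          /* copy high to low */
            d[i] = s[i];
        return dst;
      }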
* aarch/fpu: use generic builtins based math functions (Vineet Gupta, 2020-06-03, 15 files, -398/+71)
  Introduce the sysdep header math-use-builtins.h to replace the aarch64 implementations with the corresponding generic ones:
  - newly introduced generic sqrt{,f} and fma{,f}
  - existing floor{,f}, nearbyint{,f}, rint{,f}, round{,f}, trunc{,f}
  - note that generic copysign was already enabled (via the generic math-use-builtins.h), now through the sysdep header
  Tested with build-many-glibcs for aarch64-linux-gnu. This is a non-functional change; aarch64 libm before/after was byte-invariant, as compared below:
  | cd /SCRATCH/vgupta/gnu/install-glibc-A-baseline
  | for i in `find . -name libm-2.31.9000.so`; do
  |   echo $i; diff $i /SCRATCH/vgupta/gnu/install-glibc-C-reduce-scope/$i ;
  |   echo $?;
  | done
  | ./aarch64-linux-gnu/lib64/libm-2.31.9000.so
  | 0
  | ./arm-linux-gnueabi/lib/libm-2.31.9000.so
  | 0
  | ./x86_64-linux-gnu/lib64/libm-2.31.9000.so
  | 0
  | ./arm-linux-gnueabihf/lib/libm-2.31.9000.so
  | 0
  | ./riscv64-linux-gnu-rv64imac-lp64/lib64/lp64/libm-2.31.9000.so
  | 0
  | ./riscv64-linux-gnu-rv64imafdc-lp64/lib64/lp64/libm-2.31.9000.so
  | 0
  | ./powerpc-linux-gnu/lib/libm-2.31.9000.so
  | 0
  | ./microblaze-linux-gnu/lib/libm-2.31.9000.so
  | 0
  | ./nios2-linux-gnu/lib/libm-2.31.9000.so
  | 0
  | ./hppa-linux-gnu/lib/libm-2.31.9000.so
  | 0
  | ./s390x-linux-gnu/lib64/libm-2.31.9000.so
  | 0
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* aarch64: fix strcpy and strnlen for big-endian [BZ #25824] (Lexi Shao, 2020-05-15, 2 files, -0/+10)
  This patch fixes the optimized implementations of strcpy and strnlen on big-endian arm64 machines. The optimized method uses NEON, which can process 128 bits with one instruction. On a big-endian machine, the byte order must be reversed across the whole 128-bit double word. But the instruction rev64 datav.16b, datav.16b reverses the 64 bits in each of the two halves rather than reversing the full 128 bits. There is no rev128 instruction to reverse all 128 bits, but we can fix this by loading the data registers accordingly.
  Fixes 0237b61526e7 ("aarch64: Optimized implementation of strcpy") and 2911cb68ed3d ("aarch64: Optimized implementation of strnlen").
  Signed-off-by: Lexi Shao <shaolexi@huawei.com>
  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
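  Note: the rev64 limitation is easy to demonstrate with ACLE intrinsics (AArch64-only; a hedged illustration of the problem, not of the fix):

      #include <arm_neon.h>

      /* vrev64q_u8 maps to the rev64 instruction: it reverses the byte
         order *within each 64-bit half* of the 128-bit vector.  Bytes
         0..7 become 7..0 and bytes 8..15 become 15..8; the two halves
         do not swap, so it is not a full 128-bit byte reverse.  */
      uint8x16_t
      rev64_halves (uint8x16_t v)
      {
        return vrev64q_u8 (v);
      }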
* Update aarch64 libm-test-ulps (Adhemerval Zanella, 2020-04-08, 1 file, -15/+16)
* math: Remove inline math tests (Adhemerval Zanella, 2020-03-19, 1 file, -840/+0)
  With the mathinline removal there is no need to keep building and testing the inline math tests. The gen-libm-tests.py support for generating ULP_I_* is removed, and all libm-test-ulps files are updated to no longer have the i{float,double,ldouble} entries. The support for no-test-inline is also removed from gen-auto-libm-tests, and the auto-libm-test-out-* files were regenerated. Checked on x86_64-linux-gnu and i686-linux-gnu.
* [AArch64] Improve integer memcpy (Wilco Dijkstra, 2020-03-11, 1 file, -96/+101)
  Further optimize integer memcpy. Small cases now include copies up to 32 bytes. 64-128 byte copies are split into two cases to improve performance of 64-96 byte copies. Comments have been rewritten.
* Introduce <elf-initfini.h> and ELF_INITFINI for all architectures (Florian Weimer, 2020-02-18, 1 file, -0/+20)
  This supersedes the init_array sysdeps directory. It allows us to check for ELF_INITFINI in both C and assembler code, and to skip DT_INIT and DT_FINI processing completely on newer architectures. A new header file is needed because <dl-machine.h> is incompatible with assembler code. <sysdep.h> is compatible with assembler code, but it cannot be included in all assembler files because on some architectures it redefines register names, and some assembler files conflict with that. <elf-initfini.h> is replicated for legacy architectures which need DT_INIT/DT_FINI support. New architectures follow the generic default and disable it.
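  Note: because the header must be safe for both C and assembler, it can contain only preprocessor definitions. A hedged sketch of the shape (the guard name is an assumption; per-architecture values live under sysdeps):

      /* <elf-initfini.h> sketch (illustrative): safe to include from
         both C and assembler because it contains only a macro.  */
      #ifndef _ELF_INITFINI_H
      #define _ELF_INITFINI_H

      /* Nonzero if the dynamic linker should still process DT_INIT and
         DT_FINI; new architectures define this to 0 and rely solely on
         DT_INIT_ARRAY/DT_FINI_ARRAY.  */
      #define ELF_INITFINI 1

      #endif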
* nptl: add missing pthread-offsets.h (Andreas Schwab, 2020-02-10, 1 file, -0/+3)
  All architectures using their own definition of struct __pthread_rwlock_arch_t need to provide their own pthread-offsets.h.
* Add libm_alias_finite for _finite symbols (Wilco Dijkstra, 2020-01-03, 3 files, -3/+6)
  This patch adds a new macro, libm_alias_finite, to define all the _finite symbols. It sets each _finite symbol as a compat symbol based on its first version (obtained from the definition in the build-generated first-versions.h). The <fn>f128_finite symbols were introduced in glibc 2.26 and so need special treatment in code that is shared between long double and float128. This is done by adding a list, similar to the internal symbol redefinition, in sysdeps/ieee754/float128/float128_private.h. Alpha also needs some tricky changes to ensure we still emit 2 compat symbols for sqrt(f). Passes build-many-glibcs.
  Co-authored-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
  Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
* Update copyright dates with scripts/update-copyrights. (Joseph Myers, 2020-01-01, 134 files, -134/+134)
* aarch64: add default memcpy version for kunpeng920 (Xuelei Zhang, 2019-12-27, 1 file, -1/+1)
  Checked on aarch64-linux-gnu.
* aarch64: ifunc rename for kunpeng (Xuelei Zhang, 2019-12-27, 2 files, -2/+2)
  Rename the kunpeng ifunc to kunpeng920 and update the corresponding function files, including the IS_KUNPENG920 check. Checked on aarch64-linux-gnu.
* aarch64: Modify error-shown comments for strcpy (Xuelei Zhang, 2019-12-27, 1 file, -1/+1)
  Checked on aarch64-linux-gnu.
* aarch64: Optimized memset for Kunpeng processor. (Xuelei Zhang, 2019-12-19, 4 files, -2/+119)
  Due to a branch prediction issue on the Kunpeng processor, we found that memset_generic has poor performance for medium-size settings, so we restructured the logic and unrolled the loop 4x in set_long to solve the problem; even sets below 1K benefit. Another change is that DC ZVA showed no benefit when setting zero, so we discarded it and use set_long to set zero instead. Fewer branches and predictions also give the zero case a slight improvement. Checked on aarch64-linux-gnu.
  Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
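  Note: a hedged sketch of the 4x unrolling idea in set_long (illustrative C; each 16-byte store below stands in for a stp of a register pair in the real assembly):

      #include <stddef.h>
      #include <string.h>

      /* 4x unrolled: one loop branch per 64 bytes written instead of
         one per 16, which suits the Kunpeng branch predictor better.  */
      static void
      set_long_sketch (unsigned char *p, size_t n, unsigned char c)
      {
        while (n >= 64)
          {
            memset (p, c, 16);        /* stands in for one stp pair */
            memset (p + 16, c, 16);
            memset (p + 32, c, 16);
            memset (p + 48, c, 16);
            p += 64;
            n -= 64;
          }
        if (n > 0)
          memset (p, c, n);           /* tail handled separately */
      }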
* aarch64: Optimized strlen for strlen_asimd (Xuelei Zhang, 2019-12-19, 2 files, -17/+29)
  Optimize the strlen implementation by using vector operations and loop unrolling in the main loop. Compared to __strlen_generic, it reduces the latency of cases in bench-strlen by 7-18% when the length of src is greater than 128 bytes, with gains throughout the benchmark. Checked on aarch64-linux-gnu.
  Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
* aarch64: Optimized implementation of memrchr (Xuelei Zhang, 2019-12-19, 1 file, -0/+165)
  Given the excellent performance of memchr.S in glibc 2.30, the same algorithm is used to search for the character in reverse. Compared to memrchr.c, the memrchr.S version achieves an average performance improvement of 58% based on the benchtests and their extension cases. Checked on aarch64-linux-gnu.
  Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
* aarch64: Optimized implementation of strnlen (Xuelei Zhang, 2019-12-19, 1 file, -1/+51)
  Optimize the strnlen implementation by using vector operations and loop unrolling in the main loop. Compared to aarch64/strnlen.S, it reduces the latency of cases in bench-strnlen by 11-24% when the length of src is greater than 64 bytes, with gains throughout the benchmark. Checked on aarch64-linux-gnu.
  Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
* aarch64: Optimized implementation of strcpy (Xuelei Zhang, 2019-12-19, 1 file, -32/+27)
  Optimize the strcpy implementation by using vector loads and operations in the main loop. Compared to aarch64/strcpy.S, it reduces the latency of cases in bench-strcpy by 5-18% when the length of src is greater than 64 bytes, with gains throughout the benchmark. Checked on aarch64-linux-gnu.
  Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
* aarch64: Optimized implementation of memcmp (Xuelei Zhang, 2019-12-19, 1 file, -53/+79)
  The loop body is expanded from a 16-byte comparison to a 64-byte comparison, and the ldp addressing is switched from post-index mode to base-plus-offset mode. As a result, memcmp is around 18% faster for sizes above 128 bytes overall. Checked on aarch64-linux-gnu.
  Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
* elf: Do not run IFUNC resolvers for LD_DEBUG=unused [BZ #24214] (Florian Weimer, 2019-12-02, 1 file, -1/+2)
  This commit adds missing skip_ifunc checks to aarch64, arm, i386, sparc, and x86_64. A new test case ensures that IRELATIVE IFUNC resolvers do not run in various diagnostic modes of the dynamic loader.
  Reviewed-By: Szabolcs Nagy <szabolcs.nagy@arm.com>
* nptl: Add default pthread-offsets.h (Adhemerval Zanella, 2019-11-26, 1 file, -3/+0)
  This patch adds a default pthread-offsets.h based on the default thread definitions from struct_mutex.h and struct_rwlock.h. The idea is to simplify the inclusion of new ports. Checked with a build on the affected ABIs.
  Change-Id: I7785a9581e651feb80d1413b9e03b5ac0452668a
* nptl: Add struct_rwlock.h (Adhemerval Zanella, 2019-11-26, 2 files, -15/+41)
  This patch adds a new generic __pthread_rwlock_arch_t definition meant to be used by new ports. Its layout mimics the current usage on some 64-bit ports, and it allows some ports to use the generic definition. The arch __pthread_rwlock_arch_t definition is moved from pthreadtypes-arch.h to another arch-specific header (struct_rwlock.h). Also, the static initialization macro for pthread_rwlock_t is set to use an arch-defined macro (__PTHREAD_RWLOCK_INITIALIZER), which simplifies its implementation.
  The default pthread_rwlock_t layout differs from current ports in two ways:
  1. The internal layout is the same for 32 bits and 64 bits.
  2. The internal flag is an unsigned short, so it should not require additional padding to align on a word boundary (if that is the case for the ABI).
  Checked with a build on the affected ABIs.
  Change-Id: I776a6a986c23199929d28a3dcd30272db21cd1d0
* nptl: Add struct_mutex.h (Adhemerval Zanella, 2019-11-26, 1 file, -7/+0)
  The current way of defining the common mutex definition for POSIX and C11 in pthreadtypes-arch.h (added by commit 06be6368da16104be5) is not really the best option for newer ports. It requires defining some misleading flags that should always be 0 (__PTHREAD_COMPAT_PADDING_MID and __PTHREAD_COMPAT_PADDING_END), it exposes options used solely for linuxthreads compat mode (__PTHREAD_MUTEX_USE_UNION and __PTHREAD_MUTEX_NUSERS_AFTER_KIND), and it requires newer ports to define them explicitly (adding more boilerplate code).
  This patch adds a new default __pthread_mutex_s definition meant to be used by newer ports. Its layout mimics the current usage on both 32- and 64-bit ports, and it allows most ports to use the generic definition. Only ports that use some arch-specific definition (such as hardware lock elision or linuxthreads compat) require specific headers.
  For 32 bits, the generic definition mimics the other 32-bit ports by using a union for the fields used by adaptive and robust mutexes (thus not allowing both at the same time) and by using a single linked list for robust mutexes. Both decisions follow what recent ports have done and make the resulting pthread_mutex_t/mtx_t object smaller.
  Also, the static initialization macro for pthread_mutex_t is set to use a macro __PTHREAD_MUTEX_INITIALIZER, which the architecture can redefine in its struct_mutex.h if it requires additional fields to be initialized.
  Checked with a build on the affected ABIs.
  Change-Id: I30a22c3e3497805fd6e52994c5925897cffcfe13
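  Note: a hedged sketch of the layout idea described above (field names and ordering are illustrative, not the authoritative glibc definition in struct_mutex.h):

      /* Illustrative only: the union reflects the design described in
         the commit, where adaptive spinning and the robust-mutex list
         cannot be used at the same time, keeping the object small.  */
      struct sketch_pthread_mutex_s
      {
        int __lock;             /* futex word */
        unsigned int __count;   /* recursive lock count */
        int __owner;
        int __kind;             /* the only field whose offset must stay
                                   stable, for static initializers */
        union
        {
          int __spins;          /* adaptive mutexes */
          void *__list;         /* single linked list, robust mutexes */
        };
      };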
* nptl: Remove rwlock elision definitions (Adhemerval Zanella, 2019-11-26, 1 file, -2/+0)
  The new rwlock implementation added by cc25c8b4c1196 (2.25) removed support for lock elision. This patch removes the remaining unused arch-specific definitions. Checked with a build against all affected ABIs.
  Change-Id: I5dec8af50e3cd56d7351c52ceff4aa3771b53cd6
* nptl: Add tests for internal pthread_rwlock_t offsets (Adhemerval Zanella, 2019-11-26, 1 file, -0/+2)
  This patch adds new build tests to check the offsets of internal fields in the internal pthread_rwlock_t definition. Although only the '__data.__flags' field layout must be preserved due to static initializers, the patch also adds tests for the futexes that may be used in shared memory (although using different libc versions in such a scenario is not really supported). Checked with a build against all affected ABIs.
  Change-Id: Iccc103d557de13d17e4a3f59a0cad2f4a640c148
* nptl: Cleanup mutex internal offset tests (Adhemerval Zanella, 2019-11-26, 1 file, -4/+0)
  The offsets of pthread_mutex_t __data.__nusers, __data.__spins, __data.__elision, and __data.__list are not required to be constant across releases. Only __data.__kind is used for static initializers. This patch also adds an additional size check for __data.__kind. Checked with a build against the affected ABIs.
  Change-Id: I7a4e48cc91b4c4ada57e9a5d1b151fb702bfaa9f
* aarch64: Increase small and medium cases for __memcpy_generic (Krzysztof Koch, 2019-11-12, 1 file, -35/+47)
  Increase the upper bound on medium cases from 96 to 128 bytes. Now, up to 128 bytes are copied unrolled. Increase the upper bound on small cases from 16 to 32 bytes so that copies of 17-32 bytes are not impacted by the larger medium case.
  Benchmarking: the attached figures show the relative timing difference with respect to 'memcpy_generic', the existing implementation. 'memcpy_med_128' denotes the version of memcpy_generic with only the medium case enlarged. The 'memcpy_med_128_small_32' numbers are for the version of memcpy_generic submitted in this patch, which has both the medium and small cases enlarged. The figures were generated using the script from: https://www.sourceware.org/ml/libc-alpha/2019-10/msg00563.html
  Depending on the platform, the performance improvement in the bench-memcpy-random.c benchmark ranges from 6% to 20% between the original and final versions of memcpy.S. Tested against the glibc testsuite and randomized tests.
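  Note: a hedged sketch of the resulting size dispatch (illustrative C; copy_small, copy_medium, and copy_large are hypothetical stand-ins for the unrolled assembly paths):

      #include <stddef.h>
      #include <string.h>

      /* Hypothetical helpers standing in for the real code paths.  */
      static void copy_small  (void *d, const void *s, size_t n) { memcpy (d, s, n); }
      static void copy_medium (void *d, const void *s, size_t n) { memcpy (d, s, n); }
      static void copy_large  (void *d, const void *s, size_t n) { memcpy (d, s, n); }

      /* Illustrative dispatch after this change: small and medium sizes
         take fixed, unrolled paths; only n > 128 enters the bulk loop.  */
      void
      memcpy_dispatch_sketch (void *dst, const void *src, size_t n)
      {
        if (n <= 32)        /* small: raised from 16 to 32 bytes */
          copy_small (dst, src, n);
        else if (n <= 128)  /* medium: raised from 96 to 128 bytes */
          copy_medium (dst, src, n);
        else
          copy_large (dst, src, n);
      }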
* Split up endian.h to minimize exposure of BYTE_ORDER. (Alistair Francis, 2019-10-01, 3 files, -31/+16)
  With only two exceptions (sys/types.h and sys/param.h, both of which historically might have defined BYTE_ORDER) the public headers that include <endian.h> only want to be able to test __BYTE_ORDER against __*_ENDIAN.
  This patch creates a new bits/endian.h that can be included by any header that wants to be able to test __BYTE_ORDER and/or __FLOAT_WORD_ORDER against the __*_ENDIAN constants, or needs __LONG_LONG_PAIR. It only defines macros in the implementation namespace.
  The existing bits/endian.h (which could not be included independently of endian.h, and only defines __BYTE_ORDER and maybe __FLOAT_WORD_ORDER) is renamed to bits/endianness.h. I also took the opportunity to canonicalize the form of this header, which we are stuck with having one copy of per architecture. Since they are so short, this means git doesn’t understand that they were renamed from existing headers, sigh.
  endian.h itself is a nonstandard header and its only remaining use from a standard header is guarded by __USE_MISC, so I dropped the __USE_MISC conditionals from around all of the public-namespace things it defines. (This means an application that requests strict library conformance but includes endian.h will still see the definition of BYTE_ORDER.)
  A few changes to specific bits/endian(ness).h variants deserve mention:
  - sysdeps/unix/sysv/linux/ia64/bits/endian.h is moved to sysdeps/ia64/bits/endianness.h. If I remember correctly, ia64 did have selectable endianness, but we have assembly code in sysdeps/ia64 that assumes it’s little-endian, so there is no reason to treat the ia64 endianness.h as linux-specific.
  - The C-SKY port does not fully support big-endian mode; the compile will error out if __CSKYBE__ is defined.
  - The PowerPC port had extra logic in its bits/endian.h to detect a broken compiler, which strikes me as unnecessary, so I removed it.
  - The only files that defined __FLOAT_WORD_ORDER always defined it to the same value as __BYTE_ORDER, so I removed those definitions. The SH bits/endian(ness).h had comments inconsistent with the actual setting of __FLOAT_WORD_ORDER, which I also removed.
  - I *removed* copyright boilerplate from the few bits/endian(ness).h headers that had it; these files record a single fact in a fashion dictated by an external spec, so I do not think they are copyrightable.
  As long as I was changing every copy of ieee754.h in the tree, I noticed that only the MIPS variant includes float.h, because it uses LDBL_MANT_DIG to decide among three different versions of ieee854_long_double. This patch makes it not include float.h when GCC’s intrinsic __LDBL_MANT_DIG__ is available.
  * string/endian.h: Unconditionally define LITTLE_ENDIAN, BIG_ENDIAN, PDP_ENDIAN, and BYTE_ORDER. Condition byteswapping macros only on !__ASSEMBLER__. Move the definitions of __BIG_ENDIAN, __LITTLE_ENDIAN, __PDP_ENDIAN, __FLOAT_WORD_ORDER, and __LONG_LONG_PAIR to...
  * string/bits/endian.h: ...this new file, which includes the renamed header bits/endianness.h for the definition of __BYTE_ORDER and possibly __FLOAT_WORD_ORDER.
  * string/Makefile: Install bits/endianness.h.
  * include/bits/endian.h: New wrapper.
  * bits/endian.h: Rename to bits/endianness.h. Add multiple-include guard. Rewrite the comment explaining what the machine-specific variants of this file should do.
  * sysdeps/unix/sysv/linux/ia64/bits/endian.h: Move to sysdeps/ia64.
  * sysdeps/aarch64/bits/endian.h
  * sysdeps/alpha/bits/endian.h
  * sysdeps/arm/bits/endian.h
  * sysdeps/csky/bits/endian.h
  * sysdeps/hppa/bits/endian.h
  * sysdeps/ia64/bits/endian.h
  * sysdeps/m68k/bits/endian.h
  * sysdeps/microblaze/bits/endian.h
  * sysdeps/mips/bits/endian.h
  * sysdeps/nios2/bits/endian.h
  * sysdeps/powerpc/bits/endian.h
  * sysdeps/riscv/bits/endian.h
  * sysdeps/s390/bits/endian.h
  * sysdeps/sh/bits/endian.h
  * sysdeps/sparc/bits/endian.h
  * sysdeps/x86/bits/endian.h: Rename to endianness.h; canonicalize form of file; remove redundant definitions of __FLOAT_WORD_ORDER.
  * sysdeps/powerpc/bits/endianness.h: Remove logic to check for broken compilers.
  * ctype/ctype.h
  * sysdeps/aarch64/nptl/bits/pthreadtypes-arch.h
  * sysdeps/arm/nptl/bits/pthreadtypes-arch.h
  * sysdeps/csky/nptl/bits/pthreadtypes-arch.h
  * sysdeps/ia64/ieee754.h
  * sysdeps/ieee754/ieee754.h
  * sysdeps/ieee754/ldbl-128/ieee754.h
  * sysdeps/ieee754/ldbl-128ibm/ieee754.h
  * sysdeps/m68k/nptl/bits/pthreadtypes-arch.h
  * sysdeps/microblaze/nptl/bits/pthreadtypes-arch.h
  * sysdeps/mips/ieee754/ieee754.h
  * sysdeps/mips/nptl/bits/pthreadtypes-arch.h
  * sysdeps/nios2/nptl/bits/pthreadtypes-arch.h
  * sysdeps/nptl/pthread.h
  * sysdeps/riscv/nptl/bits/pthreadtypes-arch.h
  * sysdeps/sh/nptl/bits/pthreadtypes-arch.h
  * sysdeps/sparc/sparc32/ieee754.h
  * sysdeps/unix/sysv/linux/generic/bits/stat.h
  * sysdeps/unix/sysv/linux/generic/bits/statfs.h
  * sysdeps/unix/sysv/linux/sys/acct.h
  * wctype/bits/wctype-wchar.h: Include bits/endian.h, not endian.h.
  * sysdeps/unix/sysv/linux/hppa/pthread.h: Don’t include endian.h.
  * sysdeps/mips/ieee754/ieee754.h: Use __LDBL_MANT_DIG__ in ifdefs, instead of LDBL_MANT_DIG. Only include float.h when __LDBL_MANT_DIG__ is not predefined, in which case define __LDBL_MANT_DIG__ to equal LDBL_MANT_DIG.
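  Note: the canonical form of the renamed per-architecture header is tiny. A hedged aarch64-flavoured sketch (guard names are an assumption; __AARCH64EB__ is the compiler's big-endian predefine):

      /* bits/endianness.h sketch (illustrative).  Only defines
         __BYTE_ORDER; included via the new bits/endian.h, never
         directly.  */
      #ifndef _BITS_ENDIANNESS_H
      #define _BITS_ENDIANNESS_H 1

      #ifdef __AARCH64EB__
      # define __BYTE_ORDER __BIG_ENDIAN
      #else
      # define __BYTE_ORDER __LITTLE_ENDIAN
      #endif

      #endif /* bits/endianness.h */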
* Prefer https to http for gnu.org and fsf.org URLs (Paul Eggert, 2019-09-07, 132 files, -132/+132)
  Also, change sources.redhat.com to sourceware.org. This patch was automatically generated by running the following shell script, which uses GNU sed, and which avoids modifying files imported from upstream:
  sed -ri '
    s,(http|ftp)(://(.*\.)?(gnu|fsf|sourceware)\.org($|[^.]|\.[^a-z])),https\2,g
    s,(http|ftp)(://(.*\.)?)sources\.redhat\.com($|[^.]|\.[^a-z]),https\2sourceware.org\4,g
  ' \
    $(find $(git ls-files) -prune -type f \
        ! -name '*.po' \
        ! -name 'ChangeLog*' \
        ! -path COPYING ! -path COPYING.LIB \
        ! -path manual/fdl-1.3.texi ! -path manual/lgpl-2.1.texi \
        ! -path manual/texinfo.tex ! -path scripts/config.guess \
        ! -path scripts/config.sub ! -path scripts/install-sh \
        ! -path scripts/mkinstalldirs ! -path scripts/move-if-change \
        ! -path INSTALL ! -path locale/programs/charmap-kw.h \
        ! -path po/libc.pot ! -path sysdeps/gnu/errlist.c \
        ! '(' -name configure \
              -execdir test -f configure.ac -o -f configure.in ';' ')' \
        ! '(' -name preconfigure \
              -execdir test -f preconfigure.ac ';' ')' \
        -print)
  and then by running 'make dist-prepare' to regenerate files built from the altered files, and then executing the following to cleanup:
  chmod a+x sysdeps/unix/sysv/linux/riscv/configure
  # Omit irrelevant whitespace and comment-only changes,
  # perhaps from a slightly-different Autoconf version.
  git checkout -f \
    sysdeps/csky/configure \
    sysdeps/hppa/configure \
    sysdeps/riscv/configure \
    sysdeps/unix/sysv/linux/csky/configure
  # Omit changes that caused a pre-commit check to fail like this:
  # remote: *** error: sysdeps/powerpc/powerpc64/ppc-mcount.S: trailing lines
  git checkout -f \
    sysdeps/powerpc/powerpc64/ppc-mcount.S \
    sysdeps/unix/sysv/linux/s390/s390-64/syscall.S
  # Omit change that caused a pre-commit check to fail like this:
  # remote: *** error: sysdeps/sparc/sparc64/multiarch/memcpy-ultra3.S: last line does not end in newline
  git checkout -f sysdeps/sparc/sparc64/multiarch/memcpy-ultra3.S
* aarch64: Disable using DC ZVA in emag memset (Feng Xue, 2019-08-14, 2 files, -7/+17)
  * sysdeps/aarch64/multiarch/memset_base64.S (DC_ZVA_THRESHOLD): Disable DC ZVA code if this macro is defined as zero.
  * sysdeps/aarch64/multiarch/memset_emag.S (DC_ZVA_THRESHOLD): Change to zero to disable using DC ZVA.
* Declare most TS 18661-1 interfaces for C2X. (Joseph Myers, 2019-08-13, 1 file, -1/+1)
  C2X adds the interfaces from TS 18661-1, and all except a handful in Annex F are unconditionally visible in C2X rather than only visible when __STDC_WANT_IEC_60559_BFP_EXT__ is defined. This patch updates glibc headers accordingly: most uses of __GLIBC_USE (IEC_60559_BFP_EXT) are changed to a new __GLIBC_USE (IEC_60559_BFP_EXT_C2X). (Regarding totalorder and totalordermag, the type-generic macros in tgmath.h will go away when the functions are changed to take pointer arguments.)
  * bits/libc-header-start.h (__GLIBC_USE_IEC_60559_BFP_EXT): Update comment.
  (__GLIBC_USE_IEC_60559_BFP_EXT_C2X): New macro.
  * bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Change to [__GLIBC_USE (IEC_60559_BFP_EXT_C2X)].
  * include/limits.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * math/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * math/math.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * stdlib/bits/stdlib-ldbl.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * stdlib/stdint.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * stdlib/stdlib.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * sysdeps/aarch64/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * sysdeps/alpha/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * sysdeps/arm/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * sysdeps/csky/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * sysdeps/hppa/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * sysdeps/ia64/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * sysdeps/m68k/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * sysdeps/microblaze/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * sysdeps/mips/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * sysdeps/nios2/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * sysdeps/powerpc/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * sysdeps/riscv/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * sysdeps/s390/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * sysdeps/sh/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * sysdeps/sparc/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * sysdeps/x86/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
  * math/bits/mathcalls.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise, except for totalorder, totalordermag, getpayload, setpayload and setpayloadsig.
  * math/tgmath.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise, except for totalorder and totalordermag.
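  Note: a hedged sketch of the gating idea in bits/libc-header-start.h (macro spelling follows the commit text; the exact ISOC2X condition is an assumption):

      /* Illustrative: the C2X gate is on whenever either C2X itself or
         the old TS 18661-1 opt-in enabled the interfaces.  */
      #undef __GLIBC_USE_IEC_60559_BFP_EXT_C2X
      #if __GLIBC_USE (IEC_60559_BFP_EXT) || __GLIBC_USE (ISOC2X)
      # define __GLIBC_USE_IEC_60559_BFP_EXT_C2X 1
      #else
      # define __GLIBC_USE_IEC_60559_BFP_EXT_C2X 0
      #endif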
* aarch64: simplify the DT_AARCH64_VARIANT_PCS handling code (Szabolcs Nagy, 2019-07-10, 2 files, -6/+1)
  Remove the unnecessary variant_pcs field: the dynamic tag can be checked directly.
  * sysdeps/aarch64/dl-machine.h (elf_machine_runtime_setup): Remove the DT_AARCH64_VARIANT_PCS check.
  (elf_machine_lazy_rel): Use l_info[DT_AARCH64 (VARIANT_PCS)].
  * sysdeps/aarch64/linkmap.h (struct link_map_machine): Remove variant_pcs.
* aarch64: new ifunc resolver ABI (Szabolcs Nagy, 2019-07-04, 5 files, -1/+185)
  Passing a second argument to the ifunc resolver allows accessing AT_HWCAP2 values from the resolver. AArch64 will start using AT_HWCAP2 on Linux because, for ILP32 to remain compatible with the LP64 ABI, no more than 32 hwcap flag bits can be in AT_HWCAP, which is already used up.
  Currently the relocation ordering logic does not guarantee that ifunc resolvers can call libc APIs or access libc objects, so only the resolver arguments and runtime-environment-dependent instructions can be used to do the dispatch (this affects ifunc resolvers outside of libc).
  Since the ifunc resolver is target specific and only supposed to be called by the dynamic linker, the call ABI can be changed in a backward compatible way: the old call ABI passed hwcap as uint64_t; the new ABI sets the _IFUNC_ARG_HWCAP flag in the hwcap and passes a second argument that is a pointer to an extendible struct. A resolver has to check the _IFUNC_ARG_HWCAP flag before accessing the second argument.
  The newly installed sys/ifunc.h header has the definitions for the new ABI; everything is in the implementation-reserved namespace.
  An alternative approach is to try to support extern calls from ifunc resolvers such as getauxval, but that seems non-trivial: https://sourceware.org/ml/libc-alpha/2017-01/msg00468.html
  * sysdeps/aarch64/Makefile: Install sys/ifunc.h and add tests.
  * sysdeps/aarch64/dl-irel.h (elf_ifunc_invoke): Update to new ABI.
  * sysdeps/aarch64/sys/ifunc.h: New file.
  * sysdeps/aarch64/tst-ifunc-arg-1.c: New file.
  * sysdeps/aarch64/tst-ifunc-arg-2.c: New file.
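  Note: a minimal resolver sketch under the new ABI (AArch64-only; foo, impl_base, and impl_ext are hypothetical, and the dispatch condition is illustrative):

      #include <stdint.h>
      #include <sys/ifunc.h>   /* AArch64-only installed header */

      static int impl_base (void) { return 0; }
      static int impl_ext (void) { return 1; }

      /* The second argument is valid only when _IFUNC_ARG_HWCAP is set
         in the first; older loaders pass the raw hwcap alone.  */
      static __typeof__ (impl_base) *
      resolve_foo (uint64_t hwcap, const __ifunc_arg_t *arg)
      {
        uint64_t hwcap2 = 0;
        if (hwcap & _IFUNC_ARG_HWCAP)
          hwcap2 = arg->_hwcap2;           /* AT_HWCAP2 bits */
        return hwcap2 != 0 ? impl_ext : impl_base;
      }

      int foo (void) __attribute__ ((ifunc ("resolve_foo")));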
* aarch64: handle STO_AARCH64_VARIANT_PCS (Szabolcs Nagy, 2019-06-13, 3 files, -4/+60)
  Avoid lazy binding of symbols that may follow a variant PCS with a different register usage convention from the base PCS.
  Currently the lazy binding entry code does not preserve all the registers required for AdvSIMD and SVE vector calls. Saving and restoring all registers unconditionally may break existing binaries, even if they never use vector calls, because of the larger stack requirement for lazy resolution, which can be significant on an SVE system.
  The solution is to mark all symbols in the symbol table that may follow a variant PCS so the dynamic linker can handle them specially. In this patch such symbols are always resolved at load time, not lazily. So currently LD_AUDIT for variant PCS symbols is not supported; for that, the _dl_runtime_profile entry needs to be changed, e.g. to unconditionally save/restore all registers (but pass down arg and retval registers to pltentry/exit callbacks according to the base PCS).
  This patch also removes a __builtin_expect from the modified code because the branch prediction hint did not seem useful.
  * sysdeps/aarch64/dl-dtprocnum.h: New file.
  * sysdeps/aarch64/dl-machine.h (DT_AARCH64): Define.
  (elf_machine_runtime_setup): Handle DT_AARCH64_VARIANT_PCS.
  (elf_machine_lazy_rel): Check STO_AARCH64_VARIANT_PCS and bind such symbols at load time.
  * sysdeps/aarch64/linkmap.h (struct link_map_machine): Add variant_pcs.
* aarch64: thunderx2 memmove performance improvements (Anton Youdkevitch, 2019-05-03, 3 files, -362/+216)
  The performance improvement is about 20-30% for larger cases and about 1-5% for smaller cases.
  Used SIMD load/store instead of GPR for large overlapping forward moves. Reused the existing memcpy implementation for smaller or overlapping backward moves. Fixed the existing memcpy implementation to allow it to deal with the overlapping case. Simplified loop tails in the memcpy implementation: use a branchless overlapping sequence of fixed-length load/stores instead of branching depending on the size. A cleanup/optimization converting str's to stp's. Added __memmove_thunderx2 to the list of available implementations.
* aarch64: thunderx2 memcpy implementation cleanup and streamlining (Anton Youdkevitch, 2019-04-05, 1 file, -21/+22)
  Here is the updated patch for improving the long unaligned code path (the one using the "ext" instruction).
  1. The always-taken conditional branch at the beginning is removed.
  2. Epilogue code is placed after the end of the loop to reduce the number of branches.
  3. The redundant "mov" instructions inside the loop are gone due to the changed order of the registers in the "ext" instructions inside the loop; the prologue has an additional "ext" instruction.
  4. Updating the count in the prologue was hoisted out, as it is the same update for each prologue.
  5. Invariant code of the loop epilogue was hoisted out.
  6. As the current size of the ext chunk is exactly 16 instructions, a "nop" was added at the beginning of the code sequence so that the loop entry for all the chunks is aligned.
  * sysdeps/aarch64/multiarch/memcpy_thunderx2.S: Cleanup branching and remove redundant code.
* Break more lines before not after operators. (Joseph Myers, 2019-02-25, 2 files, -5/+5)
  This patch makes further coding style fixes where code was breaking lines after an operator, contrary to the GNU Coding Standards. As with the previous patch, it is limited to files following a reasonable approximation to GNU style already, and is not exhaustive; more such issues remain to be fixed. Tested for x86_64, and with build-many-glibcs.py.
  * dirent/dirent.h [!_DIRENT_HAVE_D_NAMLEN && _DIRENT_HAVE_D_RECLEN] (_D_ALLOC_NAMLEN): Break lines before rather than after operators.
  * elf/cache.c (print_cache): Likewise.
  * gshadow/fgetsgent_r.c (__fgetsgent_r): Likewise.
  * htl/pt-getattr.c (__pthread_getattr_np): Likewise.
  * hurd/hurdinit.c (_hurd_setproc): Likewise.
  * hurd/hurdkill.c (_hurd_sig_post): Likewise.
  * hurd/hurdlookup.c (__file_name_lookup_under): Likewise.
  * hurd/hurdsig.c (_hurd_internal_post_signal): Likewise. (reauth_proc): Likewise.
  * hurd/lookup-at.c (__file_name_lookup_at): Likewise. (__file_name_split_at): Likewise. (__directory_name_split_at): Likewise.
  * hurd/lookup-retry.c (__hurd_file_name_lookup_retry): Likewise.
  * hurd/port2fd.c (_hurd_port2fd): Likewise.
  * iconv/gconv_dl.c (do_print): Likewise.
  * inet/netinet/in.h (struct sockaddr_in): Likewise.
  * libio/wstrops.c (_IO_wstr_seekoff): Likewise.
  * locale/setlocale.c (new_composite_name): Likewise.
  * malloc/memusagestat.c (main): Likewise.
  * misc/fstab.c (fstab_convert): Likewise.
  * nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_usercnt): Likewise.
  * nss/nss_compat/compat-grp.c (getgrent_next_nss): Likewise. (getgrent_next_file): Likewise. (internal_getgrnam_r): Likewise.
  * nss/nss_compat/compat-initgroups.c (getgrent_next_nss): Likewise. (internal_getgrent_r): Likewise.
  * nss/nss_compat/compat-pwd.c (getpwent_next_nss_netgr): Likewise. (getpwent_next_nss): Likewise. (getpwent_next_file): Likewise. (internal_getpwnam_r): Likewise. (internal_getpwuid_r): Likewise.
  * nss/nss_compat/compat-spwd.c (getspent_next_nss_netgr): Likewise. (getspent_next_nss): Likewise. (internal_getspnam_r): Likewise.
  * pwd/fgetpwent_r.c (__fgetpwent_r): Likewise.
  * shadow/fgetspent_r.c (__fgetspent_r): Likewise.
  * string/strchr.c (STRCHR): Likewise.
  * string/strchrnul.c (STRCHRNUL): Likewise.
  * sysdeps/aarch64/fpu/fpu_control.h (_FPU_FPCR_IEEE): Likewise.
  * sysdeps/aarch64/sfp-machine.h (_FP_CHOOSENAN): Likewise.
  * sysdeps/csky/dl-machine.h (elf_machine_rela): Likewise.
  * sysdeps/generic/memcopy.h (PAGE_COPY_FWD_MAYBE): Likewise.
  * sysdeps/generic/symbol-hacks.h (__stack_chk_fail_local): Likewise.
  * sysdeps/gnu/netinet/ip_icmp.h (ICMP_INFOTYPE): Likewise.
  * sysdeps/gnu/updwtmp.c (TRANSFORM_UTMP_FILE_NAME): Likewise.
  * sysdeps/gnu/utmp_file.c (TRANSFORM_UTMP_FILE_NAME): Likewise.
  * sysdeps/hppa/jmpbuf-unwind.h (_JMPBUF_UNWINDS): Likewise.
  * sysdeps/mach/hurd/bits/stat.h (S_ISPARE): Likewise.
  * sysdeps/mach/hurd/dl-sysdep.c (_dl_sysdep_start): Likewise. (open_file): Likewise.
  * sysdeps/mach/hurd/htl/pt-mutexattr-setprotocol.c (pthread_mutexattr_setprotocol): Likewise.
  * sysdeps/mach/hurd/ioctl.c (__ioctl): Likewise.
  * sysdeps/mach/hurd/mmap.c (__mmap): Likewise.
  * sysdeps/mach/hurd/ptrace.c (ptrace): Likewise.
  * sysdeps/mach/hurd/spawni.c (__spawni): Likewise.
  * sysdeps/microblaze/dl-machine.h (elf_machine_type_class): Likewise. (elf_machine_rela): Likewise.
  * sysdeps/mips/mips32/sfp-machine.h (_FP_CHOOSENAN): Likewise.
  * sysdeps/mips/mips64/sfp-machine.h (_FP_CHOOSENAN): Likewise.
  * sysdeps/mips/sys/asm.h (multiple #if conditionals): Likewise.
  * sysdeps/posix/rename.c (rename): Likewise.
  * sysdeps/powerpc/novmx-sigjmp.c (__novmx__sigjmp_save): Likewise.
  * sysdeps/powerpc/sigjmp.c (__vmx__sigjmp_save): Likewise.
  * sysdeps/s390/fpu/fenv_libc.h (FPC_VALID_MASK): Likewise.
  * sysdeps/s390/utf8-utf16-z9.c (gconv_end): Likewise.
  * sysdeps/unix/grantpt.c (grantpt): Likewise.
  * sysdeps/unix/sysv/linux/a.out.h (N_TXTOFF): Likewise.
  * sysdeps/unix/sysv/linux/updwtmp.c (TRANSFORM_UTMP_FILE_NAME): Likewise.
  * sysdeps/unix/sysv/linux/utmp_file.c (TRANSFORM_UTMP_FILE_NAME): Likewise.
  * sysdeps/x86/cpu-features.c (get_common_indices): Likewise.
  * time/tzfile.c (__tzfile_compute): Likewise.
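  Note: the rule being enforced, as a minimal illustration:

      /* GNU style: when an expression must wrap, the operator begins
         the continuation line rather than ending the previous one.  */
      static int
      sum3 (int a, int b, int c)
      {
        return (a
                + b
                + c);
      }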
* aarch64: Optimized memchr specific to AmpereComputing emag (Feng Xue, 2019-02-01, 6 files, -3/+308)
  This version uses general-register-based memory instructions to load data, because vector-register-based ones are slightly slower on emag. Character matching is performed on a 16-byte (both size and alignment) memory block in parallel in each iteration.
  * sysdeps/aarch64/memchr.S (__memchr): Rename to MEMCHR.
  [!MEMCHR](MEMCHR): Set to __memchr.
  * sysdeps/aarch64/multiarch/Makefile (sysdep_routines): Add memchr_generic and memchr_nosimd.
  * sysdeps/aarch64/multiarch/ifunc-impl-list.c (__libc_ifunc_impl_list): Add memchr ifuncs.
  * sysdeps/aarch64/multiarch/memchr.c: New file.
  * sysdeps/aarch64/multiarch/memchr_generic.S: Likewise.
  * sysdeps/aarch64/multiarch/memchr_nosimd.S: Likewise.
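  Note: general-register character matching amounts to the classic SWAR zero-byte test, sketched here for one 8-byte word (the real code covers 16 bytes per iteration with two such words):

      #include <stdint.h>

      /* Returns nonzero if any byte of x equals c, using only general
         registers: XOR makes matching bytes zero, and subtracting
         0x01 from every byte borrows exactly where a byte was zero.  */
      static uint64_t
      has_byte (uint64_t x, unsigned char c)
      {
        uint64_t pat = 0x0101010101010101ULL * c;  /* broadcast c */
        uint64_t v = x ^ pat;                      /* matches -> 0x00 */
        return (v - 0x0101010101010101ULL) & ~v & 0x8080808080808080ULL;
      }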