path: root/sysdeps/x86
* x86-64: Add vector erf/erff implementation to libmvec
  Sunil K Pandey, 2021-12-29 (2 files, -0/+8)
  Implement vectorized erf/erff containing SSE, AVX, AVX2 and AVX512 versions for libmvec as per vector ABI. It also contains accuracy and ABI tests for vector erf/erff with regenerated ulps.
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* x86-64: Add vector acosh/acoshf implementation to libmvec
  Sunil K Pandey, 2021-12-29 (2 files, -0/+8)
  Implement vectorized acosh/acoshf containing SSE, AVX, AVX2 and AVX512 versions for libmvec as per vector ABI. It also contains accuracy and ABI tests for vector acosh/acoshf with regenerated ulps.
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* x86-64: Add vector atanh/atanhf implementation to libmvec
  Sunil K Pandey, 2021-12-29 (2 files, -0/+8)
  Implement vectorized atanh/atanhf containing SSE, AVX, AVX2 and AVX512 versions for libmvec as per vector ABI. It also contains accuracy and ABI tests for vector atanh/atanhf with regenerated ulps.
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* x86-64: Add vector log1p/log1pf implementation to libmvec
  Sunil K Pandey, 2021-12-29 (2 files, -0/+8)
  Implement vectorized log1p/log1pf containing SSE, AVX, AVX2 and AVX512 versions for libmvec as per vector ABI. It also contains accuracy and ABI tests for vector log1p/log1pf with regenerated ulps.
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* x86-64: Add vector log2/log2f implementation to libmvec
  Sunil K Pandey, 2021-12-29 (2 files, -0/+8)
  Implement vectorized log2/log2f containing SSE, AVX, AVX2 and AVX512 versions for libmvec as per vector ABI. It also contains accuracy and ABI tests for vector log2/log2f with regenerated ulps.
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* x86-64: Add vector log10/log10f implementation to libmvec
  Sunil K Pandey, 2021-12-29 (2 files, -0/+8)
  Implement vectorized log10/log10f containing SSE, AVX, AVX2 and AVX512 versions for libmvec as per vector ABI. It also contains accuracy and ABI tests for vector log10/log10f with regenerated ulps.
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* x86-64: Add vector atan2/atan2f implementation to libmvec
  Sunil K Pandey, 2021-12-29 (2 files, -0/+8)
  Implement vectorized atan2/atan2f containing SSE, AVX, AVX2 and AVX512 versions for libmvec as per vector ABI. It also contains accuracy and ABI tests for vector atan2/atan2f with regenerated ulps.
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* x86-64: Add vector cbrt/cbrtf implementation to libmvec
  Sunil K Pandey, 2021-12-29 (2 files, -0/+8)
  Implement vectorized cbrt/cbrtf containing SSE, AVX, AVX2 and AVX512 versions for libmvec as per vector ABI. It also contains accuracy and ABI tests for vector cbrt/cbrtf with regenerated ulps.
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* x86-64: Add vector sinh/sinhf implementation to libmvec
  Sunil K Pandey, 2021-12-29 (2 files, -0/+8)
  Implement vectorized sinh/sinhf containing SSE, AVX, AVX2 and AVX512 versions for libmvec as per vector ABI. It also contains accuracy and ABI tests for vector sinh/sinhf with regenerated ulps.
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* x86-64: Add vector expm1/expm1f implementation to libmvec
  Sunil K Pandey, 2021-12-29 (2 files, -0/+8)
  Implement vectorized expm1/expm1f containing SSE, AVX, AVX2 and AVX512 versions for libmvec as per vector ABI. It also contains accuracy and ABI tests for vector expm1/expm1f with regenerated ulps.
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* x86-64: Add vector cosh/coshf implementation to libmvec
  Sunil K Pandey, 2021-12-29 (2 files, -0/+8)
  Implement vectorized cosh/coshf containing SSE, AVX, AVX2 and AVX512 versions for libmvec as per vector ABI. It also contains accuracy and ABI tests for vector cosh/coshf with regenerated ulps.
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* x86-64: Add vector exp10/exp10f implementation to libmvec
  Sunil K Pandey, 2021-12-29 (2 files, -0/+8)
  Implement vectorized exp10/exp10f containing SSE, AVX, AVX2 and AVX512 versions for libmvec as per vector ABI. It also contains accuracy and ABI tests for vector exp10/exp10f with regenerated ulps.
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* x86-64: Add vector exp2/exp2f implementation to libmvec
  Sunil K Pandey, 2021-12-29 (2 files, -0/+8)
  Implement vectorized exp2/exp2f containing SSE, AVX, AVX2 and AVX512 versions for libmvec as per vector ABI. It also contains accuracy and ABI tests for vector exp2/exp2f with regenerated ulps.
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* x86-64: Add vector hypot/hypotf implementation to libmvec
  Sunil K Pandey, 2021-12-29 (2 files, -0/+8)
  Implement vectorized hypot/hypotf containing SSE, AVX, AVX2 and AVX512 versions for libmvec as per vector ABI. It also contains accuracy and ABI tests for vector hypot/hypotf with regenerated ulps.
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* x86-64: Add vector asin/asinf implementation to libmvec
  Sunil K Pandey, 2021-12-29 (2 files, -0/+8)
  Implement vectorized asin/asinf containing SSE, AVX, AVX2 and AVX512 versions for libmvec as per vector ABI. It also contains accuracy and ABI tests for vector asin/asinf with regenerated ulps.
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

* x86-64: Add vector atan/atanf implementation to libmvec
  Sunil K Pandey, 2021-12-29 (2 files, -0/+8)
  Implement vectorized atan/atanf containing SSE, AVX, AVX2 and AVX512 versions for libmvec as per vector ABI. It also contains accuracy and ABI tests for vector atan/atanf with regenerated ulps.
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
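The sixteen libmvec entries above all follow the same pattern, and user code reaches the new functions through compiler auto-vectorization rather than by calling the _ZGV* symbols directly. A hedged sketch of how that looks; the exact flags, vector lengths, and variant names such as _ZGVdN4v_erf depend on the compiler and target, and are not part of the commits themselves:

    #include <math.h>

    /* Compiled with something like:
         gcc -O2 -ffast-math -fopenmp-simd -mavx2 erf_loop.c -lmvec -lm
       GCC may replace the scalar erf call below with a libmvec variant,
       e.g. the AVX2 one, _ZGVdN4v_erf.  */
    void
    erf_array (double *restrict out, const double *restrict in, int n)
    {
    #pragma omp simd
      for (int i = 0; i < n; i++)
        out[i] = erf (in[i]);
    }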
* elf: Add _dl_find_object function
  Florian Weimer, 2021-12-28 (1 file, -0/+29)
  It can be used to speed up the libgcc unwinder, and the internal _dl_find_dso_for_object function (which is used for caller identification in dlopen and related functions, and in dladdr).

  _dl_find_object is in the internal namespace due to bug 28503. If libgcc switches to _dl_find_object, this namespace issue will be fixed.

  It is located in libc for two reasons: it is necessary to forward the call to the static libc after static dlopen, and there is a link ordering issue with -static-libgcc and libgcc_eh.a because libc.so is not a linker script that includes ld.so in the glibc build tree (so that GCC's internal -lc after libgcc_eh.a does not pick up ld.so).

  It is necessary to do the i386 customization in the sysdeps/x86/bits/dl_find_object.h header shared with x86-64 because otherwise, multilib installations are broken.

  The implementation uses software transactional memory, as suggested by Torvald Riegel. Two copies of the supporting data structures are used, also achieving full async-signal-safety.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
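The function is exposed to applications (with _GNU_SOURCE) via <dlfcn.h>. A minimal caller sketch using only the documented fields of struct dl_find_object:

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>

    /* Look up the mapping that contains a code address.  _dl_find_object
       returns 0 on success and -1 if no object is found; unlike dladdr,
       it is async-signal-safe.  */
    int
    main (void)
    {
      struct dl_find_object dlfo;
      if (_dl_find_object ((void *) &main, &dlfo) == 0)
        printf ("mapping [%p, %p), eh_frame at %p\n",
                dlfo.dlfo_map_start, dlfo.dlfo_map_end, dlfo.dlfo_eh_frame);
      return 0;
    }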
* Remove atomic-machine.h atomic typedefs
  Adhemerval Zanella, 2021-12-28 (1 file, -33/+7)
  Now that memusage.c uses generic types we can remove them.

* x86-64: Add vector acos/acosf implementation to libmvec
  Sunil K Pandey, 2021-12-22 (2 files, -0/+8)
  Implement vectorized acos/acosf containing SSE, AVX, AVX2 and AVX512 versions for libmvec as per vector ABI. It also contains accuracy and ABI tests for vector acos/acosf with regenerated ulps.
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* elf: Fix tst-cpu-features-cpuinfo for KVM guests on some AMD systems [BZ #28704]
  Aurelien Jarno, 2021-12-17 (1 file, -1/+8)
  On KVM guests running on some AMD systems, the IBRS feature is reported as a synthetic feature using the Intel CPUID bit, while the /proc/cpuinfo entry stays the same. Handle that by first checking for the presence of the Intel feature bit on AMD systems. Fixes bug 28704.
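A hedged illustration of the feature-bit situation the fix deals with, using the public <sys/platform/x86.h> names; the test's actual /proc/cpuinfo matching is omitted:

    #include <sys/platform/x86.h>
    #include <stdio.h>

    /* IBRS can be reported through the Intel bit (IBRS_IBPB) even on AMD
       when running under KVM, so accept either bit before expecting an
       "ibrs" flag in /proc/cpuinfo.  Illustrative only.  */
    int
    main (void)
    {
      int ibrs = CPU_FEATURE_PRESENT (IBRS_IBPB)    /* Intel CPUID bit */
                 || CPU_FEATURE_PRESENT (AMD_IBRS); /* AMD CPUID bit */
      printf ("IBRS reported by CPUID: %s\n", ibrs ? "yes" : "no");
      return 0;
    }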
* x86-64: Remove LD_PREFER_MAP_32BIT_EXEC support [BZ #28656]
  H.J. Lu, 2021-12-10 (2 files, -8/+0)
  Remove the LD_PREFER_MAP_32BIT_EXEC environment variable support since the first PT_LOAD segment is no longer executable due to defaulting to -z separate-code. This fixes [BZ #28656].
  Reviewed-by: Florian Weimer <fweimer@redhat.com>
* nptl: Add <thread_pointer.h> for defining __thread_pointer
  Florian Weimer, 2021-12-09 (1 file, -0/+38)
  <tls.h> already contains a definition that is quite similar, but it is not consistent across architectures. Only architectures for which rseq support is added are covered.
  Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
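On x86 the new header amounts to reading the TCB self-pointer from the thread register. A simplified sketch of the x86-64 case; the installed sysdeps/x86/nptl/thread_pointer.h also covers i386's %gs and newer compilers' __builtin_thread_pointer:

    /* Simplified sketch, x86-64 only: the TCB stores a pointer to itself
       at offset 0 from %fs.  */
    static inline void *
    __thread_pointer (void)
    {
      void *__result;
      __asm__ ("mov %%fs:0, %0" : "=r" (__result));
      return __result;
    }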
* x86: Don't set Prefer_No_AVX512 for processors with AVX512 and AVX-VNNI
  H.J. Lu, 2021-12-06 (1 file, -2/+5)
  Don't set Prefer_No_AVX512 on processors with AVX512 and AVX-VNNI since they won't lower CPU frequency when ZMM load and store instructions are used.
* x86: Double size of ERMS rep_movsb_threshold in dl-cacheinfo.h
  Noah Goldstein, 2021-11-06 (2 files, -14/+20)
  No bug. This patch doubles the rep_movsb_threshold when using ERMS. Based on benchmarks, the vector copy loop, especially now that it handles 4k aliasing, is better for these medium-sized copies.

  On Skylake with ERMS:

    Size, Align1, Align2, dst>src, (rep movsb) / (vec copy)
    4096, 0, 0, 0, 0.975
    4096, 0, 0, 1, 0.953
    4096, 12, 0, 0, 0.969
    4096, 12, 0, 1, 0.872
    4096, 44, 0, 0, 0.979
    4096, 44, 0, 1, 0.83
    4096, 0, 12, 0, 1.006
    4096, 0, 12, 1, 0.989
    4096, 0, 44, 0, 0.739
    4096, 0, 44, 1, 0.942
    4096, 12, 12, 0, 1.009
    4096, 12, 12, 1, 0.973
    4096, 44, 44, 0, 0.791
    4096, 44, 44, 1, 0.961
    4096, 2048, 0, 0, 0.978
    4096, 2048, 0, 1, 0.951
    4096, 2060, 0, 0, 0.986
    4096, 2060, 0, 1, 0.963
    4096, 2048, 12, 0, 0.971
    4096, 2048, 12, 1, 0.941
    4096, 2060, 12, 0, 0.977
    4096, 2060, 12, 1, 0.949
    8192, 0, 0, 0, 0.85
    8192, 0, 0, 1, 0.845
    8192, 13, 0, 0, 0.937
    8192, 13, 0, 1, 0.939
    8192, 45, 0, 0, 0.932
    8192, 45, 0, 1, 0.927
    8192, 0, 13, 0, 0.621
    8192, 0, 13, 1, 0.62
    8192, 0, 45, 0, 0.53
    8192, 0, 45, 1, 0.516
    8192, 13, 13, 0, 0.664
    8192, 13, 13, 1, 0.659
    8192, 45, 45, 0, 0.593
    8192, 45, 45, 1, 0.575
    8192, 2048, 0, 0, 0.854
    8192, 2048, 0, 1, 0.834
    8192, 2061, 0, 0, 0.863
    8192, 2061, 0, 1, 0.857
    8192, 2048, 13, 0, 0.63
    8192, 2048, 13, 1, 0.629
    8192, 2061, 13, 0, 0.627
    8192, 2061, 13, 1, 0.62

  Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
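Conceptually, the threshold gates memmove's choice between its vectorized loop and its rep movsb path, so doubling it keeps these medium-sized copies on the vector loop. A self-contained sketch with illustrative names: the two helpers stand in for glibc's real assembly variants, the threshold value is illustrative, and the real threshold can also be set at run time via the glibc.cpu.x86_rep_movsb_threshold tunable:

    #include <stddef.h>
    #include <string.h>

    /* Placeholders for glibc's vectorized copy loop and its rep-movsb
       variant; both forward to memmove here so the sketch compiles.  */
    static void *copy_vec (void *d, const void *s, size_t n)
    { return memmove (d, s, n); }
    static void *copy_erms (void *d, const void *s, size_t n)
    { return memmove (d, s, n); }

    static size_t rep_movsb_threshold = 2 * 2048;  /* doubled with ERMS */

    void *
    memmove_dispatch (void *dst, const void *src, size_t len)
    {
      return len >= rep_movsb_threshold
             ? copy_erms (dst, src, len)   /* rep movsb path */
             : copy_vec (dst, src, len);   /* vector loop path */
    }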
* i386: Explain why __HAVE_64B_ATOMICS has to be 0
  Florian Weimer, 2021-11-02 (1 file, -0/+4)

* x86-64: Remove Prefer_AVX2_STRCMP
  H.J. Lu, 2021-11-01 (3 files, -11/+0)
  Remove Prefer_AVX2_STRCMP to enable EVEX strcmp. When comparing 2 32-byte strings, EVEX strcmp has been improved to require 1 load, 1 VPTESTM, 1 VPCMP, 1 KMOVD and 1 INCL instead of 2 loads, 3 VPCMPs, 2 KORDs, 1 KMOVD and 1 TESTL, while AVX2 strcmp requires 1 load, 2 VPCMPEQs, 1 VPMINU, 1 VPMOVMSKB and 1 TESTL. EVEX strcmp is now faster than AVX2 strcmp by up to 40% on Tiger Lake and Ice Lake.

* elf: Remove Intel MPX support (lazy PLT, ld.so profile, and LD_AUDIT)
  Fangrui Song, 2021-10-11 (1 file, -5/+5)
  Intel MPX failed to gain wide adoption and has been deprecated for a while. GCC 9.1 removed Intel MPX support. Linux kernel removed MPX in 2019. This patch removes the support code from the dynamic loader.
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* x86: Modify ENTRY in sysdep.h so that p2align can be specified
  Noah Goldstein, 2021-10-08 (1 file, -2/+5)
  No bug. This change adds a new macro ENTRY_P2ALIGN which takes a second argument, log2 of the desired function alignment. The old ENTRY(name) macro is just ENTRY_P2ALIGN(name, 4) so this doesn't affect any existing functionality.
  Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
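A condensed sketch of the macro relationship described above, simplified from sysdeps/x86/sysdep.h; the real ENTRY also emits CET and profiling bookkeeping:

    /* ENTRY_P2ALIGN takes log2 of the desired alignment; the classic
       ENTRY keeps the historical 16-byte (2^4) alignment.  */
    #define ENTRY_P2ALIGN(name, alignment)          \
      .globl C_SYMBOL_NAME(name);                   \
      .type C_SYMBOL_NAME(name), @function;         \
      .p2align alignment;                           \
      C_SYMBOL_NAME(name):                          \
      cfi_startproc

    #define ENTRY(name) ENTRY_P2ALIGN (name, 4)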
* Initial support for GNU_PROPERTY_1_NEEDED
  H.J. Lu, 2021-10-07 (2 files, -5/+16)
  1. Add GNU_PROPERTY_1_NEEDED:

       #define GNU_PROPERTY_1_NEEDED GNU_PROPERTY_UINT32_OR_LO

     to indicate the needed properties by the object file.
  2. Add GNU_PROPERTY_1_NEEDED_INDIRECT_EXTERN_ACCESS:

       #define GNU_PROPERTY_1_NEEDED_INDIRECT_EXTERN_ACCESS (1U << 0)

     to indicate that the object file requires canonical function pointers and cannot be used with copy relocation.
  3. Scan GNU_PROPERTY_1_NEEDED property and store it in l_1_needed.

  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

* Fix sysdeps/x86/fpu/s_ffma.c for 32-bit FMA processor case
  Joseph Myers, 2021-09-24 (1 file, -2/+6)
  It turns out the __SSE2_MATH__ conditional in sysdeps/x86/fpu/s_ffma.c does not cover all cases where the x86 fenv_private.h macros might manipulate one of the SSE and 387 floating-point state, while the actual fma implementation uses the other. Specifically, in the 32-bit case, with a compiler not defaulting to -mfpmath=sse, but testing on a processor with hardware FMA support, the multiarch fma function implementations will end up using SSE, while the fenv_private.h macros will use the 387 state for double.

  Change the conditional to use the default macros rather than the optimized ones in all cases except when the compiler inlines an fma instruction (in which case, since all those instructions are SSE instructions and -mfpmath=sse must be in effect for them to be inlined, the optimized macros will only use the SSE state and it's OK for them to only use the SSE state).

  Tested for x86_64 and x86. H.J. reports in <https://sourceware.org/pipermail/libc-alpha/2021-September/131367.html> that it fixes the problems he observed.
* Fix ffma use of round-to-odd on x86
  Joseph Myers, 2021-09-23 (1 file, -0/+46)
  On 32-bit x86 with -mfpmath=sse, and on x86_64 with --disable-multi-arch, the tests of ffma and its aliases (fma narrowing from binary64 to binary32) fail. This is probably the issue reported by H.J. in <https://sourceware.org/pipermail/libc-alpha/2021-September/131277.html>.

  The problem is the use of fenv_private.h macros in the round-to-odd implementation. Those macros are set up to manipulate only one of the SSE and 387 floating-point state, whichever is relevant for the type indicated by the suffix on the macro name. But x86 configurations sometimes use the ldbl-96 implementation of binary64 fma (that's where --disable-multi-arch is relevant for x86_64: it causes the ldbl-96 implementation to be used, instead of an IFUNC implementation that falls back to the dbl-64 version), contrary to the expectations of those macros for functions operating on double when __SSE2_MATH__ is defined.

  This can be addressed by using the default versions of those macros (giving x86 its own version of s_ffma.c), as is done for the *f128 macro variants where it depends on the details of how GCC was configured when building libgcc which floating-point state is affected by _Float128 arithmetic.

  The issue only applies when __SSE2_MATH__ is defined, and doesn't apply when __FP_FAST_FMA is defined (because in that case, fma will be inlined by the compiler, meaning it's definitely an SSE operation; for the same reason, this is not an issue for narrowing sqrt, as hardware sqrt is always inlined in that implementation for x86), but in other cases it's safest to use the default versions of the fenv_private.h macros to ensure things work whichever fma implementation is used.

  Tested for x86_64 (with and without --disable-multi-arch) and x86 (with and without -mfpmath=sse).
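For context, the round-to-odd idiom these narrowing functions rely on can be sketched with the public <fenv.h> interfaces. glibc's real code uses the internal fenv_private.h / math-narrow.h macros discussed above; this is a hedged illustration, not the library's implementation:

    #include <fenv.h>
    #include <math.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Round-to-odd narrowing fma: compute toward zero, then force the
       significand LSB to 1 if the result was inexact.  The final
       conversion to float then rounds once, avoiding double rounding.  */
    float
    ffma_round_to_odd (double x, double y, double z)
    {
      fenv_t env;
      feholdexcept (&env);
      fesetround (FE_TOWARDZERO);
      double r = fma (x, y, z);
      bool inexact = fetestexcept (FE_INEXACT) != 0;
      fesetenv (&env);             /* restore the caller's environment */
      if (inexact)
        {
          uint64_t u;
          memcpy (&u, &r, sizeof u);
          u |= 1;                  /* make the significand odd */
          memcpy (&r, &u, sizeof u);
        }
      return (float) r;            /* single, final rounding */
    }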
* Remove "Contributed by" linesSiddhesh Poyarekar2021-09-034-5/+0
| | | | | | | | | | | | | | | | | | | | | | | | | | We stopped adding "Contributed by" or similar lines in sources in 2012 in favour of git logs and keeping the Contributors section of the glibc manual up to date. Removing these lines makes the license header a bit more consistent across files and also removes the possibility of error in attribution when license blocks or files are copied across since the contributed-by lines don't actually reflect reality in those cases. Move all "Contributed by" and similar lines (Written by, Test by, etc.) into a new file CONTRIBUTED-BY to retain record of these contributions. These contributors are also mentioned in manual/contrib.texi, so we just maintain this additional record as a courtesy to the earlier developers. The following scripts were used to filter a list of files to edit in place and to clean up the CONTRIBUTED-BY file respectively. These were not added to the glibc sources because they're not expected to be of any use in future given that this is a one time task: https://gist.github.com/siddhesh/b5ecac94eabfd72ed2916d6d8157e7dc https://gist.github.com/siddhesh/15ea1f5e435ace9774f485030695ee02 Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* configure: Allow LD to be LLD 13.0.0 or above [BZ #26558]Fangrui Song2021-08-311-0/+4
| | | | | | | | | | | | | | | | | | | | | | | | | When using LLD (LLVM linker) as the linker, configure prints a confusing message. *** These critical programs are missing or too old: GNU ld LLD>=13.0.0 can build glibc --enable-static-pie. (8.0.0 needs one workaround for -Wl,-defsym=_begin=0. 9.0.0 works with --disable-static-pie). XFAIL two tests sysdeps/x86/tst-ifunc-isa-* which have the BZ #28154 issue (LLD follows the PowerPC port of GNU ld for ifunc by placing IRELATIVE relocations in .rela.dyn, triggering a glibc ifunc fragility). The set of dynamic symbols is the same with GNU ld and LLD, modulo unused SHN_ABS version node symbols. For comparison, gold does not support --enable-static-pie yet (--no-dynamic-linker is unsupported BZ #22221), yet has 6 failures more than LLD. gold linked libc.so has larger .dynsym differences with GNU ld and LLD (non-default version symbols are changed to default versions by a version script BZ #28196).
* x86: fix Autoconf caching of instruction support checks [BZ #27991]Matt Whitlock2021-08-192-37/+53
| | | | | | | | | | | | | | | | | | | | | | | | The Autoconf documentation for the AC_CACHE_CHECK macro states: The commands-to-set-it must have no side effects except for setting the variable cache-id, see below. However, the tests for support of -msahf and -mmovbe were embedded in the commands-to-set-it for lib_cv_include_x86_isa_level. This had the consequence that libc_cv_have_x86_lahf_sahf and libc_cv_have_x86_movbe were not defined whenever lib_cv_include_x86_isa_level was read from cache. These variables' being undefined meant that their unquoted use in later test expressions led to the 'test' built-in's misparsing its arguments and emitting errors like "test: =: unexpected operator" or "test: =: unary operator expected", depending on the particular shell. This commit refactors the tests for LAHF/SAHF and MOVBE instruction support into their own AC_CACHE_CHECK macro invocations to obey the rule that the commands-to-set-it must have no side effects other than setting the variable named by cache-id. Signed-off-by: Matt Whitlock <sourceware@mattwhitlock.name> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
* x86-64: Add Avoid_Short_Distance_REP_MOVSB
  H.J. Lu, 2021-07-28 (4 files, -0/+20)
  commit 3ec5d83d2a237d39e7fd6ef7a0bc8ac4c171a4a5
  Author: H.J. Lu <hjl.tools@gmail.com>
  Date:   Sat Jan 25 14:19:40 2020 -0800

      x86-64: Avoid rep movsb with short distance [BZ #27130]

  introduced some regressions on Intel processors without Fast Short REP MOV (FSRM). Add Avoid_Short_Distance_REP_MOVSB to avoid rep movsb with short distance only on Intel processors with FSRM. bench-memmove-large on Skylake server shows that cycles of __memmove_evex_unaligned_erms improve for the following data sizes:

                                          before    after    Improvement
    length=4127, align1=3, align2=0:      479.38    349.25       27%
    length=4223, align1=9, align2=5:      405.62    333.25       18%
    length=8223, align1=3, align2=0:      786.12    496.38       37%
    length=8319, align1=9, align2=5:      727.50    501.38       31%
    length=16415, align1=3, align2=0:    1436.88    840.00       41%
    length=16511, align1=9, align2=5:    1375.50    836.38       39%
    length=32799, align1=3, align2=0:    2890.00   1860.12       36%
    length=32895, align1=9, align2=5:    2891.38   1931.88       33%
* x86: Install <bits/platform/x86.h> [BZ #27958]
  H.J. Lu, 2021-07-23 (12 files, -553/+557)
  1. Install <bits/platform/x86.h> for <sys/platform/x86.h>, which includes <bits/platform/x86.h>.
  2. Rename HAS_CPU_FEATURE to CPU_FEATURE_PRESENT, which checks if the processor has the feature.
  3. Rename CPU_FEATURE_USABLE to CPU_FEATURE_ACTIVE, which checks if the feature is active. There may be other preconditions, like sufficient stack space or further setup for AMX, which must be satisfied before the feature can be used.

  This fixes BZ #27958.
  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
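Usage of the renamed macros, which is unchanged apart from the names; the feature names here are real <sys/platform/x86.h> identifiers:

    #include <sys/platform/x86.h>
    #include <stdio.h>

    /* PRESENT: the processor has the feature.  ACTIVE: the feature is
       enabled and safe to use (kernel and glibc preconditions met).  */
    int
    main (void)
    {
      if (CPU_FEATURE_ACTIVE (AVX2))
        puts ("AVX2 is active");
      else if (CPU_FEATURE_PRESENT (AVX2))
        puts ("AVX2 is present but not enabled");
      else
        puts ("no AVX2");
      return 0;
    }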
* Fix build and tests with --disable-tunables
  Siddhesh Poyarekar, 2021-07-23 (1 file, -2/+6)
  Remove unused code and declare __libc_mallopt when !IS_IN (libc) to allow the debug hook to build with --disable-tunables. Also, run tst-ifunc-isa-2* tests only when tunables are enabled since the result depends on it.

  Tested on x86_64.
  Reported-by: Matheus Castanho <msc@linux.ibm.com>
  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* elf: Fix tst-cpu-features-cpuinfo on some AMD systems (BZ #28090)
  Adhemerval Zanella, 2021-07-19 (3 files, -1/+16)
  The SSBD feature is implemented in 2 different ways on AMD processors: newer systems (Zen3) provide AMD_SSBD (function 8000_0008, EBX[24]), while older systems provide AMD_VIRT_SSBD (function 8000_0008, EBX[25]). However, for AMD_VIRT_SSBD the kernel shows both 'ssbd' and 'virt_ssbd' in /proc/cpuinfo, while for AMD_SSBD only 'ssbd' is shown. The test now checks for 'ssbd' if AMD_SSBD is set, and otherwise checks for 'virt_ssbd' if AMD_VIRT_SSBD is set.

  Checked on x86_64-linux-gnu on a Ryzen 9 5900X.
  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* x86: Check RTM_ALWAYS_ABORT for RTM [BZ #28033]
  H.J. Lu, 2021-07-01 (5 files, -6/+11)
  From https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html:

  * Intel TSX will be disabled by default.
  * The processor will force abort all Restricted Transactional Memory (RTM) transactions by default.
  * A new CPUID bit CPUID.07H.0H.EDX[11] (RTM_ALWAYS_ABORT) will be enumerated, which is set to indicate to updated software that the loaded microcode is forcing RTM abort.
  * On processors that enumerate support for RTM, the CPUID enumeration bits for Intel TSX (CPUID.07H.0H.EBX[11] and CPUID.07H.0H.EBX[4]) continue to be set by default after microcode update.
  * Workloads that were benefited from Intel TSX might experience a change in performance.
  * System software may use a new bit in Model-Specific Register (MSR) 0x10F TSX_FORCE_ABORT[TSX_CPUID_CLEAR] functionality to clear the Hardware Lock Elision (HLE) and RTM bits to indicate to software that Intel TSX is disabled.

  1. Add RTM_ALWAYS_ABORT to CPUID features.
  2. Set RTM usable only if RTM_ALWAYS_ABORT isn't set. This skips the string/tst-memchr-rtm etc. testcases on the affected processors, which always fail after a microcode update.
  3. Check the RTM feature, instead of usability, against /proc/cpuinfo.

  This fixes BZ #28033.
* x86: Fix tst-cpu-features-cpuinfo on Ryzen 9 (BZ #27873)
  Adhemerval Zanella, 2021-06-24 (3 files, -4/+34)
  AMD defines different flags for IBPB, IBRS, and STIBP [1], so new x86_64_cpu entries are added and IBRS_IBPB is only tested for Intel. SSBD is also defined and implemented differently on AMD [2], so a new AMD_SSBD flag is added; it should map to the cpuinfo flag 'ssbd' on recent AMD cpus.

  It fixes tst-cpu-features-cpuinfo and tst-cpu-features-cpuinfo-static on recent AMD cpus. Checked on x86_64-linux-gnu on AMD Ryzen 9 5900X.

  [1] https://developer.amd.com/wp-content/resources/Architecture_Guidelines_Update_Indirect_Branch_Control.pdf
  [2] https://bugzilla.kernel.org/show_bug.cgi?id=199889

  Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
* x86: Copy IBT and SHSTK usable only if CET is enabled
  H.J. Lu, 2021-06-23 (1 file, -2/+5)
  IBT and SHSTK usable bits are copied from CPUID feature bits and later cleared if the kernel doesn't support CET. Copy IBT and SHSTK usable only if CET is enabled so that they aren't set on CET-capable processors with non-CET enabled glibc.

* dlfcn: Cleanups after -ldl is no longer required
  Florian Weimer, 2021-06-03 (1 file, -11/+2)
  This commit removes the ELF constructor and internal variables from dlfcn/dlfcn.c. The file now serves the same purpose as nptl/libpthread-compat.c, so it is renamed to dlfcn/libdl-compat.c. The use of libdl-shared-only-routines ensures that libdl.a is empty. This commit adjusts the test suite not to use $(libdl). The libdl.so symbolic link is no longer installed.
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

* Properly check stack alignment [BZ #27901]
  H.J. Lu, 2021-05-24 (1 file, -0/+28)
  1. Replace

       if ((((uintptr_t) &_d) & (__alignof (double) - 1)) != 0)

     which may be optimized out by the compiler, with

       int
       __attribute__ ((weak, noclone, noinline))
       is_aligned (void *p, int align)
       {
         return (((uintptr_t) p) & (align - 1)) != 0;
       }

  2. Add TEST_STACK_ALIGN_INIT to TEST_STACK_ALIGN.
  3. Add a common TEST_STACK_ALIGN_INIT to check 16-byte stack alignment for both i386 and x86-64.
  4. Update powerpc to use TEST_STACK_ALIGN_INIT.

  Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* x86: Set rep_movsb_threshold to 2112 on processors with FSRM
  H.J. Lu, 2021-05-03 (1 file, -0/+4)
  The glibc memcpy benchmark on Intel Core i7-1065G7 (Ice Lake) showed that REP MOVSB became faster after 2112 bytes:

                                        Vector Move   REP MOVSB
    length=2112, align1=0, align2=0:      24.20         24.40
    length=2112, align1=1, align2=0:      26.07         23.13
    length=2112, align1=0, align2=1:      27.18         28.13
    length=2112, align1=1, align2=1:      26.23         25.16
    length=2176, align1=0, align2=0:      23.18         22.52
    length=2176, align1=2, align2=0:      25.45         22.52
    length=2176, align1=0, align2=2:      27.14         27.82
    length=2176, align1=2, align2=2:      22.73         25.56
    length=2240, align1=0, align2=0:      24.62         24.25
    length=2240, align1=3, align2=0:      29.77         27.15
    length=2240, align1=0, align2=3:      35.55         29.93
    length=2240, align1=3, align2=3:      34.49         25.15
    length=2304, align1=0, align2=0:      34.75         26.64
    length=2304, align1=4, align2=0:      32.09         22.63
    length=2304, align1=0, align2=4:      28.43         31.24

  Use REP MOVSB for data size > 2112 bytes in memcpy on processors with fast short REP MOVSB (FSRM).

  * sysdeps/x86/dl-cacheinfo.h (dl_init_cacheinfo): Set rep_movsb_threshold to 2112 on processors with fast short REP MOVSB (FSRM).
* x86: tst-cpu-features-supports.c: Update AMX check
  H.J. Lu, 2021-04-22 (1 file, -3/+3)
  Pass "amx-bf16", "amx-int8" and "amx-tile", instead of "amx_bf16", "amx_int8" and "amx_tile", to __builtin_cpu_supports for GCC 11.
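The dashed spellings can be checked directly; a small sketch (requires GCC 11 or newer):

    #include <stdio.h>

    /* GCC 11 spells the AMX features with dashes, exactly as the test
       now passes them to __builtin_cpu_supports.  */
    int
    main (void)
    {
      printf ("amx-tile: %d\n", __builtin_cpu_supports ("amx-tile") != 0);
      printf ("amx-int8: %d\n", __builtin_cpu_supports ("amx-int8") != 0);
      printf ("amx-bf16: %d\n", __builtin_cpu_supports ("amx-bf16") != 0);
      return 0;
    }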
* nptl: Remove longjmp, siglongjmp from libpthread
  Florian Weimer, 2021-04-21 (1 file, -71/+0)
  The definitions in libc are sufficient, the forwarders are no longer needed. The symbols have been moved using scripts/move-symbol-to-libc.py. s390-linux-gnu and s390x-linux-gnu need a new version placeholder to keep the GLIBC_2.19 symbol version in libpthread.

  Tested on i386-linux-gnu, powerpc64le-linux-gnu, s390x-linux-gnu, x86_64-linux-gnu. Built with build-many-glibcs.py.
  Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

* Move __isnanf128 to libc.so
  Siddhesh Poyarekar, 2021-03-30 (1 file, -0/+1)
  All of the isnan functions are in libc.so due to printf_fp, so move __isnanf128 there too for consistency.
  Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@ascii.art.br>
  Reviewed-by: Florian Weimer <fweimer@redhat.com>

* x86: Add string/memory function tests in RTM region
  H.J. Lu, 2021-03-29 (12 files, -0/+618)
  At function exit, AVX optimized string/memory functions have VZEROUPPER which triggers RTM abort. When such functions are called inside a transactionally executing RTM region, RTM abort causes severe performance degradation. Add tests to verify that string/memory functions won't cause RTM abort in RTM region.

* x86: Set Prefer_No_VZEROUPPER and add Prefer_AVX2_STRCMP
  H.J. Lu, 2021-03-29 (3 files, -2/+21)
  1. Set Prefer_No_VZEROUPPER if RTM is usable to avoid RTM abort triggered by VZEROUPPER inside a transactionally executing RTM region.
  2. Since to compare 2 32-byte strings, 256-bit EVEX strcmp requires 2 loads, 3 VPCMPs and 2 KORDs while AVX2 strcmp requires 1 load, 2 VPCMPEQs, 1 VPMINU and 1 VPMOVMSKB, AVX2 strcmp is faster than EVEX strcmp. Add Prefer_AVX2_STRCMP to prefer AVX2 strcmp family functions.
* x86: Properly disable XSAVE related features [BZ #27605]
  H.J. Lu, 2021-03-29 (2 files, -0/+56)
  1. Support GLIBC_TUNABLES=glibc.cpu.hwcaps=-XSAVE.
  2. Disable all features which depend on XSAVE:
     a. If OSXSAVE is disabled by glibc tunables. Or
     b. If both XSAVE and XSAVEC aren't usable.
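The effect of item 1 can be observed from user code; a hedged sketch (CPU_FEATURE_ACTIVE is the current <sys/platform/x86.h> name; at the time of this commit the macro was still called CPU_FEATURE_USABLE):

    #include <sys/platform/x86.h>
    #include <stdio.h>

    /* Run once normally, then with
         GLIBC_TUNABLES=glibc.cpu.hwcaps=-XSAVE
       in the environment: XSAVE-dependent features such as AVX and AVX2
       should stop being reported as active.  */
    int
    main (void)
    {
      printf ("XSAVE active: %d\n", CPU_FEATURE_ACTIVE (XSAVE) != 0);
      printf ("AVX   active: %d\n", CPU_FEATURE_ACTIVE (AVX) != 0);
      printf ("AVX2  active: %d\n", CPU_FEATURE_ACTIVE (AVX2) != 0);
      return 0;
    }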