path: root/sysdeps/x86/cpu-features.h

* Update copyright dates with scripts/update-copyrights. (Joseph Myers, 2018-01-01, 1 file, -1/+1)
  * All files with FSF copyright notices: Update copyright dates using
  scripts/update-copyrights.
  * locale/programs/charmap-kw.h: Regenerated.
  * locale/programs/locfile-kw.h: Likewise.

* x86-64: Use fxsave/xsave/xsavec in _dl_runtime_resolve [BZ #21265] (H.J. Lu, 2017-10-20, 1 file, -7/+27)
  In _dl_runtime_resolve, use fxsave/xsave/xsavec to preserve all vector,
  mask and bound registers.  It simplifies _dl_runtime_resolve and
  supports different calling conventions.  ld.so code size is reduced by
  more than 1 KB.  However, using fxsave/xsave/xsavec takes a few more
  cycles than saving and restoring vector and bound registers
  individually.

  Latency for _dl_runtime_resolve to look up the function, foo, from one
  shared library plus libc.so:

                                 Before    After    Change
  Westmere (SSE)/fxsave             345      866      151%
  IvyBridge (AVX)/xsave             420      643       53%
  Haswell (AVX)/xsave               713     1252       75%
  Skylake (AVX+MPX)/xsavec          559      719       28%
  Skylake (AVX512+MPX)/xsavec       145      272       87%
  Ryzen (AVX)/xsavec                280      553       97%

  This is the worst case, where the portion of time spent saving and
  restoring registers is larger than in the majority of cases.  With the
  smaller _dl_runtime_resolve code size, the overall performance impact
  is negligible.

  On IvyBridge, differences in the build and test time of binutils with
  lazy-binding GCC and binutils are noise.  On Westmere, differences in
  the bootstrap and "make check" time of GCC 7 with lazy-binding GCC and
  binutils are also noise.

  [BZ #21265]
  * sysdeps/x86/cpu-features-offsets.sym (XSAVE_STATE_SIZE_OFFSET):
  New.
  * sysdeps/x86/cpu-features.c: Include <libc-pointer-arith.h>.
  (get_common_indeces): Set xsave_state_size, xsave_state_full_size
  and bit_arch_XSAVEC_Usable if needed.
  (init_cpu_features): Remove bit_arch_Use_dl_runtime_resolve_slow
  and bit_arch_Use_dl_runtime_resolve_opt.
  * sysdeps/x86/cpu-features.h (bit_arch_Use_dl_runtime_resolve_opt):
  Removed.
  (bit_arch_Use_dl_runtime_resolve_slow): Likewise.
  (bit_arch_Prefer_No_AVX512): Updated.
  (bit_arch_MathVec_Prefer_No_AVX512): Likewise.
  (bit_arch_XSAVEC_Usable): New.
  (STATE_SAVE_OFFSET): Likewise.
  (STATE_SAVE_MASK): Likewise.
  [__ASSEMBLER__]: Include <cpu-features-offsets.h>.
  (cpu_features): Add xsave_state_size and xsave_state_full_size.
  (index_arch_Use_dl_runtime_resolve_opt): Removed.
  (index_arch_Use_dl_runtime_resolve_slow): Likewise.
  (index_arch_XSAVEC_Usable): New.
  * sysdeps/x86/cpu-tunables.c (TUNABLE_CALLBACK (set_hwcaps)):
  Support XSAVEC_Usable.  Remove Use_dl_runtime_resolve_slow.
  * sysdeps/x86_64/Makefile (tst-x86_64-1-ENV): New if tunables
  is enabled.
  * sysdeps/x86_64/dl-machine.h (elf_machine_runtime_setup): Replace
  _dl_runtime_resolve_sse, _dl_runtime_resolve_avx,
  _dl_runtime_resolve_avx_slow, _dl_runtime_resolve_avx_opt,
  _dl_runtime_resolve_avx512 and _dl_runtime_resolve_avx512_opt with
  _dl_runtime_resolve_fxsave, _dl_runtime_resolve_xsave and
  _dl_runtime_resolve_xsavec.
  * sysdeps/x86_64/dl-trampoline.S (DL_RUNTIME_UNALIGNED_VEC_SIZE):
  Removed.
  (DL_RUNTIME_RESOLVE_REALIGN_STACK): Check STATE_SAVE_ALIGNMENT
  instead of VEC_SIZE.
  (REGISTER_SAVE_BND0): Removed.
  (REGISTER_SAVE_BND1): Likewise.
  (REGISTER_SAVE_BND3): Likewise.
  (REGISTER_SAVE_RAX): Always defined to 0.
  (VMOV): Removed.
  (_dl_runtime_resolve_avx): Likewise.
  (_dl_runtime_resolve_avx_slow): Likewise.
  (_dl_runtime_resolve_avx_opt): Likewise.
  (_dl_runtime_resolve_avx512): Likewise.
  (_dl_runtime_resolve_avx512_opt): Likewise.
  (_dl_runtime_resolve_sse): Likewise.
  (_dl_runtime_resolve_sse_vex): Likewise.
  (USE_FXSAVE): New.
  (_dl_runtime_resolve_fxsave): Likewise.
  (USE_XSAVE): Likewise.
  (_dl_runtime_resolve_xsave): Likewise.
  (USE_XSAVEC): Likewise.
  (_dl_runtime_resolve_xsavec): Likewise.
  * sysdeps/x86_64/dl-trampoline.h (_dl_runtime_resolve_avx512):
  Removed.
  (_dl_runtime_resolve_avx512_opt): Likewise.
  (_dl_runtime_resolve_avx): Likewise.
  (_dl_runtime_resolve_avx_opt): Likewise.
  (_dl_runtime_resolve_sse): Likewise.
  (_dl_runtime_resolve_sse_vex): Likewise.
  (_dl_runtime_resolve_fxsave): New.
  (_dl_runtime_resolve_xsave): Likewise.
  (_dl_runtime_resolve_xsavec): Likewise.
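
  As a rough illustration of the sizing logic this commit adds to
  get_common_indeces, the standalone sketch below queries the XSAVE area
  sizes and the XSAVEC feature bit from CPUID leaf 0xD.  It assumes a
  CPU with XSAVE support; the variable names are illustrative, not
  glibc's, and this is a sketch of the probe, not the ld.so code path.

      /* Probe XSAVE/XSAVEC state sizes (sketch; assumed names).  */
      #include <cpuid.h>
      #include <stdio.h>

      int
      main (void)
      {
        unsigned int eax, ebx, ecx, edx;

        /* Leaf 0xD, sub-leaf 0: EBX = size of the XSAVE area for the
           state components currently enabled in XCR0.  */
        __cpuid_count (0xd, 0, eax, ebx, ecx, edx);
        unsigned int xsave_state_size = ebx;

        /* Leaf 0xD, sub-leaf 1: EAX bit 1 = XSAVEC supported;
           EBX = size of the compacted area written by XSAVEC.  */
        __cpuid_count (0xd, 1, eax, ebx, ecx, edx);
        int xsavec_usable = (eax >> 1) & 1;
        unsigned int xsavec_state_size = ebx;

        printf ("xsave: %u bytes, xsavec usable: %d, xsavec: %u bytes\n",
                xsave_state_size, xsavec_usable, xsavec_state_size);
        return 0;
      }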

* x86: Add MathVec_Prefer_No_AVX512 to cpu-features [BZ #21967] (H.J. Lu, 2017-09-12, 1 file, -0/+2)
  AVX512 functions in mathvec are used on machines with AVX512.  An
  AVX2 wrapper is also provided and it can be used when the AVX512
  version isn't profitable.  MathVec_Prefer_No_AVX512 is added to
  cpu-features.  If glibc.tune.hwcaps=MathVec_Prefer_No_AVX512 is set
  in the GLIBC_TUNABLES environment variable, the AVX2 wrapper will be
  used.

  Tested on x86-64 machines with and without AVX512.  Also verified
  glibc.tune.hwcaps=MathVec_Prefer_No_AVX512 on an AVX512 machine.

  [BZ #21967]
  * sysdeps/x86/cpu-features.h (bit_arch_MathVec_Prefer_No_AVX512):
  New.
  (index_arch_MathVec_Prefer_No_AVX512): Likewise.
  * sysdeps/x86/cpu-tunables.c (TUNABLE_CALLBACK (set_hwcaps)):
  Handle MathVec_Prefer_No_AVX512.
  * sysdeps/x86_64/fpu/multiarch/ifunc-mathvec-avx512.h
  (IFUNC_SELECTOR): Return AVX2 version if MathVec_Prefer_No_AVX512
  is set.
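
  The shape of such a selector can be shown with a minimal GNU IFUNC
  sketch.  Everything here is assumed for illustration (my_cos, the
  stub bodies, the hard-coded flags); glibc's real selector reads
  cpu-features and the tunable instead.

      #include <stdio.h>

      static double cos_avx2 (double x)   { return x; }  /* stub */
      static double cos_avx512 (double x) { return x; }  /* stub */

      /* Stand-ins for cpu-features/tunable state (assumptions).  */
      static int avx512_usable = 1;
      static int mathvec_prefer_no_avx512 = 1;

      /* Resolver: prefer the AVX2 variant when the tunable is set.  */
      static double (*resolve_cos (void)) (double)
      {
        if (avx512_usable && !mathvec_prefer_no_avx512)
          return cos_avx512;
        return cos_avx2;
      }

      double my_cos (double) __attribute__ ((ifunc ("resolve_cos")));

      int
      main (void)
      {
        printf ("%g\n", my_cos (1.0));  /* dispatched at load time */
        return 0;
      }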

* x86: Remove assembly versions of index_cpu_*/index_arch_* (H.J. Lu, 2017-08-25, 1 file, -42/+1)
  Since assembly versions of HAS_CPU_FEATURE and HAS_ARCH_FEATURE have
  been removed, assembly versions of index_cpu_* and index_arch_* can
  also be removed.

  Tested on i686 and x86-64 with and without --disable-multi-arch.

  * sysdeps/x86/cpu-features.h [__ASSEMBLER__] (index_cpu_*,
  index_arch_*): Removed.

* x86: Add IBT/SHSTK bits to cpu-features (H.J. Lu, 2017-08-14, 1 file, -0/+8)
  Add IBT/SHSTK bits to cpu-features for Shadow Stack in Intel
  Control-flow Enforcement Technology (CET) instructions:

  https://software.intel.com/sites/default/files/managed/4d/2a/control-flow-enforcement-technology-preview.pdf

  * sysdeps/x86/cpu-features.h (bit_cpu_IBT): New.
  (bit_cpu_SHSTK): Likewise.
  (index_cpu_IBT): Likewise.
  (index_cpu_SHSTK): Likewise.
  (reg_IBT): Likewise.
  (reg_SHSTK): Likewise.
  * sysdeps/x86/cpu-tunables.c (TUNABLE_CALLBACK (set_hwcaps)):
  Handle index_cpu_IBT and index_cpu_SHSTK.
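
  For reference, the CET bits these macros describe can be probed
  directly.  A minimal sketch (not glibc code), using the bit positions
  documented in the Intel SDM: CPUID.(EAX=7,ECX=0):ECX[7] is SHSTK and
  CPUID.(EAX=7,ECX=0):EDX[20] is IBT.

      #include <cpuid.h>
      #include <stdio.h>

      int
      main (void)
      {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid_count (7, 0, &eax, &ebx, &ecx, &edx))
          return 1;  /* CPUID leaf 7 not supported */
        printf ("SHSTK: %u\n", (ecx >> 7) & 1);
        printf ("IBT:   %u\n", (edx >> 20) & 1);
        return 0;
      }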

* x86: Remove assembly versions of HAS_CPU_FEATURE/HAS_ARCH_FEATURE (H.J. Lu, 2017-08-04, 1 file, -57/+0)
  Since all x86 IFUNC selectors are implemented in C, assembly versions
  of HAS_CPU_FEATURE and HAS_ARCH_FEATURE can be removed.

  * sysdeps/x86/cpu-features.h [__ASSEMBLER__]
  (LOAD_RTLD_GLOBAL_RO_RDX, HAS_FEATURE, LOAD_FUNC_GOT_EAX,
  HAS_CPU_FEATURE, HAS_ARCH_FEATURE): Removed.

* x86: Rename glibc.tune.ifunc to glibc.tune.hwcaps (H.J. Lu, 2017-06-21, 1 file, -3/+3)
  Rename glibc.tune.ifunc to glibc.tune.hwcaps and move it to
  sysdeps/x86/dl-tunables.list since it is x86 specific.  Also change
  the type of data_cache_size, shared_cache_size and
  non_temporal_threshold to unsigned long int to match size_t.  Remove
  usage of DEFAULT_STRLEN from cpu-tunables.c.

  * elf/dl-tunables.list (glibc.tune.ifunc): Removed.
  * sysdeps/x86/dl-tunables.list (glibc.tune.hwcaps): New.  Remove
  security_level from all fields.
  * manual/tunables.texi: Replace ifunc with hwcaps.
  * sysdeps/x86/cpu-features.c (TUNABLE_CALLBACK (set_ifunc)):
  Renamed to ...
  (TUNABLE_CALLBACK (set_hwcaps)): This.
  (init_cpu_features): Updated.
  * sysdeps/x86/cpu-features.h (cpu_features): Change type of
  data_cache_size, shared_cache_size and non_temporal_threshold to
  unsigned long int.
  * sysdeps/x86/cpu-tunables.c (DEFAULT_STRLEN): Removed.
  (TUNABLE_CALLBACK (set_ifunc)): Renamed to ...
  (TUNABLE_CALLBACK (set_hwcaps)): This.  Update comments.  Don't
  use DEFAULT_STRLEN.

* tunables: Add IFUNC selection and cache sizes (H.J. Lu, 2017-06-20, 1 file, -0/+8)
  The current IFUNC selection is based on microbenchmarks in glibc.  It
  should give the best performance for most workloads.  But other
  choices may have better performance for a particular workload or on
  hardware which wasn't available when the selection was made.

  The environment variable

  GLIBC_TUNABLES=glibc.tune.ifunc=-xxx,yyy,-zzz,...

  can be used to enable CPU/ARCH feature yyy and disable CPU/ARCH
  features xxx and zzz, where the feature name is case-sensitive and
  has to match the ones in cpu-features.h.  It can be used by glibc
  developers to override the IFUNC selection to tune for a new
  processor or improve performance for a particular workload.  It isn't
  intended for normal end users.

  NOTE: the IFUNC selection may change over time.  Please check all
  multiarch implementations when experimenting.

  Also, GLIBC_TUNABLES=glibc.tune.x86_non_temporal_threshold=NUMBER is
  provided to set the threshold for using non-temporal stores to
  NUMBER, GLIBC_TUNABLES=glibc.tune.x86_data_cache_size=NUMBER to set
  the data cache size, and
  GLIBC_TUNABLES=glibc.tune.x86_shared_cache_size=NUMBER to set the
  shared cache size.

  * elf/dl-tunables.list (tune): Add ifunc,
  x86_non_temporal_threshold, x86_data_cache_size and
  x86_shared_cache_size.
  * manual/tunables.texi: Document glibc.tune.ifunc,
  glibc.tune.x86_data_cache_size, glibc.tune.x86_shared_cache_size
  and glibc.tune.x86_non_temporal_threshold.
  * sysdeps/unix/sysv/linux/x86/dl-sysdep.c: New file.
  * sysdeps/x86/cpu-tunables.c: Likewise.
  * sysdeps/x86/cacheinfo.c (init_cacheinfo): Check and get data
  cache size, shared cache size and non temporal threshold from
  cpu_features.
  * sysdeps/x86/cpu-features.c [HAVE_TUNABLES] (TUNABLE_NAMESPACE):
  New.
  [HAVE_TUNABLES] Include <unistd.h>.
  [HAVE_TUNABLES] Include <elf/dl-tunables.h>.
  [HAVE_TUNABLES] (TUNABLE_CALLBACK (set_ifunc)): Likewise.
  [HAVE_TUNABLES] (init_cpu_features): Use TUNABLE_GET to set IFUNC
  selection, data cache size, shared cache size and non temporal
  threshold.
  * sysdeps/x86/cpu-features.h (cpu_features): Add data_cache_size,
  shared_cache_size and non_temporal_threshold.
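
  The "-" prefix convention is easy to show in isolation.  This toy
  parser (illustrative only, not the cpu-tunables.c logic) walks a
  comma-separated hwcap list and reports whether each named feature is
  being enabled or disabled:

      #include <stdio.h>
      #include <string.h>

      int
      main (void)
      {
        char list[] = "-AVX2,ERMS,-AVX512F";  /* example input */
        for (char *p = strtok (list, ","); p != NULL;
             p = strtok (NULL, ","))
          {
            int enable = (*p != '-');
            printf ("%s %s\n", enable ? "enable" : "disable",
                    enable ? p : p + 1);
          }
        return 0;
      }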

* x86-64: Optimize memcmp/wmemcmp with AVX2 and MOVBE (H.J. Lu, 2017-06-05, 1 file, -0/+1)
  Optimize x86-64 memcmp/wmemcmp with AVX2.  It uses vector compares as
  much as possible.  It is as fast as SSE4 memcmp for size <= 16 bytes
  and up to 2X faster for size > 16 bytes on Haswell and Skylake.
  Select AVX2 memcmp/wmemcmp on AVX2 machines where vzeroupper is
  preferred and AVX unaligned load is fast.

  NB: It uses TZCNT instead of BSF since TZCNT produces the same result
  as BSF for non-zero input.  TZCNT is faster than BSF and is executed
  as BSF if the machine doesn't support TZCNT.

  Key features:

  1. For sizes from 2 to 7 bytes, load as big endian with movbe and
     bswap to avoid branches.
  2. Use overlapping compares to avoid branches.
  3. Use vector compares when size >= 4 bytes for memcmp or size >= 8
     bytes for wmemcmp.
  4. If size is 8 * VEC_SIZE or less, unroll the loop.
  5. Compare 4 * VEC_SIZE at a time with the aligned first memory area.
  6. Use 2 vector compares when size is 2 * VEC_SIZE or less.
  7. Use 4 vector compares when size is 4 * VEC_SIZE or less.
  8. Use 8 vector compares when size is 8 * VEC_SIZE or less.

  * sysdeps/x86/cpu-features.h (index_cpu_MOVBE): New.
  * sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
  memcmp-avx2 and wmemcmp-avx2.
  * sysdeps/x86_64/multiarch/ifunc-impl-list.c
  (__libc_ifunc_impl_list): Test __memcmp_avx2 and __wmemcmp_avx2.
  * sysdeps/x86_64/multiarch/memcmp-avx2.S: New file.
  * sysdeps/x86_64/multiarch/wmemcmp-avx2.S: Likewise.
  * sysdeps/x86_64/multiarch/memcmp.S: Use __memcmp_avx2 on AVX2
  machines if AVX unaligned load is fast and vzeroupper is preferred.
  * sysdeps/x86_64/multiarch/wmemcmp.S: Use __wmemcmp_avx2 on AVX2
  machines if AVX unaligned load is fast and vzeroupper is preferred.
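
  Point 1 above, the "load as big endian" trick, works because loading
  both inputs byte-swapped lets a single unsigned compare order them.
  A minimal C rendering for a fixed 4-byte compare (illustrative; the
  real code does this with movbe/bswap in assembly, plus overlapping
  loads for lengths 2 to 7):

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      static int
      memcmp4 (const void *a, const void *b)
      {
        uint32_t x, y;
        memcpy (&x, a, 4);
        memcpy (&y, b, 4);
        x = __builtin_bswap32 (x);  /* big-endian view; movbe/bswap */
        y = __builtin_bswap32 (y);
        return (x > y) - (x < y);   /* branchless sign */
      }

      int
      main (void)
      {
        printf ("%d\n", memcmp4 ("abcd", "abce"));  /* < 0, like memcmp */
        return 0;
      }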

* x86: Set dl_platform and dl_hwcap from CPU features [BZ #21391] (H.J. Lu, 2017-05-03, 1 file, -0/+15)
  dl_platform and dl_hwcap are set from AT_PLATFORM and AT_HWCAP very
  early during startup.  They are used by the dynamic linker to
  determine the platform and build an array of hardware capability
  names, which are added to the search path when loading shared
  objects.  dl_platform and dl_hwcap are unused on x86-64.  On i386,
  the i386, i486, i586 and i686 platforms were supported and only the
  SSE2 capability was used.

  On x86, usage of AT_PLATFORM and AT_HWCAP to determine platform and
  processor capabilities is obsolete since all information is available
  in dl_x86_cpu_features.

  This patch sets dl_platform and dl_hwcap from dl_x86_cpu_features in
  the dynamic linker.  On i386, the available platforms are changed to
  i586 and i686 since i386 has been deprecated.  On x86-64, the
  available platforms are haswell, which is for Haswell class
  processors with BMI1, BMI2, LZCNT, MOVBE, POPCNT, AVX2 and FMA, and
  xeon_phi, which is for Xeon Phi class processors with AVX512F,
  AVX512CD, AVX512ER and AVX512PF.  A capability, avx512_1, is also
  added to x86-64 for AVX512 ISAs: AVX512F, AVX512CD, AVX512BW,
  AVX512DQ and AVX512VL.

  [BZ #21391]
  * sysdeps/i386/dl-machine.h (dl_platform_init) [IS_IN (rtld)]:
  Only call init_cpu_features.
  [!IS_IN (rtld)]: Only set GLRO(dl_platform) to NULL if needed.
  * sysdeps/x86_64/dl-machine.h (dl_platform_init): Likewise.
  * sysdeps/i386/dl-procinfo.h: Removed.
  * sysdeps/unix/sysv/linux/i386/dl-procinfo.h: Don't include
  <sysdeps/i386/dl-procinfo.h> nor <ldsodefs.h>.  Include
  <sysdeps/x86/dl-procinfo.h>.
  (_dl_procinfo): Replace _DL_HWCAP_COUNT with 32.
  * sysdeps/unix/sysv/linux/x86_64/dl-procinfo.h
  [!IS_IN (ldconfig)]: Include <sysdeps/x86/dl-procinfo.h> instead
  of <sysdeps/generic/dl-procinfo.h>.
  * sysdeps/x86/cpu-features.c: Include <dl-hwcap.h>.
  (init_cpu_features): Set dl_platform, dl_hwcap and dl_hwcap_mask.
  * sysdeps/x86/cpu-features.h (bit_cpu_LZCNT): New.
  (bit_cpu_MOVBE): Likewise.
  (bit_cpu_BMI1): Likewise.
  (bit_cpu_BMI2): Likewise.
  (index_cpu_BMI1): Likewise.
  (index_cpu_BMI2): Likewise.
  (index_cpu_LZCNT): Likewise.
  (index_cpu_MOVBE): Likewise.
  (index_cpu_POPCNT): Likewise.
  (reg_BMI1): Likewise.
  (reg_BMI2): Likewise.
  (reg_LZCNT): Likewise.
  (reg_MOVBE): Likewise.
  (reg_POPCNT): Likewise.
  * sysdeps/x86/dl-hwcap.h: New file.
  * sysdeps/x86/dl-procinfo.h: Likewise.
  * sysdeps/x86/dl-procinfo.c (_dl_x86_hwcap_flags): New.
  (_dl_x86_platforms): Likewise.

* x86: Use AVX2 memcpy/memset on Skylake server [BZ #21396] (H.J. Lu, 2017-04-18, 1 file, -0/+3)
  On Skylake server, AVX512 load/store instructions in memcpy/memset
  may lead to lower CPU turbo frequency in certain situations.  Use of
  AVX2 in memcpy/memset has been observed to improve overall
  performance in many workloads due to the higher frequency.  Since
  AVX512ER is unique to Xeon Phi, this patch sets Prefer_No_AVX512 if
  AVX512ER isn't available so that AVX2 versions of memcpy/memset are
  used on Skylake server.

  [BZ #21396]
  * sysdeps/x86/cpu-features.c (init_cpu_features): Set
  Prefer_No_AVX512 if AVX512ER isn't available.
  * sysdeps/x86/cpu-features.h (bit_arch_Prefer_No_AVX512): New.
  (index_arch_Prefer_No_AVX512): Likewise.
  * sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Don't use
  AVX512 version if Prefer_No_AVX512 is set.
  * sysdeps/x86_64/multiarch/memcpy_chk.S (__memcpy_chk): Likewise.
  * sysdeps/x86_64/multiarch/memmove.S (__libc_memmove): Likewise.
  * sysdeps/x86_64/multiarch/memmove_chk.S (__memmove_chk): Likewise.
  * sysdeps/x86_64/multiarch/mempcpy.S (__mempcpy): Likewise.
  * sysdeps/x86_64/multiarch/mempcpy_chk.S (__mempcpy_chk): Likewise.
  * sysdeps/x86_64/multiarch/memset.S (memset): Likewise.
  * sysdeps/x86_64/multiarch/memset_chk.S (__memset_chk): Likewise.

* x86: Set Prefer_No_VZEROUPPER if AVX512ER is available (H.J. Lu, 2017-04-18, 1 file, -0/+15)
  AVX512ER won't be implemented in any Xeon processors and will be in
  all Xeon Phi processors.  Don't check the CPU model number when
  setting Prefer_No_VZEROUPPER for Xeon Phi.  Instead, set
  Prefer_No_VZEROUPPER if AVX512ER is available.  It works with current
  and future Xeon Phi and non-Xeon Phi processors.

  * sysdeps/x86/cpu-features.c (init_cpu_features): Set
  Prefer_No_VZEROUPPER if AVX512ER is available.
  * sysdeps/x86/cpu-features.h (bit_cpu_AVX512PF): New.
  (bit_cpu_AVX512ER): Likewise.
  (bit_cpu_AVX512CD): Likewise.
  (bit_cpu_AVX512BW): Likewise.
  (bit_cpu_AVX512VL): Likewise.
  (index_cpu_AVX512PF): Likewise.
  (index_cpu_AVX512ER): Likewise.
  (index_cpu_AVX512CD): Likewise.
  (index_cpu_AVX512BW): Likewise.
  (index_cpu_AVX512VL): Likewise.
  (reg_AVX512PF): Likewise.
  (reg_AVX512ER): Likewise.
  (reg_AVX512CD): Likewise.
  (reg_AVX512BW): Likewise.
  (reg_AVX512VL): Likewise.

* Check if SSE is available with HAS_CPU_FEATURE (H.J. Lu, 2017-04-07, 1 file, -0/+4)
  Similar to other CPU feature checks, check if SSE is available with
  HAS_CPU_FEATURE.

  * sysdeps/i386/fpu/fclrexcpt.c (__feclearexcept): Use
  HAS_CPU_FEATURE to check for SSE.
  * sysdeps/i386/fpu/fedisblxcpt.c (fedisableexcept): Likewise.
  * sysdeps/i386/fpu/feenablxcpt.c (feenableexcept): Likewise.
  * sysdeps/i386/fpu/fegetenv.c (__fegetenv): Likewise.
  * sysdeps/i386/fpu/fegetmode.c (fegetmode): Likewise.
  * sysdeps/i386/fpu/feholdexcpt.c (__feholdexcept): Likewise.
  * sysdeps/i386/fpu/fesetenv.c (__fesetenv): Likewise.
  * sysdeps/i386/fpu/fesetmode.c (fesetmode): Likewise.
  * sysdeps/i386/fpu/fesetround.c (__fesetround): Likewise.
  * sysdeps/i386/fpu/feupdateenv.c (__feupdateenv): Likewise.
  * sysdeps/i386/fpu/fgetexcptflg.c (__fegetexceptflag): Likewise.
  * sysdeps/i386/fpu/fsetexcptflg.c (__fesetexceptflag): Likewise.
  * sysdeps/i386/fpu/ftestexcept.c (fetestexcept): Likewise.
  * sysdeps/i386/setfpucw.c (__setfpucw): Likewise.
  * sysdeps/x86/cpu-features.h (bit_cpu_SSE): New.
  (index_cpu_SSE): Likewise.
  (reg_SSE): Likewise.

* Update copyright dates with scripts/update-copyrights. (Joseph Myers, 2017-01-01, 1 file, -1/+1)

* X86-64: Add _dl_runtime_resolve_avx[512]_{opt|slow} [BZ #20508] (H.J. Lu, 2016-09-06, 1 file, -0/+6)
  There is a transition penalty when SSE instructions are mixed with
  256-bit AVX or 512-bit AVX512 load instructions.  Since
  _dl_runtime_resolve_avx and _dl_runtime_profile_avx512 save/restore
  256-bit YMM/512-bit ZMM registers, there is a transition penalty when
  SSE instructions are used with lazy binding on AVX and AVX512
  processors.

  To avoid the SSE transition penalty, if only the lower 128 bits of
  the first 8 vector registers are non-zero, we can preserve the %xmm0
  - %xmm7 registers with the zero upper bits.

  For AVX and AVX512 processors which support XGETBV with ECX == 1, we
  can use XGETBV with ECX == 1 to check if the upper 128 bits of YMM
  registers or the upper 256 bits of ZMM registers are zero.  We can
  restore only the non-zero portion of vector registers with AVX/AVX512
  load instructions, which will zero-extend the upper bits of vector
  registers.

  This patch adds _dl_runtime_resolve_sse_vex, which saves and restores
  XMM registers with 128-bit AVX store/load instructions.  It is used
  to preserve YMM/ZMM registers when only the lower 128 bits are
  non-zero.  _dl_runtime_resolve_avx_opt and
  _dl_runtime_resolve_avx512_opt are added and used on AVX/AVX512
  processors supporting XGETBV with ECX == 1 so that we store and load
  only the non-zero portion of vector registers.  This avoids the SSE
  transition penalty caused by _dl_runtime_resolve_avx and
  _dl_runtime_profile_avx512 when only the lower 128 bits of vector
  registers are used.

  _dl_runtime_resolve_avx_slow is added and used for AVX processors
  which don't support XGETBV with ECX == 1.  Since there is no SSE
  transition penalty on AVX512 processors which don't support XGETBV
  with ECX == 1, _dl_runtime_resolve_avx512_slow isn't provided.

  [BZ #20495]
  [BZ #20508]
  * sysdeps/x86/cpu-features.c (init_cpu_features): For Intel
  processors, set Use_dl_runtime_resolve_slow and set
  Use_dl_runtime_resolve_opt if XGETBV supports ECX == 1.
  * sysdeps/x86/cpu-features.h
  (bit_arch_Use_dl_runtime_resolve_opt): New.
  (bit_arch_Use_dl_runtime_resolve_slow): Likewise.
  (index_arch_Use_dl_runtime_resolve_opt): Likewise.
  (index_arch_Use_dl_runtime_resolve_slow): Likewise.
  * sysdeps/x86_64/dl-machine.h (elf_machine_runtime_setup): Use
  _dl_runtime_resolve_avx512_opt and _dl_runtime_resolve_avx_opt
  if Use_dl_runtime_resolve_opt is set.  Use
  _dl_runtime_resolve_slow if Use_dl_runtime_resolve_slow is set.
  * sysdeps/x86_64/dl-trampoline.S: Include <cpu-features.h>.
  (_dl_runtime_resolve_opt): New.  Defined for AVX and AVX512.
  (_dl_runtime_resolve): Add one for _dl_runtime_resolve_sse_vex.
  * sysdeps/x86_64/dl-trampoline.h (_dl_runtime_resolve_avx_slow):
  New.
  (_dl_runtime_resolve_opt): Likewise.
  (_dl_runtime_profile): Define only if _dl_runtime_profile is
  defined.
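
  The XGETBV-with-ECX == 1 form returns the XINUSE bitmap, which is
  what makes the check above cheap.  A standalone sketch, not the ld.so
  code path: bit positions are per the Intel SDM, support for this form
  is advertised in CPUID.(EAX=0xD,ECX=1):EAX[2], and it needs -mxsave
  on a compiler that provides _xgetbv.

      #include <cpuid.h>
      #include <immintrin.h>
      #include <stdio.h>

      int
      main (void)
      {
        unsigned int eax, ebx, ecx, edx;
        __cpuid_count (0xd, 1, eax, ebx, ecx, edx);
        if (!((eax >> 2) & 1))
          {
            puts ("XGETBV with ECX == 1 not supported");
            return 1;
          }
        unsigned long long xinuse = _xgetbv (1);
        /* Bit 2: upper halves of the YMM registers in use;
           bits 5-7: AVX-512 opmask/ZMM state in use.  */
        printf ("YMM upper state in use: %d\n",
                (int) ((xinuse >> 2) & 1));
        return 0;
      }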

* X86: Change bit_YMM_state to (1 << 2) (H.J. Lu, 2016-08-19, 1 file, -1/+1)
  All other state bits, except for bit_YMM_state, are defined as
  (1 << N).  This patch changes bit_YMM_state from (2 << 1) to
  (1 << 2).

  * sysdeps/x86/cpu-features.h (bit_YMM_state): Set to (1 << 2).

* Check Prefer_ERMS in memmove/memcpy/mempcpy/memset (H.J. Lu, 2016-06-30, 1 file, -0/+3)
  Although the Enhanced REP MOVSB/STOSB (ERMS) implementations of
  memmove, memcpy, mempcpy and memset aren't used by current
  processors, this patch adds a Prefer_ERMS check in memmove, memcpy,
  mempcpy and memset so that they can be used in the future.

  * sysdeps/x86/cpu-features.h (bit_arch_Prefer_ERMS): New.
  (index_arch_Prefer_ERMS): Likewise.
  * sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Return
  __memcpy_erms for Prefer_ERMS.
  * sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
  (__memmove_erms): Enabled for libc.a.
  * sysdeps/x86_64/multiarch/memmove.S (__libc_memmove): Return
  __memmove_erms for Prefer_ERMS.
  * sysdeps/x86_64/multiarch/mempcpy.S (__mempcpy): Return
  __mempcpy_erms for Prefer_ERMS.
  * sysdeps/x86_64/multiarch/memset.S (memset): Return
  __memset_erms for Prefer_ERMS.
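
  An ERMS-style copy is essentially a bare "rep movsb".  A hedged
  sketch of the idea in C inline assembly (illustrative; glibc's
  __memcpy_erms is written in assembly with its own entry/exit
  handling):

      #include <stdio.h>
      #include <string.h>

      static void *
      memcpy_erms (void *dst, const void *src, size_t n)
      {
        void *ret = dst;
        /* RDI = dst, RSI = src, RCX = count; rep movsb copies RCX
           bytes and benefits from ERMS microcode when present.  */
        asm volatile ("rep movsb"
                      : "+D" (dst), "+S" (src), "+c" (n)
                      :
                      : "memory");
        return ret;
      }

      int
      main (void)
      {
        char buf[16] = { 0 };
        memcpy_erms (buf, "hello, erms", 12);
        puts (buf);
        return 0;
      }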

* Check the HTT bit before counting logical threads (H.J. Lu, 2016-05-19, 1 file, -0/+3)
  Skip counting logical threads for Intel processors if the HTT bit is
  0, which indicates there is only a single logical processor.

  * sysdeps/x86/cacheinfo.c (init_cacheinfo): Skip counting logical
  threads if the HTT bit is 0.
  * sysdeps/x86/cpu-features.h (bit_cpu_HTT): New.
  (index_cpu_HTT): Likewise.
  (reg_HTT): Likewise.

* Remove x86 ifunc-defines.sym and rtld-global-offsets.sym (H.J. Lu, 2016-05-11, 1 file, -2/+1)
  Merge x86 ifunc-defines.sym with x86 cpu-features-offsets.sym.
  Remove x86 ifunc-defines.sym and rtld-global-offsets.sym.  No code
  changes on i686 and x86-64.

  * sysdeps/i386/i686/multiarch/Makefile (gen-as-const-headers):
  Remove ifunc-defines.sym.
  * sysdeps/x86_64/multiarch/Makefile (gen-as-const-headers):
  Likewise.
  * sysdeps/i386/i686/multiarch/ifunc-defines.sym: Removed.
  * sysdeps/x86/rtld-global-offsets.sym: Likewise.
  * sysdeps/x86_64/multiarch/ifunc-defines.sym: Likewise.
  * sysdeps/x86/Makefile (gen-as-const-headers): Remove
  rtld-global-offsets.sym.
  * sysdeps/x86_64/multiarch/ifunc-defines.sym: Merged with ...
  * sysdeps/x86/cpu-features-offsets.sym: This.
  * sysdeps/x86/cpu-features.h: Include <cpu-features-offsets.h>
  instead of <ifunc-defines.h> and <rtld-global-offsets.h>.

* Initial Enhanced REP MOVSB/STOSB (ERMS) support (H.J. Lu, 2016-03-28, 1 file, -0/+4)
  Newer Intel processors support Enhanced REP MOVSB/STOSB (ERMS), which
  has a feature bit in CPUID.  This patch adds the Enhanced REP
  MOVSB/STOSB (ERMS) bit to x86 cpu-features.

  * sysdeps/x86/cpu-features.h (bit_cpu_ERMS): New.
  (index_cpu_ERMS): Likewise.
  (reg_ERMS): Likewise.
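
  The feature bit in question is CPUID.(EAX=7,ECX=0):EBX[9].  A minimal
  probe (illustrative sketch, not glibc code):

      #include <cpuid.h>
      #include <stdio.h>

      int
      main (void)
      {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid_count (7, 0, &eax, &ebx, &ecx, &edx))
          return 1;  /* CPUID leaf 7 not supported */
        printf ("ERMS: %u\n", (ebx >> 9) & 1);
        return 0;
      }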

* [x86] Add a feature bit: Fast_Unaligned_Copy (H.J. Lu, 2016-03-28, 1 file, -0/+3)
  On AMD processors, memcpy optimized with unaligned SSE load is slower
  than memcpy optimized with aligned SSSE3, while other string
  functions are faster with unaligned SSE load.  A feature bit,
  Fast_Unaligned_Copy, is added to select memcpy optimized with
  unaligned SSE load.

  [BZ #19583]
  * sysdeps/x86/cpu-features.c (init_cpu_features): Set
  Fast_Unaligned_Copy with Fast_Unaligned_Load for Intel
  processors.  Set Fast_Copy_Backward for AMD Excavator processors.
  * sysdeps/x86/cpu-features.h (bit_arch_Fast_Unaligned_Copy): New.
  (index_arch_Fast_Unaligned_Copy): Likewise.
  * sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Check
  Fast_Unaligned_Copy instead of Fast_Unaligned_Load.

* Set index_arch_AVX_Fast_Unaligned_Load only for Intel processors (H.J. Lu, 2016-03-22, 1 file, -2/+8)
  Since only Intel processors with AVX2 have fast unaligned load, we
  should set index_arch_AVX_Fast_Unaligned_Load only for Intel
  processors.  Move AVX, AVX2, AVX512, FMA and FMA4 detection into
  get_common_indeces and call get_common_indeces for other processors.

  Add CPU_FEATURES_CPU_P and CPU_FEATURES_ARCH_P to avoid loading
  GLRO(dl_x86_cpu_features) in cpu-features.c.

  [BZ #19583]
  * sysdeps/x86/cpu-features.c (get_common_indeces): Remove inline.
  Check family before setting family, model and extended_model.
  Set AVX, AVX2, AVX512, FMA and FMA4 usable bits here.
  (init_cpu_features): Replace HAS_CPU_FEATURE and HAS_ARCH_FEATURE
  with CPU_FEATURES_CPU_P and CPU_FEATURES_ARCH_P.  Set
  index_arch_AVX_Fast_Unaligned_Load for Intel processors with
  usable AVX2.  Call get_common_indeces for other processors with
  family == NULL.
  * sysdeps/x86/cpu-features.h (CPU_FEATURES_CPU_P): New macro.
  (CPU_FEATURES_ARCH_P): Likewise.
  (HAS_CPU_FEATURE): Use CPU_FEATURES_CPU_P.
  (HAS_ARCH_FEATURE): Use CPU_FEATURES_ARCH_P.

* Add _arch_/_cpu_ to index_*/bit_* in x86 cpu-features.h (H.J. Lu, 2016-03-10, 1 file, -109/+113)
  index_* and bit_* macros are used to access the cpuid and feature
  arrays of struct cpu_features.  It is very easy to use bits and
  indices of the cpuid array on the feature array, especially in
  assembly code.  For example, sysdeps/i386/i686/multiarch/bcopy.S has

  HAS_CPU_FEATURE (Fast_Rep_String)

  which should be

  HAS_ARCH_FEATURE (Fast_Rep_String)

  We change index_* and bit_* to index_cpu_*/index_arch_* and
  bit_cpu_*/bit_arch_* so that we can catch such errors at build time.

  [BZ #19762]
  * sysdeps/unix/sysv/linux/x86_64/64/dl-librecon.h
  (EXTRA_LD_ENVVARS): Add _arch_ to index_*/bit_*.
  * sysdeps/x86/cpu-features.c (init_cpu_features): Likewise.
  * sysdeps/x86/cpu-features.h (bit_*): Renamed to ...
  (bit_arch_*): This for feature array.
  (bit_*): Renamed to ...
  (bit_cpu_*): This for cpu array.
  (index_*): Renamed to ...
  (index_arch_*): This for feature array.
  (index_*): Renamed to ...
  (index_cpu_*): This for cpu array.
  [__ASSEMBLER__] (HAS_FEATURE): Add and use field.
  [__ASSEMBLER__] (HAS_CPU_FEATURE): Pass cpu to HAS_FEATURE.
  [__ASSEMBLER__] (HAS_ARCH_FEATURE): Pass arch to HAS_FEATURE.
  [!__ASSEMBLER__] (HAS_CPU_FEATURE): Replace index_##name and
  bit_##name with index_cpu_##name and bit_cpu_##name.
  [!__ASSEMBLER__] (HAS_ARCH_FEATURE): Replace index_##name and
  bit_##name with index_arch_##name and bit_arch_##name.
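
  The build-time check falls out of token pasting: once the macros
  paste index_cpu_##name/bit_cpu_##name, passing an arch-feature name
  to HAS_CPU_FEATURE no longer resolves to anything and the build
  fails.  A simplified self-contained sketch of the pattern (assumed
  values; not the real cpu-features.h layout):

      #include <stdio.h>

      static unsigned int cpuid_regs[8];   /* stand-in cpuid array */

      #define index_cpu_ERMS 7
      #define bit_cpu_ERMS (1 << 9)
      #define HAS_CPU_FEATURE(name) \
        ((cpuid_regs[index_cpu_##name] & bit_cpu_##name) != 0)

      int
      main (void)
      {
        cpuid_regs[7] = 1 << 9;
        printf ("%d\n", HAS_CPU_FEATURE (ERMS));  /* prints 1 */
        /* HAS_CPU_FEATURE (Fast_Rep_String) would fail to compile:
           only an index_arch_/bit_arch_ pair exists for it.  */
        return 0;
      }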

* Update copyright dates with scripts/update-copyrights. (Joseph Myers, 2016-01-04, 1 file, -1/+1)

* Added memset optimized with AVX512 for KNL hardware. (Andrew Senkevich, 2015-12-19, 1 file, -0/+4)
  It shows improvement up to 28% over AVX2 memset (performance results
  attached at
  <https://sourceware.org/ml/libc-alpha/2015-12/msg00052.html>).

  * sysdeps/x86_64/multiarch/memset-avx512-no-vzeroupper.S: New
  file.
  * sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Added new
  file.
  * sysdeps/x86_64/multiarch/ifunc-impl-list.c: Added new tests.
  * sysdeps/x86_64/multiarch/memset.S: Added new IFUNC branch.
  * sysdeps/x86_64/multiarch/memset_chk.S: Likewise.
  * sysdeps/x86/cpu-features.h (bit_Prefer_No_VZEROUPPER,
  index_Prefer_No_VZEROUPPER): New.
  * sysdeps/x86/cpu-features.c (init_cpu_features): Set
  Prefer_No_VZEROUPPER for Knights Landing.

* Add Prefer_MAP_32BIT_EXEC to map executable pages with MAP_32BIT (H.J. Lu, 2015-12-15, 1 file, -0/+3)
  According to the Silvermont software optimization guide, for 64-bit
  applications branch prediction performance can be negatively impacted
  when the target of a branch is more than 4GB away from the branch.
  Add the Prefer_MAP_32BIT_EXEC bit so that mmap will try to map
  executable pages with MAP_32BIT first.  NB: MAP_32BIT will map to the
  lower 2GB, not the lower 4GB, address range.

  Prefer_MAP_32BIT_EXEC reduces the bits available for address space
  layout randomization (ASLR), so it is always disabled for SUID
  programs and can only be enabled by setting the environment variable
  LD_PREFER_MAP_32BIT_EXEC.

  On Fedora 23, this patch speeds up the GCC 5 testsuite by 3% on
  Silvermont.

  [BZ #19367]
  * sysdeps/unix/sysv/linux/wordsize-64/mmap.c: New file.
  * sysdeps/unix/sysv/linux/x86_64/64/dl-librecon.h: Likewise.
  * sysdeps/unix/sysv/linux/x86_64/64/mmap.c: Likewise.
  * sysdeps/x86/cpu-features.h (bit_Prefer_MAP_32BIT_EXEC): New.
  (index_Prefer_MAP_32BIT_EXEC): Likewise.
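
  What the flag asks for can be demonstrated directly: on x86-64 Linux,
  MAP_32BIT places a mapping in the low 2GB.  A minimal sketch
  (illustrative; ld.so applies the preference internally to executable
  mappings):

      #define _GNU_SOURCE
      #include <stdio.h>
      #include <sys/mman.h>

      int
      main (void)
      {
        void *p = mmap (NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_32BIT, -1, 0);
        if (p == MAP_FAILED)
          {
            perror ("mmap");
            return 1;
          }
        printf ("mapped at %p (below 2GB)\n", p);
        munmap (p, 4096);
        return 0;
      }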

* Detect and select i586/i686 implementation at run-time (H.J. Lu, 2015-08-27, 1 file, -3/+17)
  We detect i586 and i686 features at run-time by checking the CX8 and
  CMOV CPUID feature bits.  We can use this information to select the
  best implementation in ix86 multiarch.  HAS_I586/HAS_I686 is true if
  i586/i686 instructions are available on the processor.

  Due to the reordering and the other nifty extensions in i686, it is
  not really good to use heavily i586-optimized code on an i686.  It's
  better to use i486 code if it isn't an i586.  USE_I586/USE_I686 is
  true if the i586/i686 implementation should be used for the
  processor.  USE_I586 is true only if i686 instructions aren't
  available.  If i686 instructions are available, we always choose the
  i686 or i486 implementation, in that order, and we never choose the
  i586 implementation for i686-class processors.

  * sysdeps/i386/init-arch.h: New file.
  * sysdeps/i386/i586/init-arch.h: Likewise.
  * sysdeps/i386/i686/init-arch.h: Likewise.
  * sysdeps/x86/cpu-features.c (init_cpu_features): Set bit_I586
  bit if CX8 is available.  Set bit_I686 bit if CMOV is available.
  * sysdeps/x86/cpu-features.h (bit_I586): New.
  (bit_I686): Likewise.
  (bit_CX8): Likewise.
  (bit_CMOV): Likewise.
  (index_CX8): Likewise.
  (index_CMOV): Likewise.
  (index_I586): Likewise.
  (index_I686): Likewise.
  (reg_CX8): Likewise.
  (reg_CMOV): Likewise.
  (HAS_I586): Defined as HAS_ARCH_FEATURE (I586) if i586 isn't
  available at compile-time.
  (HAS_I686): Defined as HAS_ARCH_FEATURE (I686) if i686 isn't
  available at compile-time.
  * sysdeps/x86/init-arch.h (USE_I586): New macro.
  (USE_I686): Likewise.
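
  The two CPUID bits involved are in leaf 1's EDX: bit 8 is CX8
  (cmpxchg8b, the i586 marker) and bit 15 is CMOV (the i686 marker).
  A standalone probe sketch (not glibc code):

      #include <cpuid.h>
      #include <stdio.h>

      int
      main (void)
      {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid (1, &eax, &ebx, &ecx, &edx))
          return 1;  /* cpuid unavailable */
        printf ("i586-class (CX8):  %u\n", (edx >> 8) & 1);
        printf ("i686-class (CMOV): %u\n", (edx >> 15) & 1);
        return 0;
      }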

* Also check __i586__/__i686__ for HAS_I586/HAS_I686 (H.J. Lu, 2015-08-19, 1 file, -8/+9)
  * sysdeps/x86/cpu-features.h (HAS_I586): Defined to 1 if __i586__
  is defined.
  (HAS_I686): Defined to 1 if __i686__ is defined.

* Define HAS_CPUID/HAS_I586/HAS_I686 from -march= (H.J. Lu, 2015-08-18, 1 file, -0/+27)
  cpuid, i586 and i686 instructions are available if the processor
  specified by -march= supports them.  We can use this information to
  determine whether those instructions can be used safely.

  * sysdeps/x86/cpu-features.c (init_cpu_features): Check whether
  cpuid is available only if HAS_CPUID is 0.
  * sysdeps/x86/cpu-features.h (HAS_CPUID): New.
  (HAS_I586): Likewise.
  (HAS_I686): Likewise.
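
  The idea is that -march= predefines macros such as __i586__ and
  __i686__, so the run-time probe can compile away when the target
  already guarantees the instructions.  A simplified sketch of the
  pattern (RUNTIME_HAS_I686 is a made-up stand-in for the real
  run-time check):

      #include <stdio.h>

      #define RUNTIME_HAS_I686 1   /* stand-in for the cpuid check */

      #if defined __i686__
      # define HAS_I686 1          /* guaranteed by -march= */
      #else
      # define HAS_I686 RUNTIME_HAS_I686
      #endif

      int
      main (void)
      {
        printf ("HAS_I686: %d\n", HAS_I686);
        return 0;
      }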

* Add _dl_x86_cpu_features to rtld_global (H.J. Lu, 2015-08-13, 1 file, -0/+240)

  This patch adds _dl_x86_cpu_features to rtld_global in the x86 ld.so
  and initializes it early, before __libc_start_main is called, so that
  cpu_features is always available when it is used and we can avoid
  calling __init_cpu_features in IFUNC selectors.

  * sysdeps/i386/dl-machine.h: Include <cpu-features.c>.
  (dl_platform_init): Call init_cpu_features.
  * sysdeps/i386/dl-procinfo.c (_dl_x86_cpu_features): New.
  * sysdeps/i386/i686/cacheinfo.c
  (DISABLE_PREFERRED_MEMORY_INSTRUCTION): Removed.
  * sysdeps/i386/i686/multiarch/Makefile (aux): Remove init-arch.
  * sysdeps/i386/i686/multiarch/Versions: Removed.
  * sysdeps/i386/i686/multiarch/ifunc-defines.sym (KIND_OFFSET):
  Removed.
  * sysdeps/i386/ldsodefs.h: Include <cpu-features.h>.
  * sysdeps/unix/sysv/linux/x86/Makefile
  (libpthread-sysdep_routines): Remove init-arch.
  * sysdeps/unix/sysv/linux/x86_64/dl-procinfo.c: Include
  <sysdeps/x86_64/dl-procinfo.c> instead of
  <sysdeps/generic/dl-procinfo.c>.
  * sysdeps/x86/Makefile [$(subdir) == csu] (gen-as-const-headers):
  Add cpu-features-offsets.sym and rtld-global-offsets.sym.
  [$(subdir) == elf] (sysdep-dl-routines): Add dl-get-cpu-features.
  [$(subdir) == elf] (tests): Add tst-get-cpu-features.
  [$(subdir) == elf] (tests-static): Add
  tst-get-cpu-features-static.
  * sysdeps/x86/Versions: New file.
  * sysdeps/x86/cpu-features-offsets.sym: Likewise.
  * sysdeps/x86/cpu-features.c: Likewise.
  * sysdeps/x86/cpu-features.h: Likewise.
  * sysdeps/x86/dl-get-cpu-features.c: Likewise.
  * sysdeps/x86/libc-start.c: Likewise.
  * sysdeps/x86/rtld-global-offsets.sym: Likewise.
  * sysdeps/x86/tst-get-cpu-features-static.c: Likewise.
  * sysdeps/x86/tst-get-cpu-features.c: Likewise.
  * sysdeps/x86_64/dl-procinfo.c: Likewise.
  * sysdeps/x86_64/cacheinfo.c (__cpuid_count): Removed.  Assume
  USE_MULTIARCH is defined and don't check it.
  (is_intel): Replace __cpu_features with GLRO(dl_x86_cpu_features).
  (is_amd): Likewise.
  (max_cpuid): Likewise.
  (intel_check_word): Likewise.
  (__cache_sysconf): Don't call __init_cpu_features.
  (__x86_preferred_memory_instruction): Removed.
  (init_cacheinfo): Don't call __init_cpu_features.  Replace
  __cpu_features with GLRO(dl_x86_cpu_features).
  * sysdeps/x86_64/dl-machine.h: Include <cpu-features.c>.
  (dl_platform_init): Call init_cpu_features.
  * sysdeps/x86_64/ldsodefs.h: Include <cpu-features.h>.
  * sysdeps/x86_64/multiarch/Makefile (aux): Remove init-arch.
  * sysdeps/x86_64/multiarch/Versions: Removed.
  * sysdeps/x86_64/multiarch/cacheinfo.c: Likewise.
  * sysdeps/x86_64/multiarch/init-arch.c: Likewise.
  * sysdeps/x86_64/multiarch/ifunc-defines.sym (KIND_OFFSET):
  Removed.
  * sysdeps/x86_64/multiarch/init-arch.h: Rewrite.
This patch adds _dl_x86_cpu_features to rtld_global in x86 ld.so and initializes it early before __libc_start_main is called so that cpu_features is always available when it is used and we can avoid calling __init_cpu_features in IFUNC selectors. * sysdeps/i386/dl-machine.h: Include <cpu-features.c>. (dl_platform_init): Call init_cpu_features. * sysdeps/i386/dl-procinfo.c (_dl_x86_cpu_features): New. * sysdeps/i386/i686/cacheinfo.c (DISABLE_PREFERRED_MEMORY_INSTRUCTION): Removed. * sysdeps/i386/i686/multiarch/Makefile (aux): Remove init-arch. * sysdeps/i386/i686/multiarch/Versions: Removed. * sysdeps/i386/i686/multiarch/ifunc-defines.sym (KIND_OFFSET): Removed. * sysdeps/i386/ldsodefs.h: Include <cpu-features.h>. * sysdeps/unix/sysv/linux/x86/Makefile (libpthread-sysdep_routines): Remove init-arch. * sysdeps/unix/sysv/linux/x86_64/dl-procinfo.c: Include <sysdeps/x86_64/dl-procinfo.c> instead of sysdeps/generic/dl-procinfo.c>. * sysdeps/x86/Makefile [$(subdir) == csu] (gen-as-const-headers): Add cpu-features-offsets.sym and rtld-global-offsets.sym. [$(subdir) == elf] (sysdep-dl-routines): Add dl-get-cpu-features. [$(subdir) == elf] (tests): Add tst-get-cpu-features. [$(subdir) == elf] (tests-static): Add tst-get-cpu-features-static. * sysdeps/x86/Versions: New file. * sysdeps/x86/cpu-features-offsets.sym: Likewise. * sysdeps/x86/cpu-features.c: Likewise. * sysdeps/x86/cpu-features.h: Likewise. * sysdeps/x86/dl-get-cpu-features.c: Likewise. * sysdeps/x86/libc-start.c: Likewise. * sysdeps/x86/rtld-global-offsets.sym: Likewise. * sysdeps/x86/tst-get-cpu-features-static.c: Likewise. * sysdeps/x86/tst-get-cpu-features.c: Likewise. * sysdeps/x86_64/dl-procinfo.c: Likewise. * sysdeps/x86_64/cacheinfo.c (__cpuid_count): Removed. Assume USE_MULTIARCH is defined and don't check it. (is_intel): Replace __cpu_features with GLRO(dl_x86_cpu_features). (is_amd): Likewise. (max_cpuid): Likewise. (intel_check_word): Likewise. (__cache_sysconf): Don't call __init_cpu_features. (__x86_preferred_memory_instruction): Removed. (init_cacheinfo): Don't call __init_cpu_features. Replace __cpu_features with GLRO(dl_x86_cpu_features). * sysdeps/x86_64/dl-machine.h: <cpu-features.c>. (dl_platform_init): Call init_cpu_features. * sysdeps/x86_64/ldsodefs.h: Include <cpu-features.h>. * sysdeps/x86_64/multiarch/Makefile (aux): Remove init-arch. * sysdeps/x86_64/multiarch/Versions: Removed. * sysdeps/x86_64/multiarch/cacheinfo.c: Likewise. * sysdeps/x86_64/multiarch/init-arch.c: Likewise. * sysdeps/x86_64/multiarch/ifunc-defines.sym (KIND_OFFSET): Removed. * sysdeps/x86_64/multiarch/init-arch.h: Rewrite.