Commit message | Author | Date | Files | Lines (-/+)
* Count number of logical processors sharing L2 cache [hjl/erms/2.22] | H.J. Lu | 2016-06-06 | 1 | -34/+116
* Remove special L2 cache case for Knights Landing | H.J. Lu | 2016-06-06 | 1 | -2/+0
* Correct Intel processor level type mask from CPUID | H.J. Lu | 2016-06-06 | 1 | -1/+1
* Check the HTT bit before counting logical threads | H.J. Lu | 2016-06-06 | 2 | -76/+85
* Remove alignments on jump targets in memset | H.J. Lu | 2016-06-06 | 1 | -32/+5
* Call init_cpu_features only if SHARED is defined | H.J. Lu | 2016-06-06 | 2 | -0/+8
* Support non-inclusive caches on Intel processors | H.J. Lu | 2016-06-06 | 1 | -1/+11
* Remove x86 ifunc-defines.sym and rtld-global-offsets.sym | H.J. Lu | 2016-06-06 | 8 | -51/+18
* Move sysdeps/x86_64/cacheinfo.c to sysdeps/x86 | H.J. Lu | 2016-06-06 | 2 | -1/+1
* Detect Intel Goldmont and Airmont processors | H.J. Lu | 2016-06-06 | 1 | -0/+8
* X86-64: Add dummy memcopy.h and wordcopy.c | H.J. Lu | 2016-04-08 | 2 | -0/+2
* X86-64: Remove previous default/SSE2/AVX2 memcpy/memmove | H.J. Lu | 2016-04-08 | 19 | -1492/+396
* X86-64: Remove the previous SSE2/AVX2 memsets | H.J. Lu | 2016-04-08 | 8 | -325/+62
* X86-64: Use non-temporal store in memcpy on large data | H.J. Lu | 2016-04-08 | 5 | -171/+234
* X86-64: Prepare memmove-vec-unaligned-erms.S | H.J. Lu | 2016-04-06 | 1 | -54/+84
* X86-64: Prepare memset-vec-unaligned-erms.S | H.J. Lu | 2016-04-06 | 1 | -13/+19
* Force 32-bit displacement in memset-vec-unaligned-erms.S | H.J. Lu | 2016-04-05 | 1 | -0/+13
* Add a comment in memset-sse2-unaligned-erms.S | H.J. Lu | 2016-04-05 | 1 | -0/+2
* Don't put SSE2/AVX/AVX512 memmove/memset in ld.so | H.J. Lu | 2016-04-05 | 6 | -32/+40
* Fix memmove-vec-unaligned-erms.S | H.J. Lu | 2016-04-05 | 1 | -24/+30
* Use HAS_ARCH_FEATURE with Fast_Rep_String | H.J. Lu | 2016-04-05 | 9 | -9/+9
* Remove Fast_Copy_Backward from Intel Core processors | H.J. Lu | 2016-04-02 | 1 | -5/+1
* Add x86-64 memset with unaligned store and rep stosb | H.J. Lu | 2016-04-02 | 6 | -4/+339
* Add x86-64 memmove with unaligned load/store and rep movsb | H.J. Lu | 2016-04-02 | 6 | -1/+606
* Initial Enhanced REP MOVSB/STOSB (ERMS) support | H.J. Lu | 2016-04-02 | 1 | -0/+4
* Make __memcpy_avx512_no_vzeroupper an alias | H.J. Lu | 2016-04-02 | 3 | -430/+404
* Implement x86-64 multiarch mempcpy in memcpy | H.J. Lu | 2016-04-02 | 9 | -57/+69
* [x86] Add a feature bit: Fast_Unaligned_Copy | H.J. Lu | 2016-04-02 | 3 | -1/+12
* Don't set %rcx twice before "rep movsb" | H.J. Lu | 2016-04-02 | 1 | -1/+0
* Set index_arch_AVX_Fast_Unaligned_Load only for Intel processors | H.J. Lu | 2016-04-02 | 2 | -70/+84
* Update family and model detection for AMD CPUs | H.J. Lu | 2016-04-02 | 1 | -12/+15
* Add _arch_/_cpu_ to index_*/bit_* in x86 cpu-features.h | H.J. Lu | 2016-04-02 | 3 | -137/+153
* Or bit_Prefer_MAP_32BIT_EXEC in EXTRA_LD_ENVVARS | H.J. Lu | 2016-04-02 | 1 | -1/+1
* Group AVX512 functions in .text.avx512 section | H.J. Lu | 2016-04-02 | 2 | -2/+2
* x86-64: Fix memcpy IFUNC selection | H.J. Lu | 2016-04-02 | 1 | -13/+14
* Added memcpy/memmove family optimized with AVX512 for KNL hardware. | Andrew Senkevich | 2016-04-02 | 11 | -19/+540
* Added memset optimized with AVX512 for KNL hardware. | Andrew Senkevich | 2016-04-02 | 7 | -2/+229
* Add Prefer_MAP_32BIT_EXEC to map executable pages with MAP_32BIT | H.J. Lu | 2016-04-02 | 4 | -0/+124
* Enable Silvermont optimizations for Knights Landing | H.J. Lu | 2016-04-02 | 1 | -0/+3
* [x86_64] Set DL_RUNTIME_UNALIGNED_VEC_SIZE to 8 [hjl/plt/2.22] | H.J. Lu | 2016-02-23 | 2 | -11/+15
* Support x86-64 assmebler without AVX512 | H.J. Lu | 2016-02-23 | 1 | -16/+24
* Remove incorrect register mov in floorf/nearbyint on x86_64 | Siddhesh Poyarekar | 2015-08-14 | 2 | -2/+0
* Don't run tst-getpid2 with LD_BIND_NOW=1 | H.J. Lu | 2015-08-05 | 1 | -5/+0
* Use SSE optimized strcmp in x86-64 ld.so | H.J. Lu | 2015-08-05 | 1 | -253/+216
* Remove x86-64 rtld-xxx.c and rtld-xxx.S | H.J. Lu | 2015-08-05 | 6 | -464/+0
* Replace %xmm8 with %xmm0 | H.J. Lu | 2015-08-05 | 1 | -26/+26
* Replace %xmm[8-12] with %xmm[0-4] | H.J. Lu | 2015-08-05 | 1 | -47/+47
* Don't disable SSE in x86-64 ld.so | H.J. Lu | 2015-08-05 | 3 | -11/+14
* Save and restore vector registers in x86-64 ld.so | H.J. Lu | 2015-08-05 | 8 | -501/+472
* Align stack when calling __errno_location | H.J. Lu | 2015-08-05 | 3 | -0/+18
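For context on the ERMS-related commits above ("Initial Enhanced REP MOVSB/STOSB (ERMS) support" and the memmove/memset additions using rep movsb / rep stosb), here is a minimal, hypothetical C sketch of what a rep-string copy and fill look like. The function names are invented for illustration; this is not the glibc code, which dispatches among several tuned variants via IFUNC and falls back to vector loops below a size threshold.

    #include <stddef.h>

    /* Illustrative sketch only, not the glibc implementation. */
    static void *erms_memcpy(void *dst, const void *src, size_t n)
    {
        void *ret = dst;
        /* rep movsb copies RCX bytes from [RSI] to [RDI]. */
        __asm__ volatile ("rep movsb"
                          : "+D" (dst), "+S" (src), "+c" (n)
                          :
                          : "memory");
        return ret;
    }

    static void *erms_memset(void *dst, int c, size_t n)
    {
        void *ret = dst;
        /* rep stosb stores AL into RCX bytes starting at [RDI]. */
        __asm__ volatile ("rep stosb"
                          : "+D" (dst), "+c" (n)
                          : "a" (c)
                          : "memory");
        return ret;
    }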