path: root/sysdeps/x86_64/multiarch/Makefile
Commit message | Author | Age | Files | Lines
x86-64: Optimize strrchr/wcsrchr with AVX2 (hjl/avx2/c) | H.J. Lu | 2017-06-05 | 1 | -0/+2
x86-64: Optimize memrchr with AVX2 | H.J. Lu | 2017-06-05 | 1 | -0/+1
x86-64: Optimize strchr/strchrnul/wcschr with AVX2 | H.J. Lu | 2017-06-05 | 1 | -0/+2
x86-64: Optimize strlen/strnlen/wcslen/wcsnlen with AVX2 | H.J. Lu | 2017-06-05 | 1 | -1/+3
x86-64: Optimize memchr/rawmemchr/wmemchr with SSE2/AVX2 | H.J. Lu | 2017-06-05 | 1 | -0/+2
x86-64: Optimize memcmp/wmemcmp with AVX2 and MOVBE | H.J. Lu | 2017-06-05 | 1 | -1/+4
x86-64: Optimize wmemset with SSE2/AVX2/AVX512 | H.J. Lu | 2017-06-05 | 1 | -0/+4
X86-64: Remove previous default/SSE2/AVX2 memcpy/memmove | H.J. Lu | 2016-06-08 | 1 | -4/+2
X86-64: Remove the previous SSE2/AVX2 memsets | H.J. Lu | 2016-06-08 | 1 | -2/+1
Remove x86 ifunc-defines.sym and rtld-global-offsets.sym | H.J. Lu | 2016-05-11 | 1 | -1/+0
Add x86-64 memset with unaligned store and rep stosb | H.J. Lu | 2016-03-31 | 1 | -1/+4
Add x86-64 memmove with unaligned load/store and rep movsb | H.J. Lu | 2016-03-31 | 1 | -1/+4
Make __memcpy_avx512_no_vzeroupper an alias | H.J. Lu | 2016-03-28 | 1 | -1/+1
Implement x86-64 multiarch mempcpy in memcpy | H.J. Lu | 2016-03-28 | 1 | -4/+4
Added memcpy/memmove family optimized with AVX512 for KNL hardware. | Andrew Senkevich | 2016-01-16 | 1 | -5/+6
Added memset optimized with AVX512 for KNL hardware. | Andrew Senkevich | 2015-12-19 | 1 | -1/+2
Remove -mavx2 configure tests. | Joseph Myers | 2015-10-28 | 1 | -5/+1
Remove configure tests for SSE4 support. | Joseph Myers | 2015-10-06 | 1 | -5/+2
Add _dl_x86_cpu_features to rtld_global | H.J. Lu | 2015-08-13 | 1 | -1/+0
Improve 64bit memcpy performance for Haswell CPU with AVX instruction | Ling Ma | 2014-07-30 | 1 | -0/+1
Enable AVX2 optimized memset only if -mavx2 works | H.J. Lu | 2014-07-14 | 1 | -2/+5
Add x86_64 memset optimized for AVX2 | Ling Ma | 2014-06-19 | 1 | -1/+3
Add strstr with unaligned loads. Fixes bug 12100. | Ondřej Bílka | 2013-12-14 | 1 | -6/+3
Faster strrchr. | Ondřej Bílka | 2013-09-26 | 1 | -2/+2
Add unaligned strcmp. | Ondřej Bílka | 2013-09-03 | 1 | -2/+4
Faster memcpy on x64. | Ondrej Bilka | 2013-05-20 | 1 | -1/+1
Faster strlen on x64. | Ondrej Bilka | 2013-03-18 | 1 | -4/+2
Remove Prefer_SSE_for_memop on x64 | Ondrej Bilka | 2013-03-11 | 1 | -1/+1
Revert "* sysdeps/x86_64/strlen.S: Replace with new SSE2 based implementation" | Ondrej Bilka | 2013-03-06 | 1 | -2/+4
* sysdeps/x86_64/strlen.S: Replace with new SSE2 based implementation | Ondrej Bilka | 2013-03-06 | 1 | -4/+2
BZ#14059: Fix AVX and FMA4 detection. | Carlos O'Donell | 2012-05-17 | 1 | -0/+1
Optimized wcschr and wcscpy for x86-64 and x86-32 | Ulrich Drepper | 2011-12-17 | 1 | -1/+5
Optimized strnlen and wcscmp for x86-64 | Liubov Dmitrieva | 2011-10-23 | 1 | -2/+2
Optimized memcmp and wmemcmp for x86-64 and x86-32 | Liubov Dmitrieva | 2011-10-15 | 1 | -1/+2
Add Atom-optimized strchr and strrchr for x86-64 | Liubov Dmitrieva | 2011-09-05 | 1 | -1/+2
Improve 64 bit strcat functions with SSE2/SSSE3 | Liubov Dmitrieva | 2011-07-19 | 1 | -2/+4
Improved st{r,p}{,n}cpy for SSE2 and SSSE3 on x86-64 | H.J. Lu | 2011-06-24 | 1 | -2/+5
Use IFUNC on x86-64 memset | H.J. Lu | 2010-11-08 | 1 | -1/+2
Unroll x86-64 strlen | H.J. Lu | 2010-08-26 | 1 | -1/+1
Clean up warnings in new x86_64/multiarch code. | Roland McGrath | 2010-08-25 | 1 | -0/+1
Clean up SSE variable shifts | Richard Henderson | 2010-08-24 | 1 | -1/+1
Add optimized strncasecmp versions for x86-64. | Ulrich Drepper | 2010-08-14 | 1 | -1/+2
Add support for SSSE3 and SSE4.2 versions of strcasecmp on x86-64. | Ulrich Drepper | 2010-07-31 | 1 | -1/+1
Speed up SSE4.2 strcasestr by avoiding indirect function call. | Ulrich Drepper | 2010-07-16 | 1 | -1/+2
Improve 64bit memcpy/memmove for Atom, Core 2 and Core i7 | H.J. Lu | 2010-06-30 | 1 | -1/+3
x86-64 SSE4 optimized memcmp | H.J. Lu | 2010-04-14 | 1 | -1/+1
Implement SSE4.2 optimized strchr and strrchr. | H.J. Lu | 2009-10-22 | 1 | -1/+2
Add SSSE3-optimized implementation of str{,n}cmp for x86-64. | Ulrich Drepper | 2009-08-07 | 1 | -1/+1
Add SSE2 support to str{,n}cmp for x86-64. | H.J. Lu | 2009-07-26 | 1 | -1/+1
SSE4.2 strstr/strcasestr for x86-64. | H.J. Lu | 2009-07-20 | 1 | -1/+3