author    H.J. Lu <hjl.tools@gmail.com>  2016-03-28 04:39:48 -0700
committer H.J. Lu <hjl.tools@gmail.com>  2016-03-28 04:40:03 -0700
commit    e41b395523040fcb58c7d378475720c2836d280c (patch)
tree      7a4271638219c8d5b141039178105b0f1564ea16
parent    b66d837bb5398795c6b0f651bd5a5d66091d8577 (diff)
[x86] Add a feature bit: Fast_Unaligned_Copy
On AMD processors, memcpy optimized with unaligned SSE load is
slower than memcpy optimized with aligned SSSE3, while other string
functions are faster with unaligned SSE load.  A feature bit,
Fast_Unaligned_Copy, is added to select memcpy optimized with
unaligned SSE load (a sketch of the resulting selection logic
follows the ChangeLog entry below).

	[BZ #19583]
	* sysdeps/x86/cpu-features.c (init_cpu_features): Set
	Fast_Unaligned_Copy with Fast_Unaligned_Load for Intel
	processors.  Set Fast_Copy_Backward for AMD Excavator
	processors.
	* sysdeps/x86/cpu-features.h (bit_arch_Fast_Unaligned_Copy):
	New.
	(index_arch_Fast_Unaligned_Copy): Likewise.
	* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Check
	Fast_Unaligned_Copy instead of Fast_Unaligned_Load.
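
As a minimal C sketch of the policy above: the committed check is
x86-64 assembly in sysdeps/x86_64/multiarch/memcpy.S, and while the
HAS_ARCH_FEATURE/HAS_CPU_FEATURE macros and implementation names below
mirror glibc's multiarch conventions, this selector function itself is
illustrative, not the actual code.

	#include <stddef.h>
	#include <cpu-features.h>  /* glibc-internal; provides HAS_*_FEATURE.  */

	extern void *__memcpy_sse2_unaligned (void *, const void *, size_t);
	extern void *__memcpy_ssse3 (void *, const void *, size_t);
	extern void *__memcpy_sse2 (void *, const void *, size_t);

	/* Hypothetical C rendering of the ifunc selector in memcpy.S.  */
	static void *(*select_memcpy (void)) (void *, const void *, size_t)
	{
	  /* Before this commit the selector tested Fast_Unaligned_Load,
	     which is also set on AMD processors, where the unaligned-SSE
	     memcpy loses to the aligned-SSSE3 one.  Testing the new
	     Fast_Unaligned_Copy bit keeps unaligned loads for the other
	     string functions while letting memcpy fall back to SSSE3.  */
	  if (HAS_ARCH_FEATURE (Fast_Unaligned_Copy))
	    return __memcpy_sse2_unaligned;
	  if (HAS_CPU_FEATURE (SSSE3))
	    return __memcpy_ssse3;
	  return __memcpy_sse2;
	}

Since init_cpu_features sets Fast_Unaligned_Copy alongside
Fast_Unaligned_Load on Intel processors, the first branch is taken
there; on AMD the bit stays clear and memcpy falls through to the
SSSE3 version.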
Diffstat (limited to 'sysdeps/i386/i686/multiarch/ifunc-impl-list.c')
0 files changed, 0 insertions, 0 deletions