author    H.J. Lu <hjl.tools@gmail.com>  2016-03-23 10:33:19 -0700
committer H.J. Lu <hjl.tools@gmail.com>  2016-03-23 10:56:38 -0700
commit    327aadf6348bd41d1fae46ee7780e214c0a493c1 (patch)
tree      3a1f3550ee36ea010e53e1ad8f4e1ffc450b5c18
parent    7a25d6a84df9fea56963569ceccaaf7c2a88f161 (diff)
[x86] Add a feature bit: Fast_Unaligned_Copy (hjl/pr19583)
On AMD processors, memcpy optimized with unaligned SSE loads is
slower than memcpy optimized with aligned SSSE3, while the other
string functions are faster with unaligned SSE loads.  A feature bit,
Fast_Unaligned_Copy, is added to select the memcpy optimized with
unaligned SSE loads.  A minimal sketch of this selection appears
after the ChangeLog entries below.

	[BZ #19583]
	* sysdeps/x86/cpu-features.c (init_cpu_features): Set
	Fast_Unaligned_Copy with Fast_Unaligned_Load for Intel
	processors.  Set Fast_Copy_Backward for AMD Excavator
	processors.
	* sysdeps/x86/cpu-features.h (bit_arch_Fast_Unaligned_Copy):
	New.
	(index_arch_Fast_Unaligned_Copy): Likewise.
	* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Check
	Fast_Unaligned_Copy instead of Fast_Unaligned_Load.
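
For readers unfamiliar with glibc's feature-bit dispatch, here is a
minimal standalone sketch of the selection this commit describes.  The
simplified cpu_features layout, the bit value, and the select_memcpy
helper are illustrative assumptions, not glibc's actual internals; the
real logic lives in sysdeps/x86/cpu-features.c (which sets
bit_arch_Fast_Unaligned_Copy) and sysdeps/x86_64/multiarch/memcpy.S
(which checks it).

#include <stdio.h>
#include <string.h>

/* Hypothetical bit position; glibc defines the real value in
   sysdeps/x86/cpu-features.h.  */
#define bit_arch_Fast_Unaligned_Copy (1u << 0)

struct cpu_features
{
  unsigned int feature;  /* flattened; glibc indexes an array here */
};

typedef void *(*memcpy_fn) (void *, const void *, size_t);

/* Stand-ins for the real multiarch variants.  */
static void *
memcpy_sse2_unaligned (void *dst, const void *src, size_t n)
{
  return memcpy (dst, src, n);
}

static void *
memcpy_ssse3 (void *dst, const void *src, size_t n)
{
  return memcpy (dst, src, n);
}

/* Mirrors the check __new_memcpy performs after this commit: pick the
   unaligned-load copy only when Fast_Unaligned_Copy is set, so AMD
   processors keep the aligned SSSE3 variant.  */
static memcpy_fn
select_memcpy (const struct cpu_features *cf)
{
  if (cf->feature & bit_arch_Fast_Unaligned_Copy)
    return memcpy_sse2_unaligned;
  return memcpy_ssse3;
}

int
main (void)
{
  struct cpu_features intel = { bit_arch_Fast_Unaligned_Copy };
  struct cpu_features amd = { 0 };
  char buf[8];

  select_memcpy (&intel) (buf, "intel", 6);  /* unaligned-load variant */
  puts (buf);
  select_memcpy (&amd) (buf, "amd", 4);      /* aligned SSSE3 variant */
  puts (buf);
  return 0;
}

The design point is that the variant choice is made once per CPU from
a cached feature word rather than re-probed on every call, which is
why adding a dedicated bit (instead of reusing Fast_Unaligned_Load)
lets memcpy and the other string functions diverge on AMD.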