From 935971ba6b4eaf67a34e4651434ba9b61e7355cc Mon Sep 17 00:00:00 2001
From: "H.J. Lu"
Date: Mon, 5 Jun 2017 12:52:41 -0700
Subject: x86-64: Optimize memcmp/wmemcmp with AVX2 and MOVBE

Optimize x86-64 memcmp/wmemcmp with AVX2.  It uses vector compares as
much as possible.  It is as fast as the SSE4 memcmp for size <= 16
bytes and up to 2X faster for size > 16 bytes on Haswell and Skylake.
Select the AVX2 memcmp/wmemcmp on AVX2 machines where vzeroupper is
preferred and AVX unaligned load is fast.

NB: It uses TZCNT instead of BSF since TZCNT produces the same result
as BSF for non-zero input.  TZCNT is faster than BSF and is executed
as BSF if the machine doesn't support TZCNT.

Key features:

1. For sizes from 2 to 7 bytes, load as big endian with movbe and bswap
   to avoid branches.
2. Use overlapping compares to avoid branches.
3. Use vector compares when size >= 4 bytes for memcmp or size >= 8
   bytes for wmemcmp.
4. If size is 8 * VEC_SIZE or less, unroll the loop.
5. Compare 4 * VEC_SIZE at a time with the aligned first memory area.
6. Use 2 vector compares when size is 2 * VEC_SIZE or less.
7. Use 4 vector compares when size is 4 * VEC_SIZE or less.
8. Use 8 vector compares when size is 8 * VEC_SIZE or less.

	* sysdeps/x86/cpu-features.h (index_cpu_MOVBE): New.
	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	memcmp-avx2 and wmemcmp-avx2.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Test __memcmp_avx2 and __wmemcmp_avx2.
	* sysdeps/x86_64/multiarch/memcmp-avx2.S: New file.
	* sysdeps/x86_64/multiarch/wmemcmp-avx2.S: Likewise.
	* sysdeps/x86_64/multiarch/memcmp.S: Use __memcmp_avx2 on AVX
	2 machines if AVX unaligned load is fast and vzeroupper is
	preferred.
	* sysdeps/x86_64/multiarch/wmemcmp.S: Use __wmemcmp_avx2 on AVX
	2 machines if AVX unaligned load is fast and vzeroupper is
	preferred.
---
 ChangeLog | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

(limited to 'ChangeLog')

diff --git a/ChangeLog b/ChangeLog
index 55c708a12e..303e1892e4 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,19 @@
+2017-06-05  H.J. Lu
+
+	* sysdeps/x86/cpu-features.h (index_cpu_MOVBE): New.
+	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
+	memcmp-avx2 and wmemcmp-avx2.
+	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
+	(__libc_ifunc_impl_list): Test __memcmp_avx2 and __wmemcmp_avx2.
+	* sysdeps/x86_64/multiarch/memcmp-avx2.S: New file.
+	* sysdeps/x86_64/multiarch/wmemcmp-avx2.S: Likewise.
+	* sysdeps/x86_64/multiarch/memcmp.S: Use __memcmp_avx2 on AVX
+	2 machines if AVX unaligned load is fast and vzeroupper is
+	preferred.
+	* sysdeps/x86_64/multiarch/wmemcmp.S: Use __wmemcmp_avx2 on AVX
+	2 machines if AVX unaligned load is fast and vzeroupper is
+	preferred.
+
 2017-06-05  H.J. Lu

 	* include/wchar.h (__wmemset_chk): New.
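
Editor's note: a minimal C sketch of the idea behind key features 1 and 2
(big-endian loads plus an overlapping compare) for the 4 to 7 byte case.
The helper name memcmp_4_to_7 and the use of __builtin_bswap32 are
illustrative assumptions; the committed routine is hand-written assembly
using movbe and also covers 2 and 3 byte sizes with 16-bit loads.

/* Sketch of the big-endian load + overlapping compare idea for sizes
   4..7.  The head load covers bytes [0, 4) and the tail load covers
   bytes [n - 4, n), so every byte is read without a length-dependent
   branch.  Byte swapping makes unsigned integer order agree with
   memcmp byte order.  */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

static int
memcmp_4_to_7 (const void *s1, const void *s2, size_t n)
{
  uint32_t a_head, a_tail, b_head, b_tail;
  memcpy (&a_head, s1, 4);
  memcpy (&a_tail, (const char *) s1 + n - 4, 4);
  memcpy (&b_head, s2, 4);
  memcpy (&b_tail, (const char *) s2 + n - 4, 4);

  /* __builtin_bswap32 typically compiles to bswap or movbe.  */
  uint64_t a = ((uint64_t) __builtin_bswap32 (a_head) << 32)
	       | __builtin_bswap32 (a_tail);
  uint64_t b = ((uint64_t) __builtin_bswap32 (b_head) << 32)
	       | __builtin_bswap32 (b_tail);

  return a < b ? -1 : a > b;
}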
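
A similar sketch of one 32-byte step of the vector compare loop, written
with AVX2 intrinsics rather than the committed assembly.  The helper
compare_32_bytes is hypothetical and assumes compilation with -mavx2.

/* One 32-byte AVX2 comparison step: compare for byte equality, take a
   byte mask, and if any byte differs use the trailing zero count of
   the inverted mask (TZCNT/BSF) to index the first mismatch.  */
#include <immintrin.h>

static int
compare_32_bytes (const unsigned char *s1, const unsigned char *s2)
{
  __m256i a = _mm256_loadu_si256 ((const __m256i *) s1);
  __m256i b = _mm256_loadu_si256 ((const __m256i *) s2);

  /* vpcmpeqb: 0xff in each byte lane where s1 and s2 agree.  */
  __m256i eq = _mm256_cmpeq_epi8 (a, b);
  unsigned int mask = (unsigned int) _mm256_movemask_epi8 (eq);

  if (mask == 0xffffffffU)
    return 0;			/* All 32 bytes are equal.  */

  /* ~mask is non-zero here, so the trailing zero count is the index
     of the first differing byte; TZCNT matches BSF for this input.  */
  unsigned int idx = (unsigned int) __builtin_ctz (~mask);
  return s1[idx] - s2[idx];
}

With intrinsics the compiler is responsible for inserting vzeroupper at
function boundaries, which is why the ifunc selector only picks the AVX2
version on machines where vzeroupper is preferred.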