author | H.J. Lu <hjl.tools@gmail.com> | 2017-05-22 15:09:50 -0700
---|---|---
committer | H.J. Lu <hjl.tools@gmail.com> | 2017-06-05 15:09:59 -0700
commit | 2aa22acfbbbb26a2e585ff62fef1ebdd290d9d85 (patch) |
tree | 7cc09ec3a5efc43064a817f39905cd5597d7bbb8 /sysdeps/wordsize-32 |
parent | b38361c9a6da5aea0234a9c31ce63fec93d0fc86 (diff) |
x86-64: Optimize strchr/strchrnul/wcschr with AVX2
Optimize strchr/strchrnul/wcschr with AVX2 to search 32 bytes at a time with vector instructions. It is as fast as the SSE2 versions for size <= 16 bytes and up to 1X faster for size > 16 bytes on Haswell. Select the AVX2 version on AVX2 machines where vzeroupper is preferred and AVX unaligned load is fast.

NB: It uses TZCNT instead of BSF since TZCNT produces the same result as BSF for non-zero input. TZCNT is faster than BSF and is executed as BSF on machines that don't support TZCNT.

* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add strchr-sse2, strchrnul-sse2, strchr-avx2, strchrnul-avx2, wcschr-sse2 and wcschr-avx2.
* sysdeps/x86_64/multiarch/ifunc-impl-list.c (__libc_ifunc_impl_list): Add tests for __strchr_avx2, __strchrnul_avx2, __strchrnul_sse2, __wcschr_avx2 and __wcschr_sse2.
* sysdeps/x86_64/multiarch/strchr-avx2.S: New file.
* sysdeps/x86_64/multiarch/strchr-sse2.S: Likewise.
* sysdeps/x86_64/multiarch/strchr.c: New file.
* sysdeps/x86_64/multiarch/strchrnul-avx2.S: Likewise.
* sysdeps/x86_64/multiarch/strchrnul-sse2.S: Likewise.
* sysdeps/x86_64/multiarch/strchrnul.c: Likewise.
* sysdeps/x86_64/multiarch/wcschr-avx2.S: Likewise.
* sysdeps/x86_64/multiarch/wcschr-sse2.S: Likewise.
* sysdeps/x86_64/multiarch/wcschr.c: Likewise.
* sysdeps/x86_64/multiarch/strchr.S: Removed.
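The 32-byte-block-plus-TZCNT structure of the AVX2 loop can be modeled in scalar C. This is an illustrative sketch, not the glibc assembly: `model_strchrnul` and its byte loop are hypothetical stand-ins for the VPCMPEQB/VPMOVMSKB mask computation, and GCC/Clang's `__builtin_ctz` stands in for TZCNT.

```c
#include <stdint.h>

/* Scalar model of the AVX2 strchrnul loop (not glibc's code): each
   iteration inspects a 32-byte block, builds a bitmask with one bit per
   byte that equals the target character or NUL, and, when the mask is
   non-zero, finds the first hit with a trailing-zero count (TZCNT in the
   real code).  The vector version computes the same mask with
   VPCMPEQB + VPMOVMSKB instead of this byte loop. */
static const char *model_strchrnul(const char *s, int c)
{
    for (;;) {
        uint32_t mask = 0;
        for (int i = 0; i < 32; i++) {
            if (s[i] == (char) c)
                mask |= UINT32_C(1) << i;
            if (s[i] == '\0') {
                mask |= UINT32_C(1) << i;
                break;              /* don't scan past the terminator */
            }
        }
        if (mask != 0)
            return s + __builtin_ctz(mask);  /* index of first match */
        s += 32;                             /* advance to next block */
    }
}
```

strchr differs from this only at the end: it returns NULL when the first set bit corresponds to the NUL byte rather than the target character, while strchrnul returns the pointer either way. The TZCNT note in the message applies here: the mask is always non-zero when the count is taken, which is exactly the case where TZCNT and BSF agree.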
Diffstat (limited to 'sysdeps/wordsize-32')
0 files changed, 0 insertions, 0 deletions