author    H.J. Lu <hjl.tools@gmail.com>  2017-05-22 15:09:50 -0700
committer H.J. Lu <hjl.tools@gmail.com>  2017-06-08 05:07:18 -0700
commit    e0f20b5a54a803ab753f2e2cc1fce7729fa23f81 (patch)
tree      efc375c0e6a600533d48bd6f8abbe386b3f4927e /posix/tst-preadwrite.c
parent    83c22fae4287c5fb9c3c43edc16779dcfbc2fa7b (diff)
x86-64: Optimize strchr/strchrnul/wcschr with AVX2
Optimize strchr/strchrnul/wcschr with AVX2 to search 32 bytes at a time with
vector instructions.  It is as fast as the SSE2 versions for size <= 16 bytes
and up to 1X faster for size > 16 bytes on Haswell.  Select the AVX2 version
on AVX2 machines where vzeroupper is preferred and AVX unaligned loads are
fast.

NB: It uses TZCNT instead of BSF since TZCNT produces the same result
as BSF for non-zero input.  TZCNT is faster than BSF and is executed
as BSF on machines that don't support TZCNT.
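The per-block search described above can be sketched in portable C as a
scalar model (the function name and the byte-by-byte emulation are
illustrative only; the real code uses AVX2 compare/movemask instructions,
not a loop):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative scalar model of one 32-byte AVX2 iteration: compare each
   byte against the target character and against NUL, gather the results
   into a 32-bit mask (VPMOVMSKB in the real code), then take the
   trailing-zero count (TZCNT) of the mask to locate the first match.  */
static const char *
search32 (const char *p, char c)
{
  uint32_t mask = 0;
  for (int i = 0; i < 32; i++)
    if (p[i] == c || p[i] == '\0')
      mask |= (uint32_t) 1 << i;
  if (mask == 0)
    return NULL;		/* No match or NUL in this block.  */
  /* __builtin_ctz is only defined for non-zero input, mirroring the
     TZCNT-vs-BSF note above: both give the same answer here.  */
  return p + __builtin_ctz (mask);
}
```

Like strchrnul, this model returns a pointer to the terminating NUL when
the character is absent; strchr would additionally check whether the byte
found is NUL and return NULL in that case.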

	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	strchr-sse2, strchrnul-sse2, strchr-avx2, strchrnul-avx2,
	wcschr-sse2 and wcschr-avx2.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Add tests for __strchr_avx2,
	__strchrnul_avx2, __strchrnul_sse2, __wcschr_avx2 and
	__wcschr_sse2.
	* sysdeps/x86_64/multiarch/strchr-avx2.S: New file.
	* sysdeps/x86_64/multiarch/strchr-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/strchr.c: Likewise.
	* sysdeps/x86_64/multiarch/strchrnul-avx2.S: Likewise.
	* sysdeps/x86_64/multiarch/strchrnul-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/strchrnul.c: Likewise.
	* sysdeps/x86_64/multiarch/wcschr-avx2.S: Likewise.
	* sysdeps/x86_64/multiarch/wcschr-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/wcschr.c: Likewise.
	* sysdeps/x86_64/multiarch/strchr.S: Removed.