path: root/sysdeps/x86_64/multiarch/memmove-avx-unaligned-erms-rtm.S
Commit message | Author | Age | Files | Lines
* x86: Update memmove to use new VEC macros | Noah Goldstein | 2022-10-14 | 1 | -10/+1
Replace %VEC(n) -> %VMM(n). This commit does not change libc.so. Tested build on x86-64.
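As a hedged illustration of what the rename buys (not code from the commit; glibc's real definitions live in its x86 vector headers): writing the copy routines against a width-agnostic register macro lets the same source assemble for several vector ISAs. Assuming an AVX build where the macro maps to the 256-bit ymm registers, a use site could look like:

	/* Sketch only: assume VMM(n) expands to the ymm registers in an
	   AVX build; retargeting to another vector width means changing
	   the macro, not every use site.  */
	#define VMM(n)	%ymm##n

	.text
	.globl	copy32_sketch
	.type	copy32_sketch, @function
copy32_sketch:
	vmovdqu	(%rsi), VMM(0)	/* load 32 bytes from src (%rsi) */
	vmovdqu	VMM(0), (%rdi)	/* store 32 bytes to dst (%rdi) */
	vzeroupper
	ret
	.size	copy32_sketch, .-copy32_sketch

(Assembles with `gcc -c -x assembler-with-cpp`, since the macro needs the C preprocessor.)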
* x86: Optimize memmove-vec-unaligned-erms.S | Noah Goldstein | 2021-11-06 | 1 | -1/+1
No bug. The optimizations are as follows:

1) Always align entry to 64 bytes. This makes behavior more predictable and makes other frontend optimizations easier.

2) Make the L(more_8x_vec) cases 4k aliasing aware. This can have significant benefits in the case that: 0 < (dst - src) < [256, 512]

3) Align before `rep movsb`. For ERMS this is roughly a [0, 30%] improvement and for FSRM [-10%, 25%].

In addition to these primary changes there is general cleanup throughout to optimize the aligning routines and control flow logic.

Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
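A hedged sketch of point 3 only, not the commit's actual code (register roles are assumed: %rdi = dst, %rsi = src, %rdx = length >= 64, non-overlapping buffers): copy a 64-byte head with unaligned vector stores, then round the destination up to a 64-byte boundary before handing the remainder to `rep movsb`, so the string copy runs with an aligned destination:

	/* Sketch: align dst before `rep movsb`.  Assumes rdi = dst,
	   rsi = src, rdx = len >= 64, and the buffers do not overlap.  */
	.text
	.globl	memmove_fwd_sketch
	.type	memmove_fwd_sketch, @function
memmove_fwd_sketch:
	vmovdqu	(%rsi), %ymm0		/* copy the 64-byte head unaligned */
	vmovdqu	32(%rsi), %ymm1
	vmovdqu	%ymm0, (%rdi)
	vmovdqu	%ymm1, 32(%rdi)
	leaq	64(%rdi), %rcx
	andq	$-64, %rcx		/* first 64B-aligned byte past dst */
	subq	%rdi, %rcx		/* bytes the head already covered (1..64) */
	addq	%rcx, %rsi		/* advance src by the same amount */
	addq	%rcx, %rdi		/* dst is now 64-byte aligned */
	subq	%rcx, %rdx		/* remaining length */
	movq	%rdx, %rcx		/* rep movsb takes its count in rcx */
	rep	movsb			/* string copy with an aligned dst */
	vzeroupper
	ret
	.size	memmove_fwd_sketch, .-memmove_fwd_sketch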
* x86-64: Add AVX optimized string/memory functions for RTM | H.J. Lu | 2021-03-29 | 1 | -0/+17
Since VZEROUPPER triggers RTM abort while VZEROALL won't, select AVX optimized string/memory functions with

	xtest
	jz	1f
	vzeroall
	ret
1:
	vzeroupper
	ret

at function exit on processors with usable RTM, but without 256-bit EVEX instructions, to avoid VZEROUPPER inside a transactionally executing RTM region.
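For context, the quoted sequence dropped into a complete function might look like the sketch below (the function name and surrounding body are made up; glibc wraps the pattern in a return macro rather than open-coding it at every site). XTEST sets ZF when no transaction is active, so the jump takes the ordinary VZEROUPPER exit, while the fall-through path, which runs inside a transaction, uses VZEROALL and avoids the abort:

	/* Sketch: RTM-safe exit from an AVX function.  */
	.text
	.globl	avx_rtm_safe_return_sketch
	.type	avx_rtm_safe_return_sketch, @function
avx_rtm_safe_return_sketch:
	vpxor	%ymm0, %ymm0, %ymm0	/* stand-in for real 256-bit work */
	xtest				/* ZF = 0 inside an RTM transaction */
	jz	1f			/* not transactional: normal exit */
	vzeroall			/* transactional: no RTM abort */
	ret
1:
	vzeroupper			/* clears upper halves, cheaper */
	ret
	.size	avx_rtm_safe_return_sketch, .-avx_rtm_safe_return_sketch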