author:    Lucas A. M. Magalhaes <lamm@linux.ibm.com>  2021-05-06 17:01:52 -0300
committer: Lucas A. M. Magalhaes <lamm@linux.ibm.com>  2021-05-31 18:00:20 -0300
commit:    a55e2da2702e235fa0ae66a116d304d1bffc060a (patch)
tree:      9d37486f43447faaf707a2ffbb8eb825c5cdafeb /sysdeps/powerpc/powerpc64/multiarch/memcmp-ppc64.c
parent:    92a7d1343991897f77afe01041f3b77712445e47 (diff)
powerpc: Optimized memcmp for power10
This patch is based on __memcmp_power8 and the recent __strlen_power10. Improvements over __memcmp_power8:

1. No alignment code is needed.
   On POWER10, lxvp and lxvl do not generate alignment interrupts, so they are safe for use on caching-inhibited memory. Note that the comparison in the main loop waits for both VSRs to be ready, so aligning only one of the input addresses does not improve performance, and aligning both would require a vperm, which adds too much overhead.

2. Uses new POWER10 instructions.
   This code uses lxvp to decrease contention on load by loading 32 bytes per instruction. vextractbm is used to shorten the tail code that computes the return value.

3. Performance improvement.
   This version performs around 35% better on average, with no performance regressions observed for any length or alignment.

Thanks to Matheus for helping me out with some details.

Co-authored-by: Matheus Castanho <msc@linux.ibm.com>
Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
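For reference, the tail computation the commit message describes can be sketched in scalar C. This is an illustrative sketch only, not the POWER10 assembly: once the vectorized main loop finds a block containing a mismatch, the tail code must locate the first differing byte and return its signed difference, which is what condensing a vector byte-compare result into a bitmask (via vextractbm) accelerates. The function name `tail_compare` is hypothetical.

```c
#include <stddef.h>

/* Scalar sketch of the memcmp tail: find the first differing byte in
   a block of n bytes and return the signed difference of the two
   bytes, as memcmp semantics require.  On POWER10, vextractbm turns a
   vector byte-compare result into a bitmask whose first set bit gives
   this index directly, avoiding a byte-by-byte loop. */
static int tail_compare(const unsigned char *a, const unsigned char *b,
                        size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            return (int)a[i] - (int)b[i];
    return 0;
}
```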
Diffstat (limited to 'sysdeps/powerpc/powerpc64/multiarch/memcmp-ppc64.c')
0 files changed, 0 insertions, 0 deletions