Commit:    10624a97e8e47004985740cbb04060a84cfada76
Parent:    6f3e54d404cfe1ba7d1444e6dfcfd77b102d9287
Tree:      761c8f267ae5156350d98e5edf019466f5ba2df0
Author:    Matheus Castanho <msc@linux.ibm.com>  2020-09-29 15:40:08 -0300
Committer: Matheus Castanho <msc@linux.ibm.com>  2021-04-22 16:18:06 -0300

powerpc: Add optimized strlen for POWER10
Improvements compared to POWER9 version:

1. Take into account first 16B comparison for aligned strings

   The previous version compares the first 16B and increments r4 by the number
   of bytes until the address is 16B-aligned, then starts doing aligned loads at
   that address. For aligned strings, this causes the first 16B to be compared
   twice, because the increment is 0. Here we calculate the next 16B-aligned
   address differently, which avoids that issue.
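
   As a hedged illustration of that address calculation (plain C, with
   illustrative function names and a pointer standing in for r4; the real
   code does this with register arithmetic in assembly):

      #include <stdint.h>

      /* Old approach: advance by the distance to the next 16B boundary.
         For an already-aligned s that distance is 0, so the first 16B
         would be loaded and compared a second time.  */
      static inline const char *
      old_next_block (const char *s)
      {
        uintptr_t p = (uintptr_t) s;
        return (const char *) (p + ((16 - (p & 15)) & 15));
      }

      /* New approach: round down to the current 16B block and step past
         it, so an aligned s moves straight to the following block.  */
      static inline const char *
      new_next_block (const char *s)
      {
        uintptr_t p = (uintptr_t) s;
        return (const char *) ((p & ~(uintptr_t) 15) + 16);
      }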

2. Use simple comparisons for the first ~192 bytes

   The main loop is good for big strings, but comparing 16B at a time is
   better for smaller strings.  So after aligning the address to 16 bytes,
   we check a further 176B in 16B chunks.  These checks may overlap with
   the main loop for unaligned strings, but they avoid switching to the
   more aggressive strategy too soon, and they also allow the loop to
   start at a 64B-aligned address.  This greatly benefits smaller strings
   and avoids overlapping checks if the string is already aligned at a
   64B boundary.
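
   A rough C sketch of this layout, with scalar byte checks standing in
   for the 16B vector comparisons (the function name and the final loop
   are illustrative, not the actual implementation):

      #include <stddef.h>
      #include <stdint.h>

      static size_t
      strlen_sketch (const char *s)
      {
        /* Check the (possibly unaligned) first 16B.  */
        for (size_t i = 0; i < 16; i++)
          if (s[i] == '\0')
            return i;

        /* Move to the next 16B boundary and check 176 more bytes in
           eleven 16B chunks.  For unaligned strings some of these bytes
           may be checked again by the main loop.  */
        const char *p =
          (const char *) (((uintptr_t) s & ~(uintptr_t) 15) + 16);
        for (int chunk = 0; chunk < 11; chunk++, p += 16)
          for (size_t i = 0; i < 16; i++)
            if (p[i] == '\0')
              return (size_t) (p + i - s);

        /* Main loop.  If s was 64B-aligned, p is again 64B-aligned here
           (16 + 176 = 192 = 3 * 64), so nothing is rechecked.  A simple
           byte loop stands in for the real 64B vector loop.  */
        while (*p != '\0')
          p++;
        return (size_t) (p - s);
      }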

3. Reduce dependencies between load blocks caused by address calculations in the loop

   Precise timing traces of the code showed that many loads in the loop
   were stalled waiting for updates to r4 from previous code blocks.
   This implementation avoids that as much as possible by using 2
   registers (r4 and r5) to hold the addresses used by different parts
   of the code.

   Also, the previous code aligned the address to 16B, then to 64B by
   running a few iterations of a 48B loop (if needed) until the address
   was aligned.  The main loop could not start until that 48B loop had
   finished and r4 was updated with the current address.  Here we
   calculate the address used by the loop very early, so it can start
   sooner.

   The main loop now uses 2 pointers 128B apart to make pointer updates less
   frequent, and also unrolls 1 iteration to guarantee there is enough time
   between iterations to update the pointers, reducing stalled cycles.
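
   The loop shape, as a hedged C sketch (check_64b is a hypothetical
   stand-in for one 64B vector block check, and the block sizes covered
   per pass are illustrative, not taken from the assembly):

      #include <stdbool.h>

      /* Hypothetical helper: true if any of the 64 bytes at p is zero.
         The real code does this with vector loads and compares.  */
      static bool
      check_64b (const char *p)
      {
        for (int i = 0; i < 64; i++)
          if (p[i] == '\0')
            return true;
        return false;
      }

      /* Two pointers kept 128B apart; one iteration is unrolled so each
         pointer is bumped only once per pass, giving the loads time to
         see the updated addresses.  Returns the start of the 128B region
         that contains the first zero byte.  */
      static const char *
      find_zero_region (const char *loop_start)
      {
        const char *p1 = loop_start;        /* covers [p1, p1 + 128)  */
        const char *p2 = loop_start + 128;  /* covers [p2, p2 + 128)  */
        for (;;)
          {
            if (check_64b (p1) || check_64b (p1 + 64))
              return p1;
            if (check_64b (p2) || check_64b (p2 + 64))
              return p2;
            p1 += 256;
            p2 += 256;
          }
      }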

4. Use new P10 instructions

   lxvp is used to load 32B with a single instruction, reducing contention in
   the load queue.

   vextractbm simplifies the tail code of the loop, replacing vbpermq and
   avoiding the need to generate a permute control vector.
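
   Conceptually, the mask-based tail boils down to the following C
   (illustrative only; the function name is made up, and bit ordering and
   the exact instruction sequence in the POWER10 code differ):

      #include <stdint.h>

      /* Build a bitmask with one bit per byte of a 16B chunk, set where
         the byte is zero, then count bits up to the first set one.  This
         is what a byte-mask extraction plus a trailing-zero count achieve
         without a permute control vector.  Returns 16 if no byte is
         zero.  */
      static unsigned int
      first_zero_index (const unsigned char *chunk)
      {
        uint32_t mask = 0;
        for (int i = 0; i < 16; i++)
          if (chunk[i] == 0)
            mask |= (uint32_t) 1 << i;
        return mask == 0 ? 16u : (unsigned int) __builtin_ctz (mask);
      }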

Reviewed-by: Paul E Murphy <murphyp@linux.ibm.com>
Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
Reviewed-by: Lucas A. M. Magalhaes <lamm@linux.ibm.com>