path: root/sysdeps/loongarch/lp64/multiarch
Commit message (author, age, files changed, lines changed)
* LoongArch: Add ifunc support for strcpy, stpcpy{aligned, unaligned, lsx, lasx} (dengjianbo, 2023-09-15, 12 files changed, 963 insertions(+))
  According to glibc strcpy and stpcpy microbenchmark results (changed to use
  generic_strcpy and generic_stpcpy instead of strlen + memcpy), compared with
  the generic version this implementation reduces the runtime as follows:

  Name              Percent of runtime reduced
  strcpy-aligned    8%-45%
  strcpy-unaligned  8%-48%
  strcpy-lsx        20%-80%
  strcpy-lasx       15%-86%
  stpcpy-aligned    6%-43%
  stpcpy-unaligned  8%-48%
  stpcpy-lsx        10%-80%
  stpcpy-lasx       10%-87%

  Compared with the aligned version, the unaligned version needs fewer
  instructions to copy a tail shorter than 8 bytes, and it also performs
  better when src and dest cannot both be aligned to 8 bytes.
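  All of these commits rely on GNU indirect functions (ifuncs) to pick an
  implementation at load time. As a rough, non-authoritative sketch of that
  mechanism only, the C example below selects among hypothetical strcpy
  variants from AT_HWCAP on an ELF target with GNU ifunc support; the names
  (my_strcpy, *_impl) are made up for illustration, the hwcap bit values
  mirror the Linux uapi <asm/hwcap.h> definitions for LoongArch, and glibc's
  own resolvers use its internal cpu-features data rather than getauxval.

  #include <string.h>
  #include <sys/auxv.h>

  /* LSX/LASX hwcap bits; values mirror the Linux uapi <asm/hwcap.h>
     definitions for LoongArch (assumption, defined only if missing).  */
  #ifndef HWCAP_LOONGARCH_LSX
  # define HWCAP_LOONGARCH_LSX  (1 << 4)
  #endif
  #ifndef HWCAP_LOONGARCH_LASX
  # define HWCAP_LOONGARCH_LASX (1 << 5)
  #endif

  typedef char *strcpy_fn (char *, const char *);

  /* Stand-ins for the assembly variants added by the commit above;
     here they simply forward to the system strcpy.  */
  static char *strcpy_lasx_impl (char *d, const char *s)      { return strcpy (d, s); }
  static char *strcpy_lsx_impl (char *d, const char *s)       { return strcpy (d, s); }
  static char *strcpy_unaligned_impl (char *d, const char *s) { return strcpy (d, s); }

  /* Resolver: runs once, when the dynamic linker processes the IRELATIVE
     relocation, and returns the variant matching the CPU's capabilities.  */
  static strcpy_fn *
  my_strcpy_resolver (void)
  {
    unsigned long hwcap = getauxval (AT_HWCAP);

    if (hwcap & HWCAP_LOONGARCH_LASX)
      return strcpy_lasx_impl;
    if (hwcap & HWCAP_LOONGARCH_LSX)
      return strcpy_lsx_impl;
    return strcpy_unaligned_impl;
  }

  /* Every call to my_strcpy is bound to whatever the resolver returned.  */
  char *my_strcpy (char *d, const char *s)
    __attribute__ ((ifunc ("my_strcpy_resolver")));

  The benefit over a plain function pointer is that the choice is made once at
  relocation time, so later calls carry no per-call capability check.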
* LoongArch: Change loongarch to LoongArch in comments (dengjianbo, 2023-08-29, 24 files changed, 24 insertions(+), 24 deletions(-))
* LoongArch: Add ifunc support for memcmp{aligned, lsx, lasx} (dengjianbo, 2023-08-29, 7 files changed, 861 insertions(+))
  According to glibc memcmp microbenchmark results (with a generic memcmp
  added), this implementation improves performance except when the length is
  less than 3; details below:

  Name            Percent of runtime reduced
  memcmp-lasx     16%-74%
  memcmp-lsx      20%-50%
  memcmp-aligned  5%-20%
* LoongArch: Add ifunc support for memset{aligned, unaligned, lsx, lasx} (dengjianbo, 2023-08-29, 8 files changed, 688 insertions(+))
  According to glibc memset microbenchmark results, a few cases with length
  less than 8 regress for the LSX and LASX versions; overall, the LASX version
  reduces the runtime by about 15%-75% and the LSX version by about 15%-50%.
  The unaligned version uses unaligned memory accesses to set data shorter
  than 64 bytes and to bring the address to 8-byte alignment; for this part it
  performs better than the aligned version. Compared with the generic version,
  performance is close once the length exceeds 128. For lengths of 8-128, the
  unaligned version reduces the runtime by about 30%-70% and the aligned
  version by about 20%-50%.
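  As a rough C rendering of the alignment trick described above (the real
  versions are hand-written assembly, and this function name is hypothetical):
  one unaligned 8-byte store covers the misaligned head, the pointer is then
  rounded up to an 8-byte boundary so the main loop runs aligned, and a final
  possibly overlapping store finishes the tail.

  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  /* Hypothetical sketch: the misaligned head and the short tail are covered
     by unaligned 8-byte stores that may overlap bytes written by the aligned
     loop, which is harmless for memset.  */
  void *
  memset_unaligned_sketch (void *dest, int c, size_t n)
  {
    unsigned char *d = dest;
    unsigned char *end = d + n;
    uint64_t pattern = 0x0101010101010101ULL * (unsigned char) c;

    if (n < 8)
      {
        /* Very short lengths: a plain byte loop keeps the sketch simple.  */
        while (d < end)
          *d++ = (unsigned char) c;
        return dest;
      }

    /* One unaligned 8-byte store covers the (possibly) misaligned head.  */
    memcpy (d, &pattern, 8);

    /* Round up to the next 8-byte boundary and run the aligned loop.  */
    d = (unsigned char *) (((uintptr_t) d + 8) & ~(uintptr_t) 7);
    while (d + 8 <= end)
      {
        memcpy (d, &pattern, 8);
        d += 8;
      }

    /* A final unaligned store, possibly overlapping, finishes the tail.  */
    memcpy (end - 8, &pattern, 8);
    return dest;
  }

  The deliberate overlap at the head and tail is what lets the hot loop run
  entirely on aligned stores without per-byte fix-up branches.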
* LoongArch: Add ifunc support for memrchr{lsx, lasx} (dengjianbo, 2023-08-29, 7 files changed, 335 insertions(+))
  According to the glibc memrchr microbenchmark, this implementation reduces
  the runtime as follows:

  Name          Percent of runtime reduced
  memrchr-lasx  20%-83%
  memrchr-lsx   20%-64%
* LoongArch: Add ifunc support for memchr{aligned, lsx, lasx} (dengjianbo, 2023-08-29, 7 files changed, 401 insertions(+))
  According to the glibc memchr microbenchmark, this implementation reduces
  the runtime as follows:

  Name            Percent of runtime reduced
  memchr-lasx     37%-83%
  memchr-lsx      30%-66%
  memchr-aligned  0%-15%
* LoongArch: Add ifunc support for rawmemchr{aligned, lsx, lasx} (dengjianbo, 2023-08-29, 7 files changed, 365 insertions(+))
  According to the glibc rawmemchr microbenchmark, a few cases tested with the
  char '\0' regress because the LASX and LSX versions do not handle '\0'
  separately. Overall, the rawmemchr-lasx implementation reduces the runtime
  by about 40%-80%, rawmemchr-lsx by about 40%-66%, and rawmemchr-aligned by
  about 20%-40%.
* LoongArch: Add ifunc support for strncmp{aligned, lsx} (dengjianbo, 2023-08-24, 6 files changed, 508 insertions(+))
  Based on the glibc microbenchmark, only a few short inputs regress with the
  strncmp-aligned and strncmp-lsx implementations. Overall, strncmp-aligned
  reduces the runtime by 0%-10% for aligned comparisons and 10%-25% for
  unaligned comparisons, and strncmp-lsx reduces the runtime by about 0%-60%.
* LoongArch: Add ifunc support for strcmp{aligned, lsx} (dengjianbo, 2023-08-24, 6 files changed, 426 insertions(+))
  Based on the glibc microbenchmark, the strcmp-aligned implementation reduces
  the runtime by 0%-10% for aligned comparisons and 10%-20% for unaligned
  comparisons, and the strcmp-lsx implementation reduces the runtime by
  0%-50%.
* LoongArch: Add ifunc support for strnlen{aligned, lsx, lasx} (dengjianbo, 2023-08-24, 7 files changed, 382 insertions(+))
  Based on the glibc microbenchmark, the strnlen-aligned implementation
  reduces the runtime by more than 10%, strnlen-lsx by about 50%-78%, and
  strnlen-lasx by about 50%-88%.
* Loongarch: Add ifunc support for memcpy{aligned, unaligned, lsx, lasx} and memmove{aligned, unaligned, lsx, lasx} (dengjianbo, 2023-08-17, 13 files changed, 2435 insertions(+))
  These implementations improve data-copy times in the glibc microbenchmark
  as follows:

  memcpy-lasx        reduces the runtime by about 8%-76%
  memcpy-lsx         reduces the runtime by about 8%-72%
  memcpy-unaligned   reduces the runtime of unaligned data copying by up to 40%
  memcpy-aligned     reduces the runtime of unaligned data copying by up to 25%
  memmove-lasx       reduces the runtime by about 20%-73%
  memmove-lsx        reduces the runtime by about 50%
  memmove-unaligned  reduces the runtime of unaligned data moving by up to 40%
  memmove-aligned    reduces the runtime of unaligned data moving by up to 25%
* Loongarch: Add ifunc support for strchr{aligned, lsx, lasx} and strchrnul{aligned, lsx, lasx} (dengjianbo, 2023-08-17, 12 files changed, 581 insertions(+))
  These implementations improve the strchr{nul} microbenchmark times in glibc
  as follows:

  strchr-lasx        reduces the runtime by about 50%-83%
  strchr-lsx         reduces the runtime by about 30%-67%
  strchr-aligned     reduces the runtime by about 10%-20%
  strchrnul-lasx     reduces the runtime by about 50%-83%
  strchrnul-lsx      reduces the runtime by about 36%-65%
  strchrnul-aligned  reduces the runtime by about 6%-10%
* Loongarch: Add ifunc support and add different versions of strlen (dengjianbo, 2023-08-14, 7 files changed, 359 insertions(+))
  strlen-lasx is implemented with LASX SIMD instructions (256-bit),
  strlen-lsx is implemented with LSX SIMD instructions (128-bit), and
  strlen-align is implemented with basic LoongArch instructions and never uses
  unaligned memory accesses.
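  For the aligned variant, a rough C sketch of the word-at-a-time idea is
  below; the function name is hypothetical, and the real strlen-align is
  hand-written LoongArch assembly, not this code.

  #include <stddef.h>
  #include <stdint.h>

  /* Read 8 aligned bytes at a time and use the classic
     (w - 0x01..01) & ~w & 0x80..80 test to detect a zero byte.  */
  size_t
  strlen_aligned_sketch (const char *s)
  {
    const char *p = s;

    /* Byte loop until the pointer is 8-byte aligned.  */
    while ((uintptr_t) p & 7)
      {
        if (*p == '\0')
          return (size_t) (p - s);
        p++;
      }

    const uint64_t ones = 0x0101010101010101ULL;
    const uint64_t highs = 0x8080808080808080ULL;
    const uint64_t *w = (const uint64_t *) p;

    /* Scan 8 bytes per iteration; the expression is non-zero exactly when
       the word contains at least one zero byte.  */
    while (((*w - ones) & ~*w & highs) == 0)
      w++;

    /* Locate the exact zero byte inside the final word.  */
    p = (const char *) w;
    while (*p != '\0')
      p++;
    return (size_t) (p - s);
  }

  Keeping every wide load aligned is what allows the variant to run on basic
  integer instructions without ever touching memory across an 8-byte boundary.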