author | Adhemerval Zanella <adhemerval.zanella@linaro.org> | 2023-01-10 18:00:59 -0300 |
---|---|---|
committer | Adhemerval Zanella <adhemerval.zanella@linaro.org> | 2023-02-06 16:19:35 -0300 |
commit | 2a8867a17ffe5c5a4251fd40bf6c73a3fd426062 (patch) | |
tree | 2b533dae74065199f0d1eaa8be5b385685dfce5e /sysdeps/powerpc/powerpc64/multiarch | |
parent | 3709ed904770b440d68385f3da259008cdf642a6 (diff) | |
string: Improve generic memchr
The new algorithm reads the first aligned address and masks off the unwanted bytes (this strategy is similar to the arch-specific implementations used on powerpc, sparc, and sh). The loop then reads one word-aligned address at a time and checks it with the has_eq macro.

Checked on x86_64-linux-gnu, i686-linux-gnu, powerpc-linux-gnu, and powerpc64-linux-gnu by removing the arch-specific assembly implementations and disabling multi-arch (this covers both LE and BE for 64 and 32 bits).

Co-authored-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
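For illustration, below is a minimal little-endian C sketch of the word-at-a-time technique the commit message describes: read the first aligned word, force the bytes that precede the start of the buffer to look like non-matches, then scan one full word per loop iteration with a zero-byte test. The helper names (repeat_byte, has_zero, first_zero_byte, memchr_sketch) are hypothetical and only approximate the generic string helpers; this is not the glibc implementation.

```c
#include <stddef.h>
#include <stdint.h>

typedef unsigned long word_t;                 /* one register-sized chunk of bytes */
#define WORD_BYTES (sizeof (word_t))

/* Replicate byte C into every byte position of a word.  */
static inline word_t
repeat_byte (unsigned char c)
{
  return ((word_t) -1 / 0xff) * c;            /* 0x0101...01 * c */
}

/* Nonzero iff some byte of X is zero.  Subtracting 1 from every byte makes a
   zero byte wrap and set its high bit, and ~X keeps only bytes whose own high
   bit was clear; only a zero byte satisfies both conditions.  */
static inline word_t
has_zero (word_t x)
{
  word_t lsb = (word_t) -1 / 0xff;            /* 0x0101...01 */
  word_t msb = lsb << 7;                      /* 0x8080...80 */
  return (x - lsb) & ~x & msb;
}

/* Byte index of the first (lowest-addressed) zero byte of X.  */
static inline size_t
first_zero_byte (word_t x)
{
  for (size_t i = 0; i < WORD_BYTES; i++)
    if (((x >> (i * 8)) & 0xff) == 0)         /* little-endian byte order assumed */
      return i;
  return WORD_BYTES;
}

/* Word-at-a-time memchr sketch.  Like the real implementations, it reads whole
   aligned words, which may touch bytes outside [s, s+n) but never crosses a
   page boundary; a production version also needs __attribute__ ((may_alias))
   or similar so the word accesses are well defined.  */
void *
memchr_sketch (const void *s, int c_in, size_t n)
{
  if (n == 0)
    return NULL;

  unsigned char c = (unsigned char) c_in;
  word_t repeated_c = repeat_byte (c);
  const unsigned char *end = (const unsigned char *) s + n;

  /* Round S down to a word boundary and note how many bytes of that first
     word lie before the start of the buffer.  */
  uintptr_t addr = (uintptr_t) s;
  const word_t *w = (const word_t *) (addr & ~(uintptr_t) (WORD_BYTES - 1));
  size_t skip = addr % WORD_BYTES;

  /* XOR turns matching bytes into zero bytes; force the SKIP leading bytes to
     a nonzero value so they can never be reported as matches.  */
  word_t cmp = *w ^ repeated_c;
  if (skip != 0)
    cmp |= ((word_t) 1 << (skip * 8)) - 1;

  /* Main loop: one aligned word per iteration.  */
  while (!has_zero (cmp))
    {
      w++;
      if ((const unsigned char *) w >= end)
        return NULL;
      cmp = *w ^ repeated_c;
    }

  /* A zero byte in CMP marks a match; it may still lie past the end.  */
  const unsigned char *ret = (const unsigned char *) w + first_zero_byte (cmp);
  return ret < end ? (void *) ret : NULL;
}
```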
Diffstat (limited to 'sysdeps/powerpc/powerpc64/multiarch')
-rw-r--r-- | sysdeps/powerpc/powerpc64/multiarch/memchr-ppc64.c | 9 |
1 file changed, 1 insertion, 8 deletions
diff --git a/sysdeps/powerpc/powerpc64/multiarch/memchr-ppc64.c b/sysdeps/powerpc/powerpc64/multiarch/memchr-ppc64.c
index 8097df709c..49ba5521fe 100644
--- a/sysdeps/powerpc/powerpc64/multiarch/memchr-ppc64.c
+++ b/sysdeps/powerpc/powerpc64/multiarch/memchr-ppc64.c
@@ -18,14 +18,7 @@
 
 #include <string.h>
 
-#define MEMCHR __memchr_ppc
-
-#undef weak_alias
-#define weak_alias(a, b)
-
-# undef libc_hidden_builtin_def
-# define libc_hidden_builtin_def(name)
-
 extern __typeof (memchr) __memchr_ppc attribute_hidden;
+#define MEMCHR __memchr_ppc
 
 #include <string/memchr.c>
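After this hunk is applied, the multiarch wrapper is reduced to declaring the hidden __memchr_ppc symbol and renaming the generic routine before including it; the weak_alias and libc_hidden_builtin_def overrides removed above are no longer carried in the wrapper. The resulting body of the file (the copyright header above the hunk is omitted here) looks like:

```c
/* sysdeps/powerpc/powerpc64/multiarch/memchr-ppc64.c after the patch,
   copyright header omitted.  */
#include <string.h>

extern __typeof (memchr) __memchr_ppc attribute_hidden;
#define MEMCHR __memchr_ppc

#include <string/memchr.c>
```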