author     Wilco Dijkstra <wdijkstr@arm.com>    2019-06-12 11:42:34 +0100
committer  Wilco Dijkstra <wdijkstr@arm.com>    2019-06-12 11:42:34 +0100
commit     680942b0167715e123d934b609060cd382f8e39f
tree       9c6eb3f252e4046a96d62a29a471275a0af66b06
parent     5e0a7ecb6629461b28adc1a5aabcc0ede122f201
Improve performance of memmem
This patch significantly improves the performance of memmem using a novel modified Horspool algorithm. Needles up to size 256 use a bad-character table indexed by hashed pairs of characters to quickly skip past mismatches. Long needles use a self-adapting filtering step to avoid comparing the whole needle repeatedly. By limiting the needle length to 256, the shift table only requires 8 bits per entry, lowering preprocessing overhead and minimizing cache effects. This limit also implies worst-case performance is linear. Small needles up to size 2 use a dedicated linear search. Very long needles use the Two-Way algorithm (to avoid increasing stack size or slowing down the common case, inlining is disabled).

The performance gain is 6.6 times on English text on AArch64 using random needles with average size 8.

Tested against the GLIBC testsuite and randomized tests.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>

	* string/memmem.c (__memmem): Rewrite to improve performance.