the approach of this implementation was heavily investigated prior to
adopting it. attempts to obtain similar performance with pure C code
were capping out at about 75% of the performance of the asm, with
considerably larger code size, and were fragile in that the compiler
would sometimes compile part of memcpy into a call to itself.
therefore, just using the asm seems to be the best option.

this commit is the first to make use of the new subarch-specific asm
framework. the new armel directory is the location for arm asm that
should not be used for all arm subarchs, only the default one. armhf
is the name of the little-endian hardfloat-ABI subarch, which can use
the exact same asm. in both cases, the build system finds the asm by
following a memcpy.sub file.

the other two subarchs, armeb and armebhf, would need a big-endian
variant of this code. it would not be hard to adapt the code to big
endian, but I will hold off on doing so until there is demand for it.
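a sketch of the .sub indirection described above; the paths and file
contents here are illustrative, not copied from the tree:

    src/string/armel/memcpy.s     (the little-endian asm itself)
    src/string/armel/memcpy.sub   (names the asm source to build,
                                   e.g. the single line "memcpy.s")
    src/string/armhf/memcpy.sub   (points at the same asm)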
the concept of both versions is the same; they differ only in details.
for long runs, they use "rep stosl" or "rep stosq", and for small
runs, they use a trick, writing from both ends towards the middle,
that reduces the number of branches needed. in addition, if memset is
called multiple times with the same length, all branches will be
predicted; there are no loops.

for larger runs, there are likely faster approaches than "rep", at
least on some cpu models. for 32-bit, it's unlikely that there is any
faster approach that does not require non-baseline instructions; doing
anything fancier would require inspecting cpu capabilities. for
64-bit, there may very well be faster versions that work on all
models; further optimization could be explored in the future.

with these changes, memset is anywhere between 50% faster and 6 times
faster, depending on the cpu model and the length and alignment of the
destination buffer.
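a rough C sketch of the small-run strategy (illustrative only, not the
committed asm; the function name is made up):

    #include <stddef.h>

    static void *memset_sketch(void *dest, int c, size_t n)
    {
        unsigned char *s = dest;
        if (!n) return dest;
        /* write from both ends toward the middle; for a given n the
           same branches are always taken, and short runs need no
           loop at all */
        s[0] = s[n-1] = c;
        if (n <= 2) return dest;
        s[1] = s[n-2] = c;
        s[2] = s[n-3] = c;
        if (n <= 6) return dest;
        s[3] = s[n-4] = c;
        if (n <= 8) return dest;
        /* the asm hands runs this long to "rep stos"; plain C
           stand-in for the sketch: */
        for (size_t i = 4; i + 4 < n; i++) s[i] = c;
        return dest;
    }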
there are still several more that are misleading, but SIGFPE (integer
division error misdescribed as floating point) and SIGCHLD (possibly
non-exit status change events described as exiting) were the worst
offenders.
the name format RTnn/RTnnn was chosen to minimize bloat while
uniquely identifying the signal.
GNU used several extensions that were incompatible with C99 and POSIX,
so glibc exported the standard functions under alternate names.
The result is that we need these alternate names to run
standards-conformant programs that were linked with glibc.
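for example, glibc-linked binaries that included <libgen.h> call the
POSIX basename through the symbol __xpg_basename, so a compatible
replacement has to export that name too (a sketch of one instance of
the pattern, not necessarily the exact set added here):

    #include <libgen.h>

    /* same behavior as the POSIX function, under the ABI name that
       glibc-linked binaries actually reference */
    char *__xpg_basename(char *s)
    {
        return basename(s);
    }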
lenl-lenr is not a valid expression for a signed int return value from
strverscmp, since after implicit conversion from size_t to int this
difference could have the wrong sign or might even be zero. using the
difference for char values works since they're bounded well within the
range of differences representable by int, but it does not work for
size_t values.
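one safe formulation derives the sign explicitly instead of
subtracting the lengths (a sketch, not necessarily the committed
code):

    #include <stddef.h>

    /* returning (int)(lenl - lenr) is broken: on a 64-bit target a
       difference like 0x100000000 truncates to 0, and a "negative"
       size_t difference is really a huge positive value */
    static int cmp_len(size_t lenl, size_t lenr)
    {
        return lenl < lenr ? -1 : lenl > lenr;
    }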
patch by Isaac Dunham.
gcc seems to be generating identical or near-identical code for both
versions, but the newer code is more expressive of what it's doing.
based on strstr. passes gnulib tests and a few quick checks of my own.
when strchr fails, an important piece of information that was already
computed, the string length, is thrown away. have strchrnul (with
namespace protection) be the underlying function so this information
can be kept, and let strchr be a wrapper for it. this also allows
strcspn to be considerably faster in the case where the match set has
a single element that's not matched.
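the wrapper relationship, sketched in C (the committed version uses a
namespace-protected name like __strchrnul; the name below is
illustrative):

    #define _GNU_SOURCE
    #include <string.h>

    /* strchrnul returns a pointer either to the match or to the
       terminating null byte; strchr only has to check which one it
       got */
    static char *strchr_sketch(const char *s, int c)
    {
        char *r = strchrnul(s, c);
        return *(unsigned char *)r == (unsigned char)c ? r : 0;
    }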
testing with gcc 4.6.3 on x86, -Os, the old version does a duplicate
null byte check after the first loop. this is purely the compiler
being stupid, but the old code was also stupid and unintuitive in how
it expressed the check.
for the sake of simplicity, I've only used rep movsb rather than
breaking up the copy to use rep movsd/q. on all modern cpus, this
seems to be fine, but if there are performance problems, there might
be a need to go back and add support for rep movsd/q.
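a GNU C inline-asm sketch of the whole-copy form (illustrative, not
the committed asm):

    #include <stddef.h>

    static void *memcpy_sketch(void *dest, const void *src, size_t n)
    {
        void *d = dest;
        /* rep movsb copies (e/r)cx bytes from (e/r)si to (e/r)di */
        __asm__ volatile ("rep movsb"
            : "+D"(d), "+S"(src), "+c"(n)
            :
            : "memory");
        return dest;
    }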
before restrict was added, memmove called memcpy for forward copies
and used a byte-at-a-time loop for reverse copies. this was changed to
avoid invoking UB now that memcpy has an undefined copying order,
making memmove considerably slower.

performance is still rather bad, so I'll be adding asm soon.
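the shape of the C version in question (a sketch, not the exact
committed code):

    #include <stddef.h>

    static void *memmove_sketch(void *dest, const void *src, size_t n)
    {
        char *d = dest;
        const char *s = src;
        if (d == s) return d;
        if (d < s)
            for (; n; n--) *d++ = *s++;   /* forward copy is safe */
        else
            while (n) n--, d[n] = s[n];   /* overlap: copy backward */
        return dest;
    }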
to deal with the fact that the public headers may be used with pre-C99
compilers, __restrict is used in place of restrict, and defined
appropriately for any supported compiler. we also avoid the form
[restrict] since older versions of gcc rejected it due to a bug in the
original C99 standard, and instead use the form *restrict.
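the convention amounts to something like this near the top of the
affected headers (a sketch of the idea described, not a verbatim
quote):

    /* use the real keyword where available, otherwise make
       __restrict vanish for pre-C99 compilers */
    #if __STDC_VERSION__ >= 199901L
    #define __restrict restrict
    #elif !defined(__GNUC__)
    #define __restrict
    #endif

    /* declarations then use the *restrict form, e.g.: */
    char *strcpy(char *__restrict, const char *__restrict);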
unlike the memmove commit, this one should be fine to leave in place.
wmemmove is not performance-critical, and even if it were, it's
already copying whole 32-bit words at a time instead of bytes.
this commit introduces a performance regression in many uses of
memmove, which will need to be addressed before the next release. i'm
making it as a temporary measure so that the restrict patch can be
committed without invoking undefined behavior when memmove calls
memcpy with overlapping regions.
since this interface is rarely used, it's probably best to lean
towards keeping code size down. one-character needles will still be
found immediately by the initial wcschr call anyway.
if the buffer is too short, at least return a partial string. this is
helpful if the caller is lazy and does not check for failure. care is
taken to avoid writing anything if the buffer length is zero, and to
always null-terminate when the buffer length is non-zero.
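a sketch of the described behavior, assuming a strerror_r-style
interface (the function name and error-code choice here are
illustrative):

    #include <string.h>
    #include <errno.h>

    static int strerror_r_sketch(int err, char *buf, size_t buflen)
    {
        const char *msg = strerror(err);
        size_t l = strlen(msg);
        if (l >= buflen) {
            /* too short: write nothing if buflen is zero, otherwise
               copy a partial string and always null-terminate */
            if (buflen) {
                memcpy(buf, msg, buflen - 1);
                buf[buflen - 1] = 0;
            }
            return ERANGE;
        }
        memcpy(buf, msg, l + 1);
        return 0;
    }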
bug report and solution by Richard Pennington
bug report and solution by Richard Pennington
these are mostly untested and adapted directly from corresponding byte
string functions and similar.
programs that use this tend to horribly botch international text
support, so it's questionable whether we want to support it even in
the long term... for now, it's just a dummy that calls strcmp.
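the dummy is essentially a one-liner (sketch):

    #include <string.h>

    /* collation in the C locale degenerates to plain byte
       comparison */
    int strcoll(const char *l, const char *r)
    {
        return strcmp(l, r);
    }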
also modify wcsncpy to use the same loop logic
this could actually cause rare crashes in the case where a short
string is located at the end of a page and the following page is not
readable. in fact, this was seen with gcc compiling certain files.
the search for bytes with the high bit set was giving (potentially
dangerous) wrong results. i've tested, cleaned up, and hopefully sped
up this function now.
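one way this class of bug shows up is matching through plain (signed)
char; a correct scalar search converts both the byte and c to
unsigned char (a sketch, without the word-at-a-time machinery):

    #include <stddef.h>

    static void *memchr_sketch(const void *src, int c, size_t n)
    {
        const unsigned char *s = src;
        c = (unsigned char)c;   /* so bytes >= 0x80 can match */
        for (; n; s++, n--)
            if (*s == c) return (void *)s;
        return 0;
    }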
sadly the C language does not specify any such implicit conversion, so
this is not a matter of just fixing warnings (as gcc treats it) but
actual errors. i would like to revisit a number of these changes and
possibly revise the types used to reduce the number of casts required.
this only made the function unnecessarily slow on systems with
unaligned access, but would of course crash on systems that can't do
unaligned accesses (none of which have ports yet).