fcntl values 1024 and up are universal, arch-independent. later I'll
add some of the other linux-specific ones for notify, leases, pipe
size, etc. here too.
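For reference, a sketch of the kind of linux-specific commands meant here, using the values from the Linux UAPI headers (F_LINUX_SPECIFIC_BASE is 1024); the exact set and form added to musl's fcntl.h may differ:

    #define F_SETLEASE       1024
    #define F_GETLEASE       1025
    #define F_NOTIFY         1026
    #define F_DUPFD_CLOEXEC  1030
    #define F_SETPIPE_SZ     1031
    #define F_GETPIPE_SZ     1032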
|
F_* is in the reserved namespace so no feature test is needed
|
the "< 0" test was always false due to use of an unsigned type. this
resulted in infinite loops on 32-bit machines (adding -1U to a pointer
is the same as adding -1) and crashes on 64-bit machines (offsetting
the string pointer by 4GB-1 bytes when an illegal sequence was hit).
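A minimal sketch of the bug pattern being described, with a hypothetical decode_one() standing in for the real sequence decoder; this is not the actual musl code, just the shape of the mistake and one way to fix it:

    #include <stddef.h>

    /* hypothetical stand-in: length of the next sequence, or (size_t)-1 on
     * an illegal sequence */
    static size_t decode_one(const char *s)
    {
        return (*s & 0x80) ? (size_t)-1 : 1;
    }

    size_t scan_buggy(const char *s)
    {
        while (*s) {
            unsigned n = decode_one(s);  /* 32-bit unsigned: -1 becomes UINT_MAX */
            if (n < 0) return -1;        /* always false for an unsigned n */
            s += n;                      /* error: pointer jumps by 4GB-1 on 64-bit,
                                            or wraps back by 1 on 32-bit */
        }
        return 0;
    }

    size_t scan_fixed(const char *s)
    {
        while (*s) {
            size_t n = decode_one(s);
            if (n == (size_t)-1) return -1;  /* test the error value directly */
            s += n;
        }
        return 0;
    }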
|
this is legal since sa_* is in the reserved namespace for signal.h,
per posix. note that the sa_restorer field is not used anywhere, so
programs that are trying to use it may still break, but at least
they'll compile. if it turns out such programs actually need to be
able to set their own sa_restorer to function properly, i'll add the
necessary code to sigaction.c later.
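Purely for illustration, the member in question as it appears in the glibc/Linux headers; the layout below is a sketch to show the field's presence, not musl's actual struct sigaction definition:

    #include <signal.h>

    struct sigaction_sketch {
        void   (*sa_handler)(int);
        sigset_t sa_mask;
        int      sa_flags;
        void   (*sa_restorer)(void);  /* accepted for source compatibility; ignored */
    };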
|
TRE wants to treat + and ? after a +, ?, or * as special; ? means
ungreedy and + is reserved for future use. however, this is
non-conformant. although redundant, these characters have
well-defined (no-op) meaning for POSIX ERE, and are actually _literal_
characters (which TRE is wrongly ignoring) in POSIX BRE mode.
the simplest fix is to remove the unneeded nonstandard
functionality. as a plus, this shaves off a small amount of bloat.
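For the BRE case, a quick illustration of the conforming behavior per POSIX (under the old TRE behavior the '+' following '*' would have been treated as special rather than literal):

    #include <regex.h>
    #include <stdio.h>

    int main(void)
    {
        regex_t re;
        /* BRE: '*' duplicates the 'a', but the following '+' is an
         * ordinary (literal) character */
        if (regcomp(&re, "^a*+$", 0)) return 1;
        printf("\"aa+\": %s\n", regexec(&re, "aa+", 0, 0, 0) ? "no match" : "match");
        printf("\"aaa\": %s\n", regexec(&re, "aaa", 0, 0, 0) ? "no match" : "match");
        regfree(&re);
        return 0;
    }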
|
this increases code size slightly, but it's considerably faster,
especially for power-of-2 bases.
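Presumably this refers to the integer-scanning code; a minimal sketch of the kind of power-of-two fast path meant (overflow and sign handling omitted, helper names hypothetical), not the actual implementation:

    #include <stdint.h>

    /* hypothetical helper: value of a digit character, or 36 if not a digit */
    static int digit_val(unsigned char c)
    {
        if (c >= '0' && c <= '9') return c - '0';
        c |= 32;                           /* fold letters to lowercase */
        if (c >= 'a' && c <= 'z') return c - 'a' + 10;
        return 36;
    }

    uintmax_t scan_pow2(const char *s, unsigned base)
    {
        unsigned shift = 0;
        for (unsigned b = base; b > 1; b >>= 1) shift++;   /* log2(base) */
        uintmax_t x = 0;
        for (int d; (d = digit_val(*s)) < (int)base; s++)
            x = x << shift | d;            /* shift/or instead of multiply/add */
        return x;
    }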
|
at -Os optimization level, gcc refuses to inline these functions even
though the inlined code would be roughly the same size as the function
call, and much faster. the easy solution is to make them into macros.
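A generic illustration of the change described, with hypothetical names rather than the actual functions involved:

    struct src { const unsigned char *pos, *end; };

    /* as a function, gcc -Os may emit a call for every character read... */
    static int src_getc_fn(struct src *s)
    {
        return s->pos < s->end ? *s->pos++ : -1;
    }

    /* ...as a macro, the comparison and increment are always expanded in place */
    #define src_getc(s) ((s)->pos < (s)->end ? *(s)->pos++ : -1)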
|
whenever the base was small enough that more than one digit could
still fit after UINTMAX_MAX/36-1 was reached, only the first would be
allowed; subsequent digits would trigger spurious overflow, making it
impossible to read the largest values in low bases.
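A minimal sketch of a base-aware overflow check of the kind needed (decimal-style digits only, for brevity; not the actual code):

    #include <stdint.h>
    #include <errno.h>

    uintmax_t accumulate(const char *s, unsigned base, int *err)
    {
        uintmax_t x = 0;
        for (; (unsigned)(*s - '0') < base; s++) {
            unsigned d = *s - '0';
            /* x*base + d overflows exactly when x exceeds this per-base
             * limit, so low bases can still reach UINTMAX_MAX */
            if (x > (UINTMAX_MAX - d) / base) {
                *err = ERANGE;
                return UINTMAX_MAX;
            }
            x = x * base + d;
        }
        return x;
    }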
|
use (1-x)*(1+x) instead of (1-x*x) in asin.s
the latter can be inaccurate with upward rounding when x is close to 1
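A small demonstration of the effect; double precision is used here purely for illustration (the actual code is the x87 asin):

    #include <stdio.h>
    #include <math.h>
    #include <fenv.h>

    int main(void)
    {
        double x = nextafter(1.0, 0.0);   /* largest value below 1 */
        fesetround(FE_UPWARD);
        /* x*x rounds up, and the cancellation in 1 - x*x then loses roughly
         * half of the true value of 1 - x^2 */
        printf("1 - x*x     = %a\n", 1 - x*x);
        /* 1-x is exact here, so the factored form stays within a couple of ulps */
        printf("(1-x)*(1+x) = %a\n", (1 - x)*(1 + x));
        return 0;
    }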
|
when upscaling, even the very last digit is needed in cases where the
input is exact; no digits can be discarded. but when downscaling, any
digits less significant than the mantissa bits are destined for the
great bitbucket; the only influence they can have is their presence
(being nonzero). thus, we simply throw them away early. the result is
nearly a 4x performance improvement for processing huge values.
the particular threshold LD_B1B_DIG+3 is not chosen sharply; it's
simply a "safe" distance past the significant bits. it would be nice
to replace it with a sharp bound, but i suspect performance will be
comparable (within a few percent) anyway.
|
now that this is the first operation, it can rely on the circular
buffer contents not being wrapped when it begins. we limit the number
of digits read slightly in the initial parsing loops too so that this
code does not have to consider the case where it might cause the
circular buffer to wrap; this is perfectly fine because KMAX is chosen
as a power of two for circular-buffer purposes and is much larger than
it otherwise needs to be, anyway.
these changes should not affect performance at all.
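The power-of-two choice is what makes the circular indexing cheap; a generic sketch (constants illustrative, not the values used in the code):

    #define KMAX 128            /* any power of two */
    #define MASK (KMAX - 1)

    static unsigned ring[KMAX];

    /* i & MASK == i % KMAX whenever KMAX is a power of two, so indices wrap
     * with a mask instead of a division */
    static unsigned ring_get(unsigned i)             { return ring[i & MASK]; }
    static void     ring_set(unsigned i, unsigned v) { ring[i & MASK] = v; }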
|
upscaling by even one step too much creates 3-29 extra iterations for
the next loop. this is still suboptimal since it always goes by 2^29
rather than using a smaller upscale factor when nearing the target,
but performance on common, small-magnitude, few-digit values has
already more than doubled with this change.
more optimizations on the way...
|
for example, "1000000000" was being read as "1" due to this loop
exiting early. it's necessary to actually update z and zero the
entries so that the subsequent rounding code does not get confused;
before i did that, spurious inexact exceptions were being raised.
|
note that there's no need for a precise cutoff, because exponents this
large will always result in overflow or underflow (it's impossible to
read enough digits to compensate for the exponent magnitude; even at a
few nanoseconds per digit it would take hundreds of years).
|
the immediate benefit is a significant debloating of the float parsing
code by moving the responsibility for keeping track of the number of
characters read to a different module.
by linking shgetc with the stdio buffer logic, counting logic is
deferred to buffer refill time, keeping the calls to shgetc fast and
light.
in the future, shgetc will also be useful for integrating the new
float code with scanf, which needs to not only count the characters
consumed, but also limit the number of characters read based on field
width specifiers.
shgetc may also become a useful tool for simplifying the integer
parsing code.
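A rough sketch of the idea; all names here are hypothetical and this is not the actual shgetc interface:

    #include <stdio.h>

    struct src {
        const unsigned char *pos, *end;   /* current buffer window */
        long long cnt_at_end;             /* chars consumed once pos reaches end */
    };

    /* stub refill: the real one would read more data, bump cnt_at_end, and
     * return the next character (or a sentinel at EOF / field-width limit) */
    static int refill(struct src *s) { (void)s; return EOF; }

    /* per-character path: one compare and one pointer increment */
    #define SRC_GETC(s) ((s)->pos < (s)->end ? *(s)->pos++ : refill(s))

    /* characters consumed so far, derived from the buffer position rather
     * than a counter updated on every character */
    static long long consumed(const struct src *s)
    {
        return s->cnt_at_end - (s->end - s->pos);
    }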
|
this version is intended to be fully conformant to the ISO C, POSIX,
and IEEE standards for conversion of decimal/hex floating point
strings to float, double, and long double (ld64 or ld80 only at
present) values. in particular, all results are intended to be rounded
correctly according to the current rounding mode. further, this
implementation aims to set the floating point underflow, overflow, and
inexact flags to reflect the conversion performed.
a moderate amount of testing has been performed (by nsz and myself)
prior to integration of the code in musl, but it still may have bugs.
so far, only strto(d|ld|f) use the new code. scanf integration will be
done as a separate commit, and i will add implementations of the wide
character functions later.
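A usage example for the flag-setting behavior described; whether the flags are actually raised is of course up to the implementation:

    #include <stdio.h>
    #include <stdlib.h>
    #include <fenv.h>

    int main(void)
    {
        feclearexcept(FE_ALL_EXCEPT);
        double d = strtod("0.1", 0);          /* not exactly representable */
        printf("%.17g inexact=%d\n", d, !!fetestexcept(FE_INEXACT));

        feclearexcept(FE_ALL_EXCEPT);
        d = strtod("1e5000", 0);              /* overflows to HUGE_VAL */
        printf("%g overflow=%d\n", d, !!fetestexcept(FE_OVERFLOW));
        return 0;
    }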
|
I forgot _GNU_SOURCE also has it declared here...
|
gcc makes this mapping by default anyway, but it will be disabled by
-fno-builtin (and presumably by -std=c99 or similar). for the main
program the error will be reported by the linker, and the issue can
easily be fixed, but for dynamically loaded .so files, the error cannot be
detected until dlopen time, at which point it has become very obscure.
|
when the "r" (register) constraint is used to let gcc choose a
register, gcc will sometimes assign the same register that was used
for one of the other fixed-register operands, if it knows the values
are the same. one common case is multiple zero arguments to a syscall.
this horribly breaks the intended usage, which is swapping the GOT
pointer from ebx into the temp register and back to perform the
syscall.
presumably there is a way to fix this with advanced usage of register
constraints on the inline asm, but having bad memories about hellish
compatibility issues with different gcc versions, for the time being
i'm just going to hard-code specific registers to be used. this may
hurt the compiler's ability to optimize, but it will fix serious
miscompilation issues.
so far the only function i know of that compiled incorrectly is
getrlimit.c, and naturally the bug only applies to shared (PIC)
builds, but it may be more extensive and may have gone undetected.
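For illustration, the hard-coded-register form for a one-argument syscall on i386 looks roughly like this; a sketch of the pattern, not necessarily the exact code committed:

    static long syscall1_pic(long n, long a1)
    {
        long ret;
        /* in PIC code %ebx holds the GOT pointer, so take the argument in a
         * fixed register (%edx) and swap it into %ebx only around the trap;
         * an "r" constraint could let gcc hand back %ebx itself */
        __asm__ __volatile__ (
            "xchg %%ebx,%%edx ; int $0x80 ; xchg %%ebx,%%edx"
            : "=a"(ret)
            : "a"(n), "d"(a1)
            : "memory");
        return ret;
    }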
|
the buffer in getaddrinfo really only matters when /etc/hosts is huge,
but in that case, the huge number of syscalls resulting from a tiny
buffer would seriously impact the performance of every name lookup.
the buffer in __dns.c has also been enlarged a bit so that typical
resolv.conf files will fit fully in the buffer. there's no need to
make it so large as to dominate the syscall overhead for large files,
because resolv.conf should never be large.
|
the int part was wrong when -1 < x <= -0 (+0.0 instead of -0.0)
and the size and performance gains of the asm version were negligible
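The behavior the fix restores, in terms of the C requirement that both parts carry the sign of the argument:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double ipart, frac = modf(-0.5, &ipart);
        /* ipart must be -0.0 here (same sign as the argument), not +0.0 */
        printf("frac=%g ipart=%g signbit(ipart)=%d\n", frac, ipart, signbit(ipart));
        return 0;
    }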
|
cleaner implementation with unions and unsigned arithmetic
|
modfl(+-inf) was wrong on ld80 because the explicit msb
was not taken into account during inf vs nan check
|
previously a division was accidentally turned into integer div
(w = -i/NXT;) instead of long double div (w = -i; w /= NXT;)
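A small illustration of the difference, assuming i and NXT are integer-typed as in the original code:

    #include <stdio.h>

    int main(void)
    {
        int i = 1, NXT = 3;          /* stand-ins for the real variables */
        long double w1 = -i / NXT;   /* integer division happens first: w1 == 0.0L */
        long double w2 = -i;         /* convert to long double first... */
        w2 /= NXT;                   /* ...then divide: w2 is about -0.3333L */
        printf("%Lg %Lg\n", w1, w2);
        return 0;
    }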
|
It is probably not worth supporting gamma.
(it was already deprecated in 4.3BSD)
|
(fldl instruction was used instead of flds and fldt)
|
special care is taken to avoid any inexact computations when either arg
is zero (in which case the exact absolute value of the other arg
should be returned) and to support the special condition that
hypot(±inf,nan) yields inf.
hypotl is not yet implemented since avoiding overflow is nontrivial.
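The two special cases mentioned, as quick checks:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* one argument zero: the exact absolute value of the other is returned */
        printf("hypot(0.0, -3.0)      = %g\n", hypot(0.0, -3.0));
        /* an infinite argument dominates even a NaN */
        printf("hypot(INFINITY, NAN)  = %g\n", hypot(INFINITY, NAN));
        printf("hypot(NAN, -INFINITY) = %g\n", hypot(NAN, -INFINITY));
        return 0;
    }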
|
the error status is required to be sticky after failure of dlopen or
dlsym until cleared by dlerror. applications and especially libraries
should never rely on this since it is not thread-safe and subject to
race conditions, but glib does anyway.
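The pattern in question; the path below is just a placeholder that is expected to fail:

    #include <stdio.h>
    #include <dlfcn.h>

    int main(void)
    {
        void *h = dlopen("/nonexistent/libfoo.so", RTLD_NOW);   /* placeholder path */
        if (!h) {
            /* the failure stays pending until dlerror() reports and clears it */
            printf("first:  %s\n", dlerror());
            printf("second: %s\n", dlerror() ? "still set" : "(cleared)");
        }
        return 0;
    }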
|