based on patch by Dan Gohman, who caught this via compiler warnings.
analysis by Szabolcs Nagy determined that it's a bug, whereby errno
can be set incorrectly for values where the coercion from long double
to double causes rounding. it seems likely that floating point status
flags may be set incorrectly as a result too.
at the same time, clean up use of preprocessor concatenation involving
LDBL_MANT_DIG, which spuriously depends on it being a single unadorned
decimal integer literal, and instead use the equivalent formulation
2/LDBL_EPSILON. an equivalent change on the printf side was made in
commit bff6095d915f3e41206e47ea2a570ecb937ef926.
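
for reference, a minimal sketch of both formulations (the token-pasting
macros are a hypothetical reconstruction of the old pattern; the identity
relies on LDBL_EPSILON being 2^(1-LDBL_MANT_DIG)):

    #include <float.h>

    /* old pattern (hypothetical reconstruction): pasting tokens only
     * works when LDBL_MANT_DIG expands to a bare decimal literal like
     * 64; PASTE(0x1p, LDBL_MANT_DIG) then yields 0x1p64 */
    #define PASTE2(a,b) a##b
    #define PASTE(a,b) PASTE2(a,b)

    /* robust equivalent: LDBL_EPSILON == 2^(1-LDBL_MANT_DIG), so
     * 2/LDBL_EPSILON == 2^LDBL_MANT_DIG no matter how the macro is
     * spelled */
    static const long double two_to_mant_dig = 2/LDBL_EPSILON;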
commit 6ffdc4579ffb34f4aab69ab4c081badabc7c0a9a set lnz in the code
path for non-zero digits after a huge string of zeros, but the
assignment of dc to lnz truncates if the value of dc does not fit in
int; this is possible for some pathologically long inputs, either via
strings on 64-bit systems or via scanf-family functions.
instead, simply set lnz to match the point at which we add the
artificial trailing 1 bit to simulate nonzero digits after a huge
run of zeros.
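
a minimal sketch of the truncation hazard, with hypothetical
declarations matching the description:

    /* dc counts consumed decimal digits and can exceed INT_MAX for
     * pathologically long inputs; lnz marks the last non-zero digit */
    long long dc = 5000000000LL;
    int lnz = dc;  /* does not fit in int: conversion result is
                    * implementation-defined (commonly truncation) */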
the mid-sized integer optimization relies on lnz being set up
properly to mark the last non-zero decimal digit, but this was not
done if the non-zero digit lay outside the KMAX digits of the base
10^9 number representation.
so if the fractional part was a very long run of zeros (>2048*9 on
x86) followed by non-zero digits, the integer optimization could
kick in and discard the tiny non-zero fraction, which can mean a
wrong result in non-nearest rounding modes.
strtof, strtod and strtold were all affected.
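
a demo of the affected case, assuming a fractional part longer than
the 2048*9 digits mentioned above; under FE_UPWARD the tiny non-zero
fraction must round up to the smallest subnormal, not to 0:

    #include <fenv.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t nz = 2048*9 + 16;      /* more zeros than the x array holds */
        char *s = malloc(nz + 4);
        s[0] = '0'; s[1] = '.';
        memset(s+2, '0', nz);
        s[2+nz] = '1'; s[3+nz] = 0;   /* "0.000...0001" */
        fesetround(FE_UPWARD);
        printf("%a\n", strtof(s, 0)); /* expect 0x1p-149, not 0x0p+0 */
        free(s);
        return 0;
    }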
in certain cases excessive trailing zeros could cause incorrect
rounding from long double to double or float in decfloat.
e.g. in strtof("9444733528689243848704.000000", 0) the argument
is 0x1.000001p+73, exactly halfway between two representable floats;
this incorrectly got rounded to 0x1.000002p+73 instead of 0x1p+73,
but with fewer trailing zeros the rounding was fine.
the fix makes sure that the z index always points one past the last
non-zero digit in the base 10^9 representation; this way trailing
zeros don't affect the rounding logic.
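
the cited halfway case as a small demo; under round-to-nearest,
ties-to-even must pick the neighbor with the even mantissa:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* 0x1.000001p+73 lies exactly between the representable
         * floats 0x1p+73 and 0x1.000002p+73 */
        printf("%a\n", strtof("9444733528689243848704.000000", 0));
        return 0;
    }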
just defining the necessary constants:
LD_B1B_MAX is 2^113 - 1 in base 10^9
KMAX is 2048 so the x array can hold up to 18432 decimal digits
(the worst case is converting 2^-16495 = 5^16495 * 10^-16495 to
binary; it requires processing int(log10(5)*16495)+1 = 11530
decimal digits after discarding the leading zeros, and the conversion
requires some headroom in x, but KMAX is more than enough for that)
however, this code is not optimal on archs with IEEE binary128 long
double because the arithmetic is software-emulated (on all such
platforms, as far as i know), which makes strtod big and slow.
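
a sketch of how these constants might be rendered in code for the
binary128 case (the limb values are my own expansion of 2^113 - 1
into base 10^9; names follow the description above):

    #define KMAX 2048   /* base-10^9 digits; 2048*9 = 18432 decimal digits */

    /* 2^113 - 1 = 10384593717069655257060992658440191, split into
     * base-10^9 limbs, most significant first: */
    #define LD_B1B_MAX 10384593, 717069655, 257060992, 658440191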
this header evolved to facilitate the extremely lazy practice of
omitting explicit includes of the necessary headers in individual
stdio source files; not only was this sloppy, but it also increased
build time.
now, stdio_impl.h is only including the headers it needs for its own
use; any further headers needed by source files are included directly
where needed.
this will prevent gnulib from wrapping our strtod to handle this
useless feature.
this affects at least the case of very long inputs, but may also
affect shorter inputs that become long due to growth while upscaling.
basically, the logic for the circular buffer indices of the initial
base-10^9 digit and of the slot one past the final digit assumes,
for simplicity of the loop logic, the invariant that the two are
never equal. the upscale loop, which can increase the length of the
base-10^9 representation, attempted to preserve this invariant, but
was actually only ensuring that the end index did not loop around
past the start index, not that the two never became equal.
the main (only?) effect of this bug was that subsequent logic treated
the excessively long number as having no digits, leading to junk
results.
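
a sketch of the repaired invariant handling, with hypothetical names:
a indexes the most significant base-10^9 digit, z is one past the
least significant, x is the circular buffer and MASK its power-of-two
index mask:

    /* a carry out of the top creates a new most significant digit;
     * if prepending it would make a meet z, drop the least
     * significant digit instead, folding its nonzero-ness into the
     * neighbor so the rounding logic still sees it */
    if (carry) {
        a = a-1 & MASK;
        if (a == z) {
            z = z-1 & MASK;
            x[z-1 & MASK] |= x[z];  /* sticky: "nonzero digits below" */
        }
        x[a] = carry;
    }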
this caused certain floating point values that are exact multiples
of large powers of ten to be misread, unpredictably, depending on
prior stack contents.
also be extra careful to avoid wrapping the circular buffer early
care is taken that the setting of errno correctly reflects the
underflow condition. scanning exact denormal values does not result
in ERANGE, nor does scanning values (such as the usual string
definition of FLT_MIN) which are actually less than the smallest
normal number but which round to a normal result.
only the decimal case is handled so far; hex floats require a
separate fix to come later.
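
a small demo of the FLT_MIN case: 1.17549435e-38 is slightly below
the smallest normal float, 0x1p-126, but rounds to it, so errno must
stay 0:

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        errno = 0;
        float x = strtof("1.17549435e-38", 0);
        printf("%a errno=%d\n", x, errno); /* expect 0x1p-126 errno=0 */
        return 0;
    }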
in principle this should just be an optimization, but it happens to
also fix a nasty bug where values like 0.00000000001 were getting
caught by the early zero detection path and wrongly scanned as zero.
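
a regression demo for the value cited above:

    #include <assert.h>
    #include <stdlib.h>

    int main(void)
    {
        /* must not be scanned as zero */
        assert(strtod("0.00000000001", 0) == 1e-11);
        return 0;
    }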
bug detected by glib test suite
this was basically harmless, but could have resulted in misreading
inputs with more than a few gigabytes' worth of digits.
this code worked in strtod, but not in scanf. more evidence that i
should design a better interface for discarding multiple tail
characters than just calling unget repeatedly...
this off-by-one error was causing values with just one digit past
the decimal point to be handled by the integer case. in many cases
it would yield the correct result, but if expressions are evaluated
in excess precision, double rounding may occur.
when upscaling, even the very last digit is needed in cases where the
input is exact; no digits can be discarded. but when downscaling, any
digits less significant than the mantissa bits are destined for the
great bitbucket; the only influence they can have is their presence
(being nonzero). thus, we simply throw them away early. the result is
nearly a 4x performance improvement for processing huge values.
the particular threshold LD_B1B_DIG+3 is not chosen sharply; it's
simply a "safe" distance past the significant bits. it would be nice
to replace it with a sharp bound, but i suspect performance will be
comparable (within a few percent) anyway.
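
a sketch of the discard step, with hypothetical names (same circular
buffer conventions as elsewhere in this log): digits far below the
significant part matter only as a collective "something nonzero down
there" flag:

    while (len > LD_B1B_DIG+3) {       /* len: count of kept digits */
        z = z-1 & MASK;                /* drop least significant digit */
        if (x[z]) x[z-1 & MASK] |= 1;  /* sticky: it was nonzero */
        len--;
    }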
now that this is the first operation, it can rely on the circular
buffer contents not being wrapped when it begins. we limit the number
of digits read slightly in the initial parsing loops too so that this
code does not have to consider the case where it might cause the
circular buffer to wrap; this is perfectly fine because KMAX is chosen
as a power of two for circular-buffer purposes and is much larger than
it otherwise needs to be, anyway.
these changes should not affect performance at all.
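
why the power-of-two choice matters for the circular buffer, as a
sketch (MASK as a companion macro; names hypothetical):

    #define KMAX 2048        /* power of two */
    #define MASK (KMAX-1)

    /* wraparound becomes a single AND instead of a division: */
    static unsigned next_slot(unsigned k)
    {
        return (k+1) & MASK; /* equivalent to (k+1) % KMAX */
    }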
upscaling by even one step too much creates 3-29 extra iterations for
the next loop. this is still suboptimal since it always goes by 2^29
rather than using a smaller upscale factor when nearing the target,
but performance on common, small-magnitude, few-digit values has
already more than doubled with this change.
more optimizations on the way...
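
one upscale pass as a fragment sketch (hypothetical names, same
buffer conventions as above): the whole base-10^9 number is
multiplied by 2^29, carrying from least to most significant digit:

    uint32_t carry = 0;
    for (k = z-1 & MASK; ; k = k-1 & MASK) {
        uint64_t tmp = ((uint64_t)x[k] << 29) + carry;
        x[k] = tmp % 1000000000;
        carry = tmp / 1000000000;
        if (k == a) break;
    }
    /* a nonzero final carry becomes a new most significant digit */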
for example, "1000000000" was being read as "1" due to this loop
exiting early. it's necessary to actually update z and zero the
entries so that the subsequent rounding code does not get confused;
before i did that, spurious inexact exceptions were being raised.
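
a regression demo for the cited input:

    #include <assert.h>
    #include <stdlib.h>

    int main(void)
    {
        /* was misread as 1.0 when the loop exited early */
        assert(strtod("1000000000", 0) == 1e9);
        return 0;
    }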
note that there's no need for a precise cutoff, because exponents this
large will always result in overflow or underflow (it's impossible to
read enough digits to compensate for the exponent magnitude; even at a
few nanoseconds per digit it would take hundreds of years).
the immediate benefit is a significant debloating of the float parsing
code by moving the responsibility for keeping track of the number of
characters read to a different module.
by linking shgetc with the stdio buffer logic, counting logic is
deferred to buffer refill time, keeping the calls to shgetc fast and
light.
in the future, shgetc will also be useful for integrating the new
float code with scanf, which needs to not only count the characters
consumed, but also limit the number of characters read based on field
width specifiers.
shgetc may also become a useful tool for simplifying the integer
parsing code.
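
a sketch of what such an interface can look like (field names
hypothetical): the hot path is an inline buffer read, with counting
and field-width limiting pushed into the out-of-line refill function:

    /* fast path: consume from the current buffer window; __shgetc is
     * the slow path that refills, counts, and enforces the limit */
    #define shgetc(f) ((f)->rpos < (f)->shend ? *(f)->rpos++ : __shgetc(f))

    /* characters consumed so far, derived lazily from the buffer
     * position so the fast path never maintains a counter */
    #define shcnt(f) ((f)->shcnt + ((f)->rpos - (f)->buf))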
this version is intended to be fully conformant to the ISO C, POSIX,
and IEEE standards for conversion of decimal/hex floating point
strings to float, double, and long double (ld64 or ld80 only at
present) values. in particular, all results are intended to be rounded
correctly according to the current rounding mode. further, this
implementation aims to set the floating point underflow, overflow, and
inexact flags to reflect the conversion performed.
a moderate amount of testing has been performed (by nsz and myself)
prior to integration of the code in musl, but it still may have bugs.
so far, only strto(d|ld|f) use the new code. scanf integration will be
done as a separate commit, and i will add implementations of the wide
character functions later.
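
a quick demo of the rounding-mode-correct behavior: 0.1 has no exact
binary representation, so the two results below must differ by one
ulp:

    #include <fenv.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        fesetround(FE_DOWNWARD);
        printf("%a\n", strtod("0.1", 0));
        fesetround(FE_UPWARD);
        printf("%a\n", strtod("0.1", 0));
        return 0;
    }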