| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
| |
- add the rest of the junk traditionally in sys/param.h
- add prototypes for some nonstandard functions
- add _GNU_SOURCE to their source files so the compiler can check the prototypes
|
|
|
|
|
| |
this was basically harmless, but could have resulted in misreading
inputs with more than a few gigabytes' worth of digits...
|
| |
|
|
|
|
|
| |
this also includes a related fix for vswscanf's read function, which
was returning a spurious (uninitialized) character for empty strings.
|
| |
|
| |
|
|
|
|
|
|
| |
this code worked in strtod, but not in scanf. more evidence that i
should design a better interface for discarding multiple tail
characters than just calling unget repeatedly...
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
| |
at this point, strto* and all scanf family functions are using the new
unified integer and floating point parser/converter code.
the wide scanf is largely a wrapper for ordinary byte-based scanf;
since numbers can only contain ascii characters, only strings need to
be handled specially.
|
| |
|
| |
|
|
|
|
|
| |
assuming other code is correct, this should be a no-op, but better to
be safe...
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
vfprintf temporarily swaps in a local buffer (for the duration of the
operation) when the target stream is unbuffered; this both simplifies
the implementation of functions like dprintf (they don't need their
own buffers) and eliminates the pathologically bad performance of
writing the formatted output with one or more write syscalls per
formatting field.
in cases like dprintf where we are dealing with a virgin FILE
structure, everything worked correctly. however for long-lived files
(like stderr), it's possible that the buffer bounds were already set
for the internal zero-size buffer. on the next write, __stdio_write
would pick up and use the new buffer provided by vfprintf, but the
bound (wend) field was still pointing at the internal zero-size
buffer's end. this in turn allowed unbounded writes to the temporary
buffer.
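as a rough sketch of the shape of the problem (hypothetical field and function
names, not the actual musl code), the fix amounts to clearing the stale write
bounds when the temporary buffer is installed, so the write path recomputes
them from the new buffer:

    /* hypothetical sketch, not musl's code: installing a temporary
     * output buffer on an unbuffered stream */
    struct out {
        unsigned char *buf;          /* current buffer */
        size_t buf_size;
        unsigned char *wpos, *wend;  /* write cursor and bound */
    };

    static void install_temp_buf(struct out *f, unsigned char *tmp, size_t n)
    {
        f->buf = tmp;
        f->buf_size = n;
        /* without this, wend could still point at the old zero-size
         * buffer's end, so the bound check no longer protects tmp */
        f->wpos = f->wend = 0;
    }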
|
|
|
|
|
|
| |
the l prefix is redundant/no-op with printf, since default promotions
always promote floats to double; however, it is valid, and printf was
wrongly rejecting it.
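for reference, both conversions below are required to print the same thing,
since the variadic float argument is promoted to double either way:

    #include <stdio.h>

    int main(void)
    {
        double x = 1.5;
        printf("%f\n", x);   /* 1.500000 */
        printf("%lf\n", x);  /* same; l is a no-op on f/e/g conversions */
        return 0;
    }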
|
| |
|
|
|
|
| |
not heavily tested but these functions appear to work correctly
|
|
|
|
|
|
|
|
|
|
| |
shunget cannot unget eof status, causing wcstol to leave endptr
pointing to the wrong place when scanning, for example, L"0x". cheap
fix is to make the read function provide an infinite stream of bogus
characters rather than eof. really this is something of a design flaw
in how the shgetc system is used for strto* and wcsto*; in the long
term, I believe multi-character unget should be scrapped and replaced
with a function that can subtract from the f->shcnt counter.
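a minimal test of the case above, relying only on the standard wcstol
contract; with the bug, endptr could end up misplaced instead of pointing at
the 'x':

    #include <assert.h>
    #include <wchar.h>

    int main(void)
    {
        wchar_t *end;
        /* "0x" with no hex digits after it parses as the digit 0,
         * and endptr must point at the 'x' */
        long v = wcstol(L"0x", &end, 16);
        assert(v == 0 && end[0] == L'x' && end[1] == 0);
        return 0;
    }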
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
advantages over the old code:
- correct results for floating point (old code was bogus)
- wide/regular scanf separated so scanf does not pull in wide code
- well-defined behavior on integers that overflow dest type
- support for %[a-b] ranges with %[ (impl-defined but widely used; see the example below)
- no intermediate conversion of fmt string to wide string
- cleaner, easier to share code with strto* functions
- better standards conformance for corner cases
the old code remains in the source tree, as the wide versions of the
scanf-family functions are still using it. it will be removed when no
longer needed.
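for the %[ range case, a usage example (the meaning of '-' inside a scanset is
implementation-defined by the standard; the range reading is the widely used
behavior referred to above):

    #include <stdio.h>

    int main(void)
    {
        char word[32];
        /* %31[a-z] reads a run of lowercase letters, at most 31 chars */
        if (sscanf("hello, world", "%31[a-z]", word) == 1)
            printf("%s\n", word);   /* prints "hello" */
        return 0;
    }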
|
|
|
|
| |
this is needed for upcoming new scanf
|
| |
|
|
|
|
|
|
|
| |
I'm not sure if it's legal for wordexp to modify this field, but this
is the only easy/straightforward fix, and applications should not
care. if it's an issue, i can work out a different (but more complex)
solution later.
|
| |
|
|
|
|
|
|
|
| |
this off-by-one error was causing values with just one digit past the
decimal point to be handled by the integer case. in many cases it
would yield the correct result, but if expressions are evaluated in
excess precision, double rounding may occur.
|
|
|
|
|
|
|
| |
the "< 0" test was always false due to use of an unsigned type. this
resulted in infinite loops on 32-bit machines (adding -1U to a pointer
is the same as adding -1) and crashes on 64-bit machines (offsetting
the string pointer by 4gb-1b when an illegal sequence was hit).
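reduced to its essence (illustrative names, not the original code), the broken
pattern looks like this:

    #include <stdio.h>

    int main(void)
    {
        const char *s = "abc";
        unsigned l = -1;   /* error return from a decoder (illegal sequence) */
        if (l < 0)         /* always false: l is unsigned */
            l = 1;         /* intended recovery never runs */
        s += l;            /* 32-bit: same as s-1; 64-bit: s jumps ahead 4GB-1 */
        printf("%p\n", (void *)s);
        return 0;
    }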
|
|
|
|
|
|
|
|
|
|
|
| |
TRE wants to treat + and ? after a +, ?, or * as special; ? means
ungreedy and + is reserved for future use. however, this is
non-conformant. although redundant, these characters have a
well-defined (no-op) meaning for POSIX ERE, and are actually _literal_
characters (which TRE is wrongly ignoring) in POSIX BRE mode.
the simplest fix is to simply remove the unneeded nonstandard
functionality. as a plus, this shaves off a small amount of bloat.
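a quick conformance check of the BRE case: in a POSIX BRE, + and ? are
ordinary characters, so they must match literally rather than being consumed
as nonstandard modifiers:

    #include <regex.h>
    #include <stdio.h>

    int main(void)
    {
        regex_t re;
        /* BRE: 'a', zero or more 'b', then the literal characters "+?" */
        if (regcomp(&re, "ab*+?", 0))
            return 1;
        puts(regexec(&re, "abb+?", 0, 0, 0) ? "no match" : "match");
        regfree(&re);
        return 0;
    }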
|
| |
|
|
|
|
|
| |
this increases code size slightly, but it's considerably faster,
especially for power-of-2 bases.
|
|
|
|
|
|
| |
at -Os optimization level, gcc refuses to inline these functions even
though the inlined code would be roughly the same size as the function
call, and much faster. the easy solution is to make them into macros.
|
|
|
|
|
|
|
| |
whenever the base was small enough that more than one digit could
still fit after UINTMAX_MAX/36-1 was reached, only the first would be
allowed; subsequent digits would trigger spurious overflow, making it
impossible to read the largest values in low bases.
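for reference, the usual shape of a per-digit accumulate with an exact
overflow test (a generic sketch, not the musl code); the cutoff has to keep
admitting digits as long as the value still fits:

    #include <stdint.h>

    /* generic sketch: val = val*base + d overflows exactly when
     * val > (UINTMAX_MAX - d) / base */
    static int accumulate(uintmax_t *val, unsigned d, unsigned base)
    {
        if (*val > (UINTMAX_MAX - d) / base)
            return -1;   /* would overflow */
        *val = *val * base + d;
        return 0;
    }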
|
| |
|
| |
|
| |
|
| |
|
|\ |
|
| |
| |
| |
| |
| | |
use (1-x)*(1+x) instead of (1-x*x) in asin.s
the latter can be inaccurate with upward rounding when x is close to 1
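a small demonstration of the effect in double precision (the asin.s code is
x87, but the behavior is the same; may need -frounding-math or equivalent so
the compiler does not fold the arithmetic under round-to-nearest):

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        fesetround(FE_UPWARD);
        double x = nextafter(1.0, 0.0);   /* largest double below 1 */
        /* x*x rounds up to x itself, so about half the value is lost */
        printf("1-x*x       = %a\n", 1 - x*x);         /* 0x1p-53 */
        /* 1-x is exact, so the product stays nearly exact */
        printf("(1-x)*(1+x) = %a\n", (1 - x)*(1 + x)); /* 0x1p-52 */
        return 0;
    }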
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
when upscaling, even the very last digit is needed in cases where the
input is exact; no digits can be discarded. but when downscaling, any
digits less significant than the mantissa bits are destined for the
great bitbucket; the only influence they can have is their presence
(being nonzero). thus, we simply throw them away early. the result is
nearly a 4x performance improvement for processing huge values.
the particular threshold LD_B1B_DIG+3 is not chosen sharply; it's
simply a "safe" distance past the significant bits. it would be nice
to replace it with a sharp bound, but i suspect performance will be
comparable (within a few percent) anyway.
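the discard step, sketched in isolation (generic code, not the musl
implementation): past the safe threshold, a digit's only contribution is
whether it is nonzero, so it can be folded into a single sticky flag:

    #include <ctype.h>
    #include <stddef.h>

    /* generic sketch: keep at most cap leading digits; reduce the rest
     * to a sticky "nonzero tail exists" flag for the rounding step */
    static size_t read_digits(const char *s, unsigned char *dig, size_t cap,
                              int *sticky)
    {
        size_t n = 0;
        *sticky = 0;
        for (; isdigit((unsigned char)*s); s++) {
            if (n < cap)
                dig[n++] = *s - '0';
            else
                *sticky |= (*s != '0');
        }
        return n;
    }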
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
now that this is the first operation, it can rely on the circular
buffer contents not being wrapped when it begins. we limit the number
of digits read slightly in the initial parsing loops too so that this
code does not have to consider the case where it might cause the
circular buffer to wrap; this is perfectly fine because KMAX is chosen
as a power of two for circular-buffer purposes and is much larger than
it otherwise needs to be, anyway.
these changes should not affect performance at all.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
upscaling by even one step too much creates 3-29 extra iterations for
the next loop. this is still suboptimal since it always goes by 2^29
rather than using a smaller upscale factor when nearing the target,
but performance on common, small-magnitude, few-digit values has
already more than doubled with this change.
more optimizations on the way...
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| | |
for example, "1000000000" was being read as "1" due to this loop
exiting early. it's necessary to actually update z and zero the
entries so that the subsequent rounding code does not get confused;
before i did that, spurious inexact exceptions were being raised.
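as a regression test against the public interface (assuming the value is
reached via strtod):

    #include <assert.h>
    #include <stdlib.h>

    int main(void)
    {
        /* with the bug, the trailing zeros were lost and 1 came back */
        assert(strtod("1000000000", 0) == 1000000000.0);
        return 0;
    }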
|
| |
| |
| |
| |
| |
| |
| | |
note that there's no need for a precise cutoff, because exponents this
large will always result in overflow or underflow (it's impossible to
read enough digits to compensate for the exponent magnitude; even at a
few nanoseconds per digit it would take hundreds of years).
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
the immediate benefit is a significant debloating of the float parsing
code by moving the responsibility for keeping track of the number of
characters read to a different module.
by linking shgetc with the stdio buffer logic, counting logic is
deferred to buffer refill time, keeping the calls to shgetc fast and
light.
in the future, shgetc will also be useful for integrating the new
float code with scanf, which needs to not only count the characters
consumed, but also limit the number of characters read based on field
width specifiers.
shgetc may also become a useful tool for simplifying the integer
parsing code.
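the idea, with hypothetical names (the real shgetc is a musl-internal
interface and differs in detail): the per-character path is a compare and an
increment, and the running count is reconciled only when the buffer is
refilled:

    /* hypothetical sketch of a counting getc layered on a buffer */
    struct src {
        const unsigned char *rbase, *rpos, *rend;  /* current window */
        long long cnt;                  /* chars consumed before rbase */
        int (*refill)(struct src *);    /* resets window, updates cnt */
    };

    static int sh_getc(struct src *f)
    {
        if (f->rpos < f->rend)          /* fast path: no counting work */
            return *f->rpos++;
        return f->refill(f);            /* slow path: settle the count */
    }

    static long long sh_cnt(struct src *f)
    {
        return f->cnt + (f->rpos - f->rbase);   /* total chars consumed */
    }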
|
| | |
|
|/
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
this version is intended to be fully conformant to the ISO C, POSIX,
and IEEE standards for conversion of decimal/hex floating point
strings to float, double, and long double (ld64 or ld80 only at
present) values. in particular, all results are intended to be rounded
correctly according to the current rounding mode. further, this
implementation aims to set the floating point underflow, overflow, and
inexact flags to reflect the conversion performed.
a moderate amount of testing has been performed (by nsz and myself)
prior to integration of the code in musl, but it still may have bugs.
so far, only strto(d|ld|f) use the new code. scanf integration will be
done as a separate commit, and i will add implementations of the wide
character functions later.
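both properties can be observed from the public interface; a small check
(results assume a correctly rounding conversion as described above):

    #include <fenv.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* 0.1 is not exactly representable, so a conforming conversion
         * raises FE_INEXACT and honors the rounding direction */
        feclearexcept(FE_ALL_EXCEPT);
        fesetround(FE_DOWNWARD);
        double lo = strtod("0.1", 0);
        fesetround(FE_UPWARD);
        double hi = strtod("0.1", 0);
        printf("inexact: %d, lo < hi: %d\n",
               !!fetestexcept(FE_INEXACT), lo < hi);   /* 1, 1 */
        fesetround(FE_TONEAREST);
        return 0;
    }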
|