this could lead to spurious failures of wide printf functions
|
this also includes a related fix for vswscanf's read function, which
was returning a spurious (uninitialized) character for empty strings.
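
a toy model of the invariant the fix restores, assuming a string-backed read callback (hypothetical names; musl's real callback is byte-based and more involved):

    #include <stddef.h>

    /* a string-backed read callback must report 0 bytes once the
       source is exhausted; reporting a positive count without
       filling the buffer hands the caller a spurious
       uninitialized character */
    static size_t string_read(const char **src, char *buf, size_t len)
    {
        size_t i;
        for (i = 0; i < len && (*src)[i]; i++)
            buf[i] = (*src)[i];
        *src += i;
        return i; /* 0 for an empty string */
    }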
|
at this point, strto* and all scanf family functions are using the new
unified integer and floating point parser/converter code.
the wide scanf is largely a wrapper for ordinary byte-based scanf;
since numbers can only contain ascii characters, only strings need to
be handled specially.
|
assuming other code is correct, this should be a no-op, but better to
be safe...
|
vfprintf temporarily swaps in a local buffer (for the duration of the
operation) when the target stream is unbuffered; this both simplifies
the implementation of functions like dprintf (they don't need their
own buffers) and eliminates the pathologically bad performance of
writing the formatted output with one or more write syscalls per
formatting field.
in cases like dprintf where we are dealing with a virgin FILE
structure, everything worked correctly. however for long-lived files
(like stderr), it's possible that the buffer bounds were already set
for the internal zero-size buffer. on the next write, __stdio_write
would pick up and use the new buffer provided by vfprintf, but the
bound (wend) field was still pointing at the internal zero-size
buffer's end. this in turn allowed unbounded writes to the temporary
buffer.
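
a toy model of the fix, using the field names from the description above (buf, wend, and friends); this is a sketch, not musl's actual code:

    #include <stddef.h>

    /* toy model of the relevant FILE fields */
    struct file {
        unsigned char *buf, *wbase, *wpos, *wend;
        size_t buf_size;
    };

    /* swap a temporary buffer into an unbuffered stream; the write
       bounds must be reset along with the buffer, or wend keeps
       pointing at the old zero-size buffer's end and writes into
       the temporary buffer are effectively unbounded */
    static void swap_in(struct file *f, unsigned char *tmp, size_t n)
    {
        f->buf = tmp;
        f->buf_size = n;
        f->wpos = f->wbase = f->wend = 0;
    }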
|
the l prefix is redundant/no-op with printf, since default promotions
always promote floats to double; however, it is valid, and printf was
wrongly rejecting it.
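
for example, both of these are valid and print the same value, since the float argument is promoted to double either way:

    #include <stdio.h>

    int main(void)
    {
        float x = 1.5f;
        printf("%f\n", x);  /* promoted to double */
        printf("%lf\n", x); /* l is a valid no-op here */
        return 0;
    }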
|
advantages over the old code:
- correct results for floating point (old code was bogus)
- wide/regular scanf separated so scanf does not pull in wide code
- well-defined behavior on integers that overflow dest type
- support for %[a-b] ranges with %[ (impl-defined but widely used; see the example below)
- no intermediate conversion of fmt string to wide string
- cleaner, easier to share code with strto* functions
- better standards conformance for corner cases
the old code remains in the source tree, as the wide versions of the
scanf-family functions are still using it. it will be removed when no
longer needed.
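
usage example for the range form mentioned above:

    #include <stdio.h>

    int main(void)
    {
        char word[32];
        /* %31[a-z] matches a run of lowercase letters */
        if (sscanf("hello, world", "%31[a-z]", word) == 1)
            printf("matched: %s\n", word); /* matched: hello */
        return 0;
    }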
|
other cases with %x were probably broken too.
I would actually like to go ahead and replace this code in scanf with
calls to the new __intparse framework, but for now this calls for a
quick and unobtrusive fix without the risk of breaking other things.
|
it should be noted that only the actual underlying buffer flush and
fill operations are cancellable, not reads from or writes to the
buffer. this behavior is compatible with POSIX, which makes all
cancellation points in stdio optional, and it achieves the goal of
allowing cancellation of a thread that's "stuck" on IO (due to a
non-responsive socket/pipe peer, slow/stuck hardware, etc.) without
imposing any measurable performance cost.
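
for example, a thread blocked in fgetc on a pipe with no writer is stuck in the underlying buffer-fill read, which is exactly where the cancellation point lives:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *reader(void *arg)
    {
        fgetc(arg); /* blocks in the underlying read: cancellable */
        return 0;
    }

    int main(void)
    {
        int fds[2];
        pipe(fds); /* the read end will never see any data */
        FILE *f = fdopen(fds[0], "r");
        pthread_t t;
        pthread_create(&t, 0, reader, f);
        sleep(1); /* let the reader block */
        pthread_cancel(t);
        void *res;
        pthread_join(t, &res);
        printf("cancelled: %d\n", res == PTHREAD_CANCELED);
        return 0;
    }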
|
passing null pointer for %s is UB but lots of broken programs do it anyway
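
i.e. the following remains undefined behavior per the C standard, but now degrades gracefully; implementations that tolerate it typically print a placeholder such as "(null)":

    #include <stdio.h>

    int main(void)
    {
        char *p = 0;
        printf("%s\n", p); /* UB, but tolerated rather than crashing */
        return 0;
    }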
|
for now this is just a tiny optimization, but later if we support
cancellation from __stdio_read and __stdio_write, it will be necessary
for the recursive lock count to be zero in order for these functions
to know they are responsible for unlocking the FILE on cancellation.
|
null termination is only added when current size grows.
in update modes, null termination is not added if it does not fit
(i.e. it is not allowed to clobber data).
these rules make very little sense, but that's how it goes..
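
an illustration of the first rule, assuming this concerns fmemopen (whose POSIX semantics match the rules described):

    #define _POSIX_C_SOURCE 200809L
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[8];
        memset(buf, 'x', sizeof buf);
        FILE *f = fmemopen(buf, sizeof buf, "w");
        fputs("abc", f);
        fflush(f);           /* current size grew to 3: null added */
        printf("%s\n", buf); /* abc */
        fclose(f);
        return 0;
    }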
|
read should not be allowed past "current size".
append mode should write at "current size", not buffer size.
null termination should not be written except when "current size" grows.
|
disallow seek past end of buffer (per posix; see the example below)
fix position accounting to include data buffered for read
don't set eof flag when no data was requested
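
the first point demonstrated, again assuming fmemopen:

    #define _POSIX_C_SOURCE 200809L
    #include <stdio.h>

    int main(void)
    {
        char buf[8];
        FILE *f = fmemopen(buf, sizeof buf, "w");
        printf("%d\n", fseek(f, 16, SEEK_SET)); /* -1: past end */
        printf("%d\n", fseek(f, 4, SEEK_SET));  /*  0: in bounds */
        fclose(f);
        return 0;
    }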
|
the addition is safe and cannot overflow because both operands are
positive when considered as signed quantities.
|
the expression -off is not safe in case off is the most-negative
value. instead apply - to base which is known to be non-negative and
bounded within sanity.
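
the hazard and the rewrite in isolation (a hypothetical bounds check; the real code differs):

    /* unsafe: -off invokes undefined behavior when off is the
       most-negative value of its type */
    static int out_of_range_unsafe(long long off, long long base)
    {
        return -off > base;
    }

    /* safe and equivalent: base is known non-negative and bounded,
       so negating it can never overflow */
    static int out_of_range_safe(long long off, long long base)
    {
        return off < -base;
    }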
|
testing so far has been minimal. may need further work.
|
not heavily tested, but it seems to be correct, including the odd
behavior that seeking is in terms of wide character count. this
precludes any simple buffering, so we just make the stream unbuffered.
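
assuming this is a wide-oriented memory stream along the lines of open_wmemstream, the wide-character seek semantics look like this:

    #define _POSIX_C_SOURCE 200809L
    #include <stdio.h>
    #include <stdlib.h>
    #include <wchar.h>

    int main(void)
    {
        wchar_t *buf;
        size_t len;
        FILE *f = open_wmemstream(&buf, &len);
        fputws(L"hello world", f);
        fseek(f, 6, SEEK_SET); /* offset counts wide characters */
        fputws(L"earth", f);
        fclose(f); /* buf and len are set when the stream closes */
        wprintf(L"%ls (%zu)\n", buf, len); /* hello earth (11) */
        free(buf);
        return 0;
    }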
|
this is the first attempt, and may have bugs. only minimal testing has
been performed.
|
previously, stdio used spinlocks, which would be unacceptable if we
ever add support for thread priorities, and which yielded
pathologically bad performance if an application attempted to use
flockfile on a key file as a major/primary locking mechanism.
i had held off on making this change for fear that it would hurt
performance in the non-threaded case, but actually support for
recursive locking had already inflicted that cost. by having the
internal locking functions store a flag indicating whether they need
to perform unlocking, rather than using the actual recursive lock
counter, i was able to combine the conditionals at unlock time,
eliminating any additional cost, and also avoid a nasty corner case
where a huge number of calls to ftrylockfile could cause deadlock
later at the point of internal locking.
this commit also fixes some issues with usage of pthread_self
conflicting with __attribute__((const)) which resulted in crashes with
some compiler versions/optimizations, mainly in flockfile prior to
pthread_create.
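
a toy model of the need-to-unlock flag (hypothetical names, and it ignores the memory-ordering details a real implementation needs):

    #include <pthread.h>

    struct tfile {
        pthread_mutex_t m;
        pthread_t owner;
        int owned;
    };

    /* returns nonzero if this call took the lock, i.e. whether the
       matching unlock is this caller's job */
    static int internal_lock(struct tfile *f)
    {
        if (f->owned && pthread_equal(f->owner, pthread_self()))
            return 0; /* recursive entry: nothing to undo later */
        pthread_mutex_lock(&f->m);
        f->owner = pthread_self();
        f->owned = 1;
        return 1;
    }

    /* one conditional at unlock time, no recursion counter */
    static void internal_unlock(struct tfile *f, int we_locked)
    {
        if (we_locked) {
            f->owned = 0;
            pthread_mutex_unlock(&f->m);
        }
    }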
|
fread was calling f->read without checking that the file was in
reading mode. this could:
1. crash, if f->read was a null pointer
2. cause unwanted blocking on a terminal already at eof
3. allow reading on a write-only file
|
hopefully this resolves the rest of the issues with hideously
nonportable hacks in programs that use gnulib.
|
this is a really ugly and backwards function, but its presence will
prevent lots of broken gnulib software from trying to define its own
version of fpurge and thereby failing to build or worse.
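
a usage sketch, assuming the glibc-compatible __fpurge from <stdio_ext.h> is the interface in question:

    #include <stdio.h>
    #include <stdio_ext.h>

    int main(void)
    {
        printf("never reaches the output"); /* sits in the buffer */
        __fpurge(stdout); /* discard buffered data without writing */
        puts("only this line appears");
        return 0;
    }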
|
the observed symptom was that the code was incorrectly rounding up
1.0625 to 1.063 despite the rounding mode being round-to-nearest with
ties broken by rounding to even last place. however, the code was just
not right in many respects, and i'm surprised it worked as well as it
did. this time i tested the values that end up in the variables round,
small, and the expression round+small, and all look good.
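
the cited case as a one-line test; 1.0625 is exactly representable (1 + 1/16), so this is a true tie, and ties-to-even calls for the even digit 2 in the last place:

    #include <stdio.h>

    int main(void)
    {
        printf("%.3f\n", 1.0625); /* correct: 1.062, not 1.063 */
        return 0;
    }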
|
these should be tweaked according to testing. offhand i know 1000 is
too low and 5000 is likely to be sufficiently high. consider trying to
add futexes to file locking, too...
|
the previous fix was incorrect, as it would prevent f->close(f) from
being called if fflush(f) failed. i believe this was the original
motivation for using | rather than ||. so now let's just use a second
statement to constrain the order of function calls, and go back to
using |.
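
the pattern in miniature, with stand-ins for the stream's flush and close operations:

    static int flush_op(void) { return 0; }
    static int close_op(void) { return 0; }

    int toy_fclose(void)
    {
        /* buggy: the operands of | are unsequenced, so close_op
           could run before flush_op:
               r = flush_op() | close_op();
           fixed: a second statement forces the order, and | (not
           ||) keeps the close unconditional even if flush fails */
        int r = flush_op();
        r |= close_op();
        return r;
    }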
|
pcc turned up this bug by calling f->close(f) before fflush(f),
resulting in lost output and error on flush.
|
1. failed match of literal chars from the format string would always
return matching failure rather than input failure at eof, leading to
infinite loops in some programs.
2. unread of eof would wrongly adjust the character counts reported by
%n, yielding an off-by-one error.
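
an example of a loop the first bug could hang: at end of input, the failed match of the literal 'a' must yield EOF (input failure), not a matching-failure return of 0:

    #include <stdio.h>

    int main(void)
    {
        int x;
        /* reads records like "a1", "a2", ...; with the bug, this
           spun forever at eof because scanf kept returning 0 */
        while (scanf("a%d", &x) != EOF)
            ;
        return 0;
    }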