at the point pclose might receive and act on cancellation, it has
already invalidated the FILE passed to it. thus, per musl's
quality-of-implementation (QOI) guarantees about cancellation and
resource allocation/deallocation,
it's not a candidate for cancellation.
if it were required to be a cancellation point by posix, we would have
to switch the order of deallocation, but somehow still close the pipe
in order to trigger the child process to exit. i looked into doing
this, but the logic gets ugly, and i'm not sure the semantics are
conformant, so i'd rather just leave it alone unless there's a need to
change it.
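
for reference, the ordering issue looks roughly like this (a
simplified sketch, not the actual implementation; pclose_wait and the
separately-passed pid are illustrative):

    #include <errno.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    /* sketch: pid has already been read out of the FILE's bookkeeping */
    static int pclose_wait(FILE *f, pid_t pid)
    {
        int status;
        fclose(f);  /* invalidates f and closes our end of the pipe */
        /* from here on, cancellation must not act: a thread cancelled
         * inside the wait could neither use nor clean up f */
        while (waitpid(pid, &status, 0) < 0 && errno == EINTR);
        return status;
    }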

---

close was the only cancellation point called from popen, but it left
popen with major resource leaks if any call to close got cancelled.
the easiest, cheapest fix is just to use a non-cancellable close
function.
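
roughly, the idea (a sketch using the public syscall() wrapper; musl's
internal mechanism differs, but the effect is the same: the raw
syscall is not a cancellation point):

    #include <sys/syscall.h>
    #include <unistd.h>

    /* a close that can never act on pending cancellation */
    static int close_nocancel(int fd)
    {
        return syscall(SYS_close, fd);
    }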

---

also check for failure of dup2 and abort the child rather than
reading/writing the wrong file.
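
the child-side logic, sketched (child_setup and the fd numbering are
illustrative):

    #include <unistd.h>

    /* in the forked child: wire the pipe end to stdout, or die */
    static void child_setup(int pipefd)
    {
        if (pipefd != 1) {
            if (dup2(pipefd, 1) < 0)
                _exit(127);  /* don't run with the wrong file wired up */
            close(pipefd);
        }
    }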

---

this one could never cause any problems unless the compiler/machine
goes to extra trouble to break out-of-bounds pointer arithmetic, but
it's best to fix it anyway.

---

patch by nsz

---

large precision values could cause out-of-bounds pointer arithmetic in
computing the precision cutoff (used to avoid expensive long-precision
arithmetic when the result will be discarded). per the C standard,
this is undefined behavior. one would expect that it works anyway, and
in fact it did in most real-world cases, but it was randomly
(depending on aslr) crashing in i386 binaries running on x86_64
kernels. this is because linux puts the userspace stack near 4GB
(instead of near 3GB) when the kernel is 64-bit, leading to the
out-of-bounds pointer arithmetic overflowing past the end of address
space and giving a very low pointer value, which then compared lower
than a pointer it should have been higher than.
the new code rearranges the arithmetic so that no overflow can occur.
while this bug could crash printf with memory corruption, it's
unlikely to have security impact in real-world applications, since
triggering it requires the ability to provide an extremely large
field precision value under attacker control.
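
the shape of the fix, in miniature (names here are made up; the real
code is in the floating point formatting path):

    #include <stddef.h>

    /* does [s,end) have at least prec bytes left? */
    static int fits_bad(const char *s, const char *end, size_t prec)
    {
        return s + prec <= end;  /* UB: s+prec can point far out of range */
    }

    static int fits_good(const char *s, const char *end, size_t prec)
    {
        return prec <= (size_t)(end - s);  /* no oob pointer is formed */
    }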

---

this is mildly ugly, but less ugly than gnulib trying to poke at the
definition of the FILE structure...

---

for seekable files, posix imposes requirements on the offset of the
underlying open file description after a stream is closed. this was
correctly handled (as a side effect of the unconditional fflush call)
when streams were explicitly closed by fclose, but was not handled
correctly at program exit time, where fflush(0) was being used.
the weak symbol hackery is to pull in __stdio_exit if either of
__toread or __towrite is used, but avoid calling it twice so we don't
have to keep extra state. the new __stdio_exit is a streamlined fflush
variant that avoids performing any unnecessary operations and which
never unlocks the files or open file list, so we can be sure no other
threads write new data to a stream's buffer after it's already
flushed.
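
the pattern, roughly (a self-contained sketch: exit_hook stands in for
the real exit path, and the exact symbol wiring in the tree differs;
here __stdio_exit resolves to the local dummy unless a strong
definition is linked in):

    /* gcc-style weak alias, as used throughout musl */
    #define weak_alias(old, new) \
        extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

    static void dummy(void) {}
    weak_alias(dummy, __stdio_exit);

    void exit_hook(void)
    {
        /* a no-op unless the strong __stdio_exit was pulled in via
         * __toread/__towrite; either way it runs at most once here */
        __stdio_exit();
    }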

---

there is no need/use for a flush hook. the write function serves this
purpose already. i originally created the hook for implementing mem
streams based on a mistaken reading of posix, and later realized it
wasn't useful but never removed it until now.

---

the old behavior was to only consider a stream to be "reading" or
"writing" if it had buffered, unread/unwritten data. this reportedly
differs from the traditional behavior of these functions, which is
essentially to return true as much as possible without creating the
possibility that both __freading and __fwriting could return true.
gnulib expects __fwriting to return true as soon as a file is opened
write-only, and possibly expects other cases that depend on the
traditional behavior. and since these functions exist mostly for
gnulib (does anything else use them??), they should match the expected
behavior to avoid even more ugly hacks and workarounds...
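
with the new semantics the checks collapse to something like this
(toy model; the real flag and field names are private to the FILE
structure and are assumptions here):

    /* toy model of the relevant FILE fields */
    struct file_like {
        int flags;
        char *rend;   /* nonzero: buffered unread data */
        char *wend;   /* nonzero: stream is in writing mode */
    };
    #define F_NORD 4  /* opened such that it can never read */
    #define F_NOWR 8  /* opened such that it can never write */

    static int fwriting_like(struct file_like *f)
    {
        return (f->flags & F_NORD) || f->wend != 0;
    }

    static int freading_like(struct file_like *f)
    {
        return (f->flags & F_NOWR) || f->rend != 0;
    }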

---

signedness issue kept %ls with no precision from working at all

---

printf was not printing too many characters, but it was reading one
too many wchar_t elements from the input. this could lead to crashes
if the read ran off the end of a page, or to spurious failure if the
conversion of the extra wchar_t resulted in EILSEQ.
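
the corrected measuring loop looks something like this (sketch;
'measure' is illustrative): the terminator test is guarded by the
width test, so element ws[max] is never examined:

    #include <limits.h>
    #include <wchar.h>

    /* bytes needed for at most 'max' wide chars, stopping at L'\0' */
    static size_t measure(const wchar_t *ws, size_t max)
    {
        size_t i, l, n = 0;
        char tmp[MB_LEN_MAX];
        for (i = 0; i < max && ws[i]; i++) {  /* never reads ws[max] */
            l = wcrtomb(tmp, ws[i], 0);
            if (l == (size_t)-1) return (size_t)-1;  /* EILSEQ */
            n += l;
        }
        return n;
    }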

---

the field width limit was not being cleared before reading the
literal, causing spurious failures in scanf in cases like "%2d:"
scanning "00:".
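
a user-visible check of the fixed behavior (standard C, so it doubles
as a regression test):

    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        int n = -1;
        /* the 2-byte width applies to %d only; it must not carry
         * over and limit the match of the ':' literal */
        int r = sscanf("00:", "%2d:", &n);
        assert(r == 1 && n == 0);
        return 0;
    }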

---

for some nonsensical reason, glibc's headers use inline functions that
redirect some of the standard functions to ugly nonstandard names (and
likewise for some of their nonstandard functions).

---

unfortunately in dynamic-linked programs, these macros cause
pthread_self to be initialized, which costs a couple syscalls, and
(much worse) would necessarily fail, crash, and burn on ancient (2.4
and earlier) kernels where setting up a thread pointer does not work.
i'd like to do this in a more generic way that avoids all use of
cleanup push/pop before pthread_self has been successfully called and
avoids ugly if/else constructs like the one in this commit, but for
now, this will suffice.

---

this could lead to spurious failures of wide printf functions

---

this also includes a related fix for vswscanf's read function, which
was returning a spurious (uninitialized) character for empty strings.

---

at this point, strto* and all scanf family functions are using the new
unified integer and floating point parser/converter code.
the wide scanf is largely a wrapper for ordinary byte-based scanf;
since numbers can only contain ascii characters, only strings need to
be handled specially.

---

assuming other code is correct, this should be a no-op, but better to
be safe...

---

vfprintf temporarily swaps in a local buffer (for the duration of the
operation) when the target stream is unbuffered; this both simplifies
the implementation of functions like dprintf (they don't need their
own buffers) and eliminates the pathologically bad performance of
writing the formatted output with one or more write syscalls per
formatting field.
in cases like dprintf, where we are dealing with a virgin FILE
structure, everything worked correctly. however, for long-lived
streams (like stderr), it's possible that the buffer bounds were
already set for the internal zero-size buffer. on the next write,
__stdio_write
would pick up and use the new buffer provided by vfprintf, but the
bound (wend) field was still pointing at the internal zero-size
buffer's end. this in turn allowed unbounded writes to the temporary
buffer.
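
the swap, modeled on a toy struct (field names mirror the internal
FILE but this is only a sketch; install_temp_buf is illustrative):

    #include <stddef.h>

    struct f_like {
        unsigned char *buf, *wpos, *wbase, *wend;
        size_t buf_size;
    };

    static void install_temp_buf(struct f_like *f,
                                 unsigned char *tmp, size_t len)
    {
        f->buf = tmp;
        f->buf_size = len;
        /* the bug: wend was left pointing at the old zero-size
         * buffer, so bounds checks passed while writes to tmp went
         * unbounded; all three write pointers must be reset */
        f->wpos = f->wbase = f->wend = 0;
    }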

---

the l prefix is redundant (a no-op) with printf, since the default
argument promotions always promote float to double; however, it is
valid, and printf was wrongly rejecting it.

---

advantages over the old code:
- correct results for floating point (old code was bogus)
- wide/regular scanf separated so scanf does not pull in wide code
- well-defined behavior on integers that overflow the dest type
- support for %[a-b] ranges with %[ (implementation-defined but
  widely used; see the example below)
- no intermediate conversion of the fmt string to a wide string
- cleaner, easier to share code with the strto* functions
- better standards conformance for corner cases
the old code remains in the source tree, as the wide versions of the
scanf-family functions are still using it. it will be removed when no
longer needed.
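
for instance:

    #include <stdio.h>

    int main(void)
    {
        char word[32];
        /* ranges in %[ are implementation-defined per ISO C, but
         * widely supported; this scans a run of lowercase letters */
        if (sscanf("hello, world", "%31[a-z]", word) == 1)
            printf("%s\n", word);  /* prints "hello" */
        return 0;
    }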

---

other cases with %x were probably broken too.
i would actually like to go ahead and replace this code in scanf with
calls to the new __intparse framework, but for now this calls for a
quick and unobtrusive fix without the risk of breaking other things.

---

it should be noted that only the actual underlying buffer flush and
fill operations are cancellable, not reads from or writes to the
buffer. this behavior is compatible with POSIX, which makes all
cancellation points in stdio optional, and it achieves the goal of
allowing cancellation of a thread that's "stuck" on IO (due to a
non-responsive socket/pipe peer, slow/stuck hardware, etc.) without
imposing any measurable performance cost.
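
one way to picture the split for the read side (a sketch using the
public cancel-state api; the actual implementation uses its own
cancellable-syscall machinery instead):

    #include <pthread.h>
    #include <unistd.h>

    /* the blocking refill is the only window where cancellation may
     * act; copying bytes in and out of the buffer never is */
    static ssize_t fill_read(int fd, void *buf, size_t n)
    {
        int old;
        ssize_t r;
        pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, &old);
        r = read(fd, buf, n);  /* cancellation can act here */
        pthread_setcancelstate(old, 0);
        return r;
    }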

---

passing a null pointer for %s is undefined behavior, but lots of
broken programs do it anyway.
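
for illustration ("(null)" is the placeholder implementations that
tolerate this conventionally print; nothing is mandated by the
standard):

    #include <stdio.h>

    int main(void)
    {
        /* undefined behavior per the standard; tolerated here */
        printf("%s\n", (char *)0);  /* prints "(null)" */
        return 0;
    }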

---

for now this is just a tiny optimization, but later, if we support
cancellation from __stdio_read and __stdio_write, it will be necessary
for the recursive lock count to be zero in order for these functions
to know they are responsible for unlocking the FILE on cancellation.

---

null termination is only added when the current size grows.
in update modes, null termination is not added if it does not fit
(i.e. it is not allowed to clobber data).
these rules make very little sense, but that's how it goes...

---

read should not be allowed past "current size".
append mode should write at "current size", not buffer size.
null termination should not be written except when "current size" grows.
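
for example, append mode after the fix (standard fmemopen semantics):

    #include <stdio.h>

    int main(void)
    {
        char buf[16] = "abc";
        /* "a": position at the first null byte (offset 3, the
         * "current size"), not at the end of the 16-byte buffer */
        FILE *f = fmemopen(buf, sizeof buf, "a");
        fputc('d', f);
        fflush(f);
        printf("%s\n", buf);  /* prints "abcd" */
        fclose(f);
        return 0;
    }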

---

disallow seek past end of buffer (per posix)
fix position accounting to include data buffered for read
don't set eof flag when no data was requested

---

the addition is safe and cannot overflow because both operands are
positive when considered as signed quantities.

---

the expression -off is not safe in case off is the most-negative
value, since negating it overflows. instead apply - to base, which is
known to be non-negative and bounded within sanity.
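
in miniature (seek_ok is illustrative):

    #include <sys/types.h>

    /* would base+off land at a non-negative position?
     * writing this as -off > base is wrong: -off overflows when off
     * is the most-negative off_t. negating base is always safe. */
    static int seek_ok(off_t base, off_t off)
    {
        return off >= -base;
    }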

---

testing so far has been minimal. may need further work.

---

not heavily tested, but it seems to be correct, including the odd
behavior that seeking is in terms of wide character count. this
precludes any simple buffering, so we just make the stream unbuffered.