| Commit message | Author | Age | Files | Lines |

this one could never cause any problems unless the compiler/machine
goes to extra trouble to break oob pointer arithmetic, but it's best
to fix it anyway.
|
large precision values could cause out-of-bounds pointer arithmetic in
computing the precision cutoff (used to avoid expensive long-precision
arithmetic when the result will be discarded). per the C standard,
this is undefined behavior. one would expect that it works anyway, and
in fact it did in most real-world cases, but it was randomly
(depending on aslr) crashing in i386 binaries running on x86_64
kernels. this is because linux puts the userspace stack near 4GB
(instead of near 3GB) when the kernel is 64-bit, leading to the
out-of-bounds pointer arithmetic overflowing past the end of address
space and giving a very low pointer value, which then compared lower
than a pointer it should have been higher than.
the new code rearranges the arithmetic so that no overflow can occur.
while this bug could crash printf with memory corruption, it's
unlikely to have security impact in real-world applications since the
ability to provide an extremely large field precision value under
attacker control is required to trigger the bug.
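A minimal, self-contained sketch of the kind of rearrangement described above (the names buf, end and prec are hypothetical, not taken from the actual vfprintf source): comparing a size against the remaining space never forms an invalid pointer, whereas adding a huge precision to a pointer can wrap past the end of the address space.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        char buf[64];
        char *end = buf + sizeof buf;
        size_t prec = SIZE_MAX / 2;  /* hypothetical attacker-supplied precision */

        /* broken pattern: buf + prec is out-of-bounds pointer arithmetic and
         * may wrap to a low address that compares less than end:
         *     if (buf + prec <= end) ...
         */

        /* rearranged pattern: end - buf is always a small, valid difference,
         * so no out-of-bounds pointer is ever formed */
        if (prec <= (size_t)(end - buf))
            printf("precision fits; take the fast path\n");
        else
            printf("precision too large; clamp to the buffer size\n");
        return 0;
    }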
|
signedness issue kept %ls with no precision from working at all
|
printf was not printing too many characters, but it was reading one
too many wchar_t elements from the input. this could lead to crashes
if running off the page, or spurious failure if the conversion of the
extra wchar_t resulted in EILSEQ.
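A rough sketch of the intended bound, using a hypothetical helper (ls_bytes is an invented name, not the musl code): the loop must stop at the precision limit or at the terminating null, whichever comes first, so no extra wchar_t is ever read.

    #include <wchar.h>
    #include <limits.h>

    /* measure how many bytes at most prec wide characters of ws occupy,
     * reading past neither prec nor the terminating L'\0' */
    static size_t ls_bytes(const wchar_t *ws, size_t prec)
    {
        char tmp[MB_LEN_MAX];
        size_t total = 0;
        for (size_t i = 0; i < prec && ws[i]; i++) {
            size_t l = wcrtomb(tmp, ws[i], 0);
            if (l == (size_t)-1) return (size_t)-1;  /* invalid wide char: EILSEQ */
            total += l;
        }
        return total;
    }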
|
vfprintf temporarily swaps in a local buffer (for the duration of the
operation) when the target stream is unbuffered; this both simplifies
the implementation of functions like dprintf (they don't need their
own buffers) and eliminates the pathologically bad performance of
writing the formatted output with one or more write syscalls per
formatting field.
in cases like dprintf where we are dealing with a virgin FILE
structure, everything worked correctly. however for long-lived files
(like stderr), it's possible that the buffer bounds were already set
for the internal zero-size buffer. on the next write, __stdio_write
would pick up and use the new buffer provided by vfprintf, but the
bound (wend) field was still pointing at the internal zero-size
buffer's end. this in turn allowed unbounded writes to the temporary
buffer.
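The sketch below shows the swap-in pattern and where the bound went stale; the struct and field names are simplified stand-ins, not the real musl FILE layout.

    #include <stddef.h>

    /* simplified stand-in for the few FILE fields involved */
    struct stream {
        unsigned char *buf;    /* current output buffer */
        size_t buf_size;
        unsigned char *wpos;   /* next byte to be written */
        unsigned char *wend;   /* one past the last usable byte */
    };

    /* install a temporary buffer for the duration of one vfprintf call */
    static void swap_in(struct stream *f, unsigned char *tmp, size_t n)
    {
        f->buf = tmp;
        f->buf_size = n;
        f->wpos = tmp;
        /* the bug described above amounted to leaving this bound pointing at
         * the old zero-size buffer, so writes into tmp were never limited */
        f->wend = tmp + n;
    }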
|
the l prefix is redundant/no-op with printf, since default promotions
always promote floats to double; however, it is valid, and printf was
wrongly rejecting it.
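For example, both conversions below must behave identically, since the float argument reaches printf as a double either way:

    #include <stdio.h>

    int main(void)
    {
        float x = 2.5f;
        /* the argument is promoted to double by the default argument
         * promotions; %lf must be accepted and mean the same as %f */
        printf("%f %lf\n", x, x);
        return 0;
    }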
|
passing null pointer for %s is UB but lots of broken programs do it anyway
|
the observed symptom was that the code was incorrectly rounding up
1.0625 to 1.063 despite the rounding mode being round-to-nearest with
ties broken by rounding to even last place. however, the code was just
not right in many respects, and i'm surprised it worked as well as it
did. this time i tested the values that end up in the variables round,
small, and the expression round+small, and all look good.
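The symptom is easy to reproduce in isolation: 1.0625 is exactly representable in binary floating point, so rounding it to three decimal places is an exact halfway case, and under round-to-nearest with ties to even the last digit must stay even:

    #include <stdio.h>

    int main(void)
    {
        /* exact halfway case: should print 1.062 (even last digit),
         * not 1.063, in the default rounding mode */
        printf("%.3f\n", 1.0625);
        return 0;
    }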
|
this change makes it so most calls to fprintf(stderr, ...) will result
in a single writev syscall, as opposed to roughly 2*N syscalls (and
possibly more) where N is the number of format specifiers. in
principle we could use a much larger buffer, but it's best not to
increase the stack requirements too much. most messages are under 80
chars.
|
the biggest change in this commit is that stdio now uses readv to fill
the caller's buffer and the FILE buffer with a single syscall, and
likewise writev to flush the FILE buffer and write out the caller's
buffer in a single syscall.
making this change required fundamental architectural changes to
stdio, so i also made a number of other improvements in the process:
- the implementation no longer assumes that further io will fail
following errors, and no longer blocks io when the error flag is set
(though the latter could easily be changed back if desired)
- unbuffered mode is no longer implemented as a one-byte buffer. as a
consequence, scanf unreading has to use ungetc, so the unget buffer
has been enlarged to hold at least 2 wide characters.
- the FILE structure has been rearranged to maintain the locations of
the fields that might be used in glibc getc/putc type macros, while
shrinking the structure to save some space.
- error cases for fflush, fseek, etc. should be more correct.
- library-internal macros are used for getc_unlocked and putc_unlocked
now, eliminating some ugly code duplication. __uflow and __overflow
are no longer used anywhere but these macros. switching to read or
write mode is also separated out so the code can be better shared, e.g.
with ungetc.
- lots of other small things.
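As an illustration of the readv technique described at the top of this message (a simplified sketch with invented names, not the actual implementation): one syscall fills the caller's destination first, and any surplus lands in the stream's own buffer for later reads.

    #include <sys/uio.h>
    #include <sys/types.h>
    #include <stddef.h>

    /* simplified stand-in for the relevant FILE state */
    struct stream {
        int fd;
        unsigned char buf[4096];
        size_t buf_len;           /* bytes currently held in buf */
    };

    static ssize_t read_into(struct stream *f, unsigned char *dst, size_t len)
    {
        struct iovec iov[2] = {
            { .iov_base = dst,    .iov_len = len },
            { .iov_base = f->buf, .iov_len = sizeof f->buf },
        };
        ssize_t cnt = readv(f->fd, iov, 2);
        if (cnt < 0) return -1;
        if ((size_t)cnt <= len) {
            f->buf_len = 0;       /* everything went straight to the caller */
            return cnt;
        }
        f->buf_len = cnt - len;   /* surplus stays buffered for later reads */
        return (ssize_t)len;
    }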
|
sadly the C language does not specify any such implicit conversion, so
this is not a matter of just fixing warnings (as gcc treats it) but
actual errors. i would like to revisit a number of these changes and
possibly revise the types used to reduce the number of casts required.