aside from being invalid, the early check only optimized the error
case, and likely pessimized the common case by separating the
two branches on isascii(c) at opposite ends of the function.
these functions were written to handle clearing eof status, but failed
to account for the __toread function's handling of eof. with this
patch applied, __toread still returns EOF when the file is in eof
status, so that read operations will fail, but it also sets up valid
buffer pointers for read mode, which are set to the end of the buffer
rather than the beginning in order to make the whole buffer available
to ungetc/ungetwc.
minor changes to __uflow were needed since it's now possible to have
non-zero buffer pointers while in eof status. as made, these changes
remove a 'fast path' bypassing the function call to __toread, which
could be reintroduced with slightly different logic, but since
ordinary files have a syscall in f->read, optimizing the code path
does not seem worthwhile.
the __stdio_read function is also updated not to zero the read buffer
pointers on eof/error. while not necessary for correctness, this
change avoids the overhead of calling __toread in ungetc after
reaching eof, and it also reduces code size and increases consistency
with the fmemopen read operation which does not zero the pointers.
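for illustration, a minimal sketch of the idea, using a made-up structure
rather than musl's real FILE layout:

    #include <stdio.h>
    #include <stddef.h>

    /* illustrative stand-in for internal FILE state; not the real layout */
    struct file_sketch {
        unsigned char *buf, *rpos, *rend;
        size_t buf_size;
        int eof;
    };

    /* hypothetical __toread-like helper: reads keep failing at eof, but the
       read pointers are made valid and placed at the end of the buffer so
       the whole buffer remains available for ungetc pushback */
    static int toread_sketch(struct file_sketch *f)
    {
        f->rpos = f->rend = f->buf + f->buf_size;
        return f->eof ? EOF : 0;
    }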
the equivalent checks for newly opened stdio output streams, used to
determine buffering mode, are also fixed.
on most archs, the TCGETS ioctl command shares a value with
SNDCTL_TMR_TIMEBASE, part of the OSS sound API which was apparently
used with certain MIDI and timer devices. for file descriptors
referring to such a device, TCGETS will not fail with ENOTTY as
expected; it may produce a different error, or may succeed, and if it
succeeds it changes the mode of the device. while it's unlikely that
such devices are in use, this is in principle very harmful behavior
for an operation which is supposed to do nothing but query whether the
fd refers to a tty.
TIOCGWINSZ, used to query logical window size for a terminal, was
chosen as an alternate ioctl to perform the isatty check. it does not
share a value with any other ioctl commands, and it succeeds on any
tty device.
this change also cleans up strace output to be less ugly and
misleading.
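a minimal sketch of the resulting check (errno handling elided):

    #include <sys/ioctl.h>

    int isatty_sketch(int fd)
    {
        struct winsize ws;
        /* TIOCGWINSZ succeeds on any tty, collides with no other ioctl
           command, and cannot change the state of a non-tty device */
        return ioctl(fd, TIOCGWINSZ, &ws) == 0;
    }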
previously, aio operations were not tracked by file descriptor; each
operation was completely independent. this resulted in non-conforming
behavior for non-seekable/append-mode writes (which are required to be
ordered) and made it impossible to implement aio_cancel, which in turn
made closing file descriptors with outstanding aio operations unsafe.
the new implementation is significantly heavier (roughly twice the
size, and seems to be slightly slower) and presently aims mainly at
correctness, not performance.
most of the public interfaces have been moved into a single file,
aio.c, because there is little benefit to be had from splitting them.
whenever any aio functions are used, aio_cancel and the internal
queue lifetime management and fd-to-queue mapping code must be linked,
and these functions make up the bulk of the code size.
the close function's interaction with aio is implemented with weak
alias magic, to avoid pulling in heavy aio cancellation code in
programs that don't use aio, and the expensive cancellation path
(which includes signal blocking) is optimized out when there are no
active aio queues.
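a hedged sketch of the weak-alias pattern; the helper name and dummy body
are illustrative, and the real cancellation logic is omitted:

    #include <unistd.h>

    #define weak_alias(old, new) \
        extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

    static int dummy(int fd)
    {
        return fd;            /* nothing to cancel when aio isn't linked */
    }
    weak_alias(dummy, __aio_close);

    int close_sketch(int fd)
    {
        fd = __aio_close(fd); /* overridden by the real aio code if linked */
        return close(fd);     /* then the ordinary close path */
    }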
formally, it seems a sign is only required when the '+' modifier
appears in the format specifier, in which case either '+' or '-' must
be present in the output. but the specification is written such that
an optional negative sign is part of the output format anyway, and the
simplest approach to fixing the problem is removing the code that was
suppressing the sign.
previously, write errors neither stopped further output attempts nor
caused the function to return an error to the caller. this could
result in silent loss of output, possibly in the middle of output in
the event of a non-permanent error.
the simplest solution is temporarily clearing the error flag for the
target stream, then suppressing further output when the error flag is
set and checking/restoring it at the end of the operation to determine
the correct return value.
since the wide version of the code internally calls the narrow fprintf
to perform some of its underlying operations, initial clearing of the
error flag is suppressed when performing a narrow vfprintf on a
wide-oriented stream. this is not a problem since the behavior of
narrow operations on wide-oriented streams is undefined.
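a rough sketch of the suppression part of that pattern (the flag clearing
and restoration around the whole operation is only described above):

    #include <stdio.h>

    /* all internal output goes through one helper which becomes a no-op
       once the stream is in an error state, so a failed write cannot be
       followed by further, silently resumed output */
    static void out_sketch(FILE *f, const char *s, size_t n)
    {
        if (!ferror(f)) fwrite(s, 1, n, f);
    }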
in this case there are two conflicting rules in play: that an explicit
precision of zero with the value zero produces no output, and that the
'#' modifier for octal increases the precision sufficiently to yield a
leading zero. ISO C (7.19.6.1 paragraph 6 in C99+TC3) includes a
parenthetical remark to clarify that the precision-increasing behavior
takes precedence, but the corresponding text in POSIX off of which I
based the implementation is missing this remark.
this issue was covered in WG14 DR#151.
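a worked example of the interaction:

    #include <stdio.h>

    int main(void)
    {
        /* '#' must raise the precision enough to yield a leading zero, and
           per DR#151 this wins over the rule that the value zero with an
           explicit precision of zero produces no digits */
        printf("[%#.0o]\n", 0u);   /* prints "[0]", not "[]" */
        return 0;
    }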
commit 5345c9b884e7c4e73eb2c8bb83b8d0df20f95afb added a linked list to
track the FILE streams currently locked (via flockfile) by a thread.
due to a failure to fully link newly added members, removal from the
list could leave behind references which could later result in writes
to already-freed memory and possibly other memory corruption.
implicit stdio locking was unaffected; the list is only used in
conjunction with explicit flockfile locking.
this bug was not present in any releases; it was introduced and fixed
during the same release cycle.
patch by Timo Teräs, who discovered and tracked down the bug.
previously, fgets, fputs, fread, and fwrite completely omitted locking
and access to the FILE object when their arguments yielded a zero
length read or write operation independent of the FILE state. this
optimization was invalid; it wrongly skipped marking the stream as
byte-oriented (a C conformance bug) and exposed observably missing
synchronization (a POSIX conformance bug) where one of these functions
could wrongly complete despite another thread provably holding the
lock.
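as an example of the requirement, even this zero-length write must take the
lock and leave the stream byte-oriented:

    #include <stdio.h>
    #include <wchar.h>

    int main(void)
    {
        FILE *f = tmpfile();
        char buf[1];
        fwrite(buf, 1, 0, f);              /* zero length, but not a no-op */
        printf("%d\n", fwide(f, 0) < 0);   /* prints 1: byte-oriented now */
        fclose(f);
        return 0;
    }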
the C standard requires that "the contents of the array remain
unchanged" in this case.
this patch also changes the behavior on read errors, but in that case
"the array contents are indeterminate", so the application cannot
inspect them anyway.
this is analogous to commit fffc5cda10e0c5c910b40f7be0d4fa4e15bb3f48,
which fixed the corresponding issue for mutexes.
the robust list can't be used here because the locks do not share a
common layout with mutexes. at some point it may make sense to simply
incorporate a mutex object into the FILE structure and use it, but
that would be a much more invasive change, and it doesn't mesh well
with the current design that uses a simpler code path for internal
locking and pulls in the recursive-mutex-like code when the flockfile
API is used explicitly.
previously we detected this bug in configure and issued advice for a
workaround, but this turned out not to work. since then gcc 4.9.0 has
appeared in several distributions, and now 4.9.1 has been released
without a fix despite this being a wrong code generation bug which is
supposed to be a release-blocker, per gcc policy.
since the scope of the bug seems to affect only data objects (rather
than functions) whose definitions are overridable, and there are only
a very small number of these in musl, I am just changing them from
const to volatile for the time being. simply removing the const would
be sufficient to make gcc 4.9.1 work (the non-const case was
inadvertently fixed as part of another change in gcc), and this would
also be sufficient with 4.9.0 if we forced -O0 on the affected files
or on the whole build. however it's cleaner to just remove all the
broken compiler detection and use volatile, which will ensure that
they are never constant-folded. the quality of a non-broken compiler's
output should not be affected except for the fact that these objects
are no longer const and thus possibly add a few bytes to data/bss.
this change can be reconsidered and possibly reverted at some point in
the future when the broken gcc versions are no longer relevant.
the purpose of this logic is to avoid linking __stdio_exit unless any
stdio reads (which might require repositioning the file offset at exit
time) or writes (which might require flushing at exit time) could have
been performed.
previously, exit called two wrapper functions for __stdio_exit named
__flush_on_exit and __seek_on_exit. both of these functions actually
performed both tasks (seek and flushing) by calling the underlying
__stdio_exit. in order to avoid doing this twice, an overridable data
object __towrite_used was used to cause __seek_on_exit to act as a nop
when __towrite was linked.
now, exit only makes one call, directly to __stdio_exit. this is
satisfiable by a weak dummy definition in exit.c, but the real
definition is pulled in by either __toread.c or __towrite.c through
their referencing a symbol which is defined only in __stdio_exit.c.
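a hedged sketch of the arrangement, shown as fragments of the files
involved (the referenced symbol name is illustrative, not the literal
source):

    /* exit.c: a weak no-op definition lets exit() call __stdio_exit
       unconditionally without pulling in any stdio code */
    __attribute__((__weak__)) void __stdio_exit(void)
    {
    }

    /* __toread.c (and likewise __towrite.c): referencing a symbol defined
       only in __stdio_exit.c forces that object file, and with it the
       strong __stdio_exit, into any program that could have performed
       stdio reads or writes */
    extern void __stdio_exit_needed(void);
    void __toread_pulls_stdio_exit(void) { __stdio_exit_needed(); }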
in some cases, these functions internally call a byte-based input or
output function before calling getwc/putwc, so they cannot rely on the
latter to set the orientation.
when the orientation of the stream was already set, fwide was
incorrectly returning its argument (the requested orientation) rather
than the actual orientation of the stream.
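a small example of the required behavior:

    #include <stdio.h>
    #include <wchar.h>

    int main(void)
    {
        FILE *f = tmpfile();
        fwide(f, 1);                        /* stream becomes wide-oriented */
        /* a later request for byte orientation cannot change it, and the
           return value must report the actual (wide) orientation */
        printf("%d\n", fwide(f, -1) > 0);   /* prints 1 */
        fclose(f);
        return 0;
    }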
prior to version 1.1.0, the difference between pthread_self (the
public function) and __pthread_self (the internal macro or inline
function) was that the former would lazily initialize the thread
pointer if it was not already initialized, whereas the latter would
crash in this case. since lazy initialization is no longer supported,
use of pthread_self no longer makes sense; it simply generates larger,
slower code.
since there is no easy way to detect whether open honored or ignored
the O_CLOEXEC flag, the optimal solution to providing a fallback is
simply to make the fcntl syscall to set the close-on-exec flag
immediately after open returns.
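a minimal sketch of the fallback:

    #include <fcntl.h>

    int open_cloexec_sketch(const char *path, int flags)
    {
        int fd = open(path, flags | O_CLOEXEC);
        /* if the kernel silently ignored O_CLOEXEC, this sets the flag;
           if it was honored, the fcntl is a harmless no-op */
        if (fd >= 0) fcntl(fd, F_SETFD, FD_CLOEXEC);
        return fd;
    }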
this condition could only happen due to malloc failure.
the fdopen operation is also moved to take place after the unlink to
minimize the window during which a link to the file exists in the
directory table.
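a sketch of the resulting ordering (name generation and retry logic
omitted):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    FILE *tmpfile_sketch(const char *name)
    {
        int fd = open(name, O_RDWR|O_CREAT|O_EXCL, 0600);
        if (fd < 0) return 0;
        unlink(name);              /* shrink the window where the name exists */
        FILE *f = fdopen(fd, "rb+");
        if (!f) close(fd);         /* fdopen fails only on malloc failure */
        return f;
    }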
the old implementation preallocated a buffer in order to try to avoid
calling vsnprintf more than once. not only did this potentially lead
to memory fragmentation from trimming with realloc; it also pulled in
realloc/free, which otherwise might not be needed in a static linked
program.
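a minimal sketch of the two-pass approach that avoids the preallocated
buffer:

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdarg.h>

    int vasprintf_sketch(char **s, const char *fmt, va_list ap)
    {
        va_list ap2;
        va_copy(ap2, ap);
        int l = vsnprintf(0, 0, fmt, ap2);   /* first pass: measure only */
        va_end(ap2);
        if (l < 0 || !(*s = malloc(l+1U))) return -1;
        return vsnprintf(*s, l+1U, fmt, ap); /* second pass: real output */
    }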
CONCAT(0x1p,LDBL_MANT_DIG) is not safe outside of libc,
use 2/LDBL_EPSILON instead.
fix was proposed by Morten Welinder.
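the replacement is exact, since LDBL_EPSILON is 2 raised to
(1-LDBL_MANT_DIG):

    #include <float.h>

    /* 2/LDBL_EPSILON == 2^LDBL_MANT_DIG, the value the token-pasted
       0x1p... literal was spelling, and the division is exact */
    static const long double scale = 2/LDBL_EPSILON;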
such archs are expected to omit definitions of the SYS_* macros for
syscalls their kernels lack from arch/$ARCH/bits/syscall.h. the
preprocessor is then able to select an appropriate implementation
for affected functions. two basic strategies are used on a
case-by-case basis:
where the old syscalls correspond to deprecated library-level
functions, the deprecated functions have been converted to wrappers
for the modern function, and the modern function has fallback code
(omitted at the preprocessor level on new archs) to make use of the
old syscalls if the new syscall fails with ENOSYS. this also improves
functionality on older kernels and eliminates the incentive to program
with deprecated library-level functions for the sake of compatibility
with older kernels.
in other situations where the old syscalls correspond to library-level
functions which are not deprecated but merely lack some new features,
such as the *at functions, the old syscalls are still used on archs
which support them. this may change at some point in the future if or
when fallback code is added to the new functions to make them usable
(possibly with reduced functionality) on old kernels.
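a hedged sketch of the first strategy, using dup3/dup2 purely for
illustration (a real dup2 also has to handle the old==new case that dup3
rejects):

    #include <errno.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int dup2_sketch(int old, int new)
    {
        long r = syscall(SYS_dup3, old, new, 0);
    #ifdef SYS_dup2
        /* old kernels lack the new syscall, so fall back; on new archs
           SYS_dup2 is not defined and this block vanishes entirely */
        if (r < 0 && errno == ENOSYS)
            r = syscall(SYS_dup2, old, new);
    #endif
        return r;
    }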
these all now use the shared __randname function internally, rather
than duplicating logic for producing a random name. incorrect usage of
the access syscall (which works with real uid/gid, not effective) has
been removed, along with unnecessary heavy dependencies like snprintf.
open is handled specially because it is used from so many places, in
so many variants (2 or 3 arguments, setting errno or not, and
cancellable or not). trying to do it as a function would not only
increase bloat, but would also risk subtle breakage.
this is the first step towards supporting "new" archs where linux
lacks "old" syscalls.
the subsequent rounding code assumes the end pointer (z) accurately
reflects the end of significance in the decimal expansion, but for
certain large integers, spurious trailing zero slots were left behind
when applying the binary exponent.
issue reported by Morten Welinder; the analysis of the cause was
performed by nsz, who also proposed this change.
the code to strip trailing zeros was only looking in the last slot for
up to 9 zeros, assuming that the rounding code had already removed
fully-zero slots from the end. however, this ignored cases where the
rounding code did not run at all, which occur when the value being
printed is exactly representable in the requested precision.
the simplest solution is to move the code that strips trailing zero
slots to run unconditionally, immediately after rounding, rather than
as the last step of rounding.
in cases where rounding caused a carry, the slot into which the carry
was taking place was unconditionally treated as valid, despite the
possibility that it could be a new slot prior to the beginning of the
existing non-rounded number. in theory this could lead to unbounded
runaway carry, but in order for that to happen, the whole
uninitialized buffer would need to have been pre-filled with 32-bit
integer values greater than or equal to 999999999.
patch based on proposed fix by Morten Welinder, who also discovered
and reported the bug.
this is the first step in an overhaul aimed at greatly simplifying and
optimizing everything dealing with thread-local state.
previously, the thread pointer was initialized lazily on first access,
or at program startup if stack protector was in use, or at certain
random places where inconsistent state could be reached if it were not
initialized early. while believed to be fully correct, the logic was
fragile and non-obvious.
in the first phase of the thread pointer overhaul, support is retained
(and in some cases improved) for systems/situations where loading the
thread pointer fails, e.g. old kernels.
some notes on specific changes:
- the confusing use of libc.main_thread as an indicator that the
thread pointer is initialized is eliminated in favor of an explicit
has_thread_pointer predicate.
- sigaction no longer needs to ensure that the thread pointer is
initialized before installing a signal handler (this was needed to
prevent a situation where the signal handler caused the thread
pointer to be initialized and the subsequent sigreturn cleared it
again) but it still needs to ensure that implementation-internal
thread-related signals are not blocked.
- pthread tsd initialization for the main thread is deferred in a new
manner to minimize bloat in the static-linked __init_tp code.
- pthread_setcancelstate no longer needs special handling for the
situation before the thread pointer is initialized. it simply fails
on systems that cannot support a thread pointer, which are
non-conforming anyway.
- pthread_cleanup_push/pop now check for missing thread pointer and
nop themselves out in this case, so stdio no longer needs to avoid
the cancellable path when the thread pointer is not available.
a number of cases remain where certain interfaces may crash if the
system does not support a thread pointer. at this point, these should
be limited to pthread interfaces, and the number of such cases should
be fewer than before.
the printf floating point formatting code contains an optimization to
avoid computing digits that will be thrown away by rounding at the
specified (or default) precision. while it was correctly retaining all
places up to the last decimal place to be printed, it was not
retaining enough precision to see the next nonzero decimal place in
all cases. this could cause incorrect rounding down in round-to-even
(default) rounding mode, for example, when printing 0.5+DBL_EPSILON
with "%.0f".
in the fix, LDBL_MANT_DIG/3 is a lazy (non-sharp) upper bound on the
number of zeros between any two nonzero decimal digits.
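the cited example as a test case:

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* 0.5+DBL_EPSILON is strictly greater than 0.5, so correct rounding
           at precision 0 must print "1"; the bug rounded it down to "0" */
        printf("%.0f\n", 0.5+DBL_EPSILON);
        return 0;
    }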
empirically the overflow was an off-by-one, and it did not seem to be
overwriting meaningful data. rather than simply increasing the buffer
size by one, however, I have attempted to make the size obviously
correct in terms of bounds on the number of iterations for the loops
that fill the buffer. this still results in no more than a negligible
size increase of the buffer on the stack (6-7 32-bit slots) and is a
"safer" fix unless/until somebody wants to do the proof that a smaller
buffer would suffice.
this saves a syscall in the case where the underlying open already
took place with O_APPEND, which is common because fopen with append
modes sets O_APPEND at the time of open before passing the file
descriptor to __fdopen.
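a sketch of the check inside an fdopen-like function:

    #include <fcntl.h>

    /* only issue F_SETFL when append mode was requested but the descriptor
       does not already carry O_APPEND, e.g. because fopen set it at open time */
    static void set_append_sketch(int fd)
    {
        int flags = fcntl(fd, F_GETFL);
        if (!(flags & O_APPEND))
            fcntl(fd, F_SETFL, flags | O_APPEND);
    }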
when there is unflushed output, ftello (and ftell) compute the logical
stream position as the underlying file descriptor's offset plus an
adjustment for the amount of buffered data. however, this can give the
wrong result for append-mode streams where the unflushed writes should
adjust the logical position to be at the end of the file, as if a seek
to end-of-file takes place before the write.
the solution turns out to be a simple trick: when ftello (indirectly)
calls lseek to determine the current file offset, use SEEK_END instead
of SEEK_CUR if the stream is append-mode and there's unwritten
buffered data.
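a hedged sketch of the trick, with made-up stream fields rather than the
real FILE layout:

    #include <sys/types.h>
    #include <unistd.h>

    struct stream_sketch {
        int fd, append;
        unsigned char *wbase, *wpos;   /* pending buffered output */
        unsigned char *rpos, *rend;    /* buffered but unread input */
    };

    off_t ftello_sketch(struct stream_sketch *f)
    {
        int whence = (f->append && f->wpos != f->wbase) ? SEEK_END : SEEK_CUR;
        off_t pos = lseek(f->fd, 0, whence);
        if (pos < 0) return pos;
        /* adjust for data still buffered in userspace */
        return pos + (f->wpos - f->wbase) - (f->rend - f->rpos);
    }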
the ISO C rules regarding switching between reading and writing for a
stream opened in an update mode, along with the POSIX rules regarding
switching "active handles", conveniently leave undefined the
hypothetical usage cases where this fix might lead to observably
incorrect offsets.
the bug being fixed was discovered via the test case for glibc issue
this glibc abi compatibility function was missed when the scanf
aliases were added.
add missing va_end and remove some unnecessary code.
the wide variant was missed in the previous commit.
invalid format strings invoke undefined behavior, so this is not a
conformance issue, but it's nicer for scanf to report the error safely
instead of calling free on a potentially-uninitialized pointer or a
pointer to memory belonging to the caller.
check in configure to be polite (failing early if we're going to fail)
and in vfprintf.c since that is the point at which a mismatching type
would be extremely dangerous.
for conversion specifiers, alloc is always set when the specifier is
parsed. however, if scanf stops due to mismatching literal text,
either an uninitialized (if no conversions have been performed yet) or
stale (from the previous conversion) value of the flag will be used,
possibly causing an invalid pointer to be passed to free when the
function returns.
this seems to have been a regression from the refactoring which added
the 'm' modifier.
this commit only covers the byte-based scanf-family functions. the
wide functions still lack support for the 'm' modifier.
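a usage example of the new modifier:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char *word;
        /* with 'm', scanf allocates the result as if by malloc;
           the caller owns the buffer and must free it */
        if (scanf("%ms", &word) == 1) {
            printf("read: %s\n", word);
            free(word);
        }
        return 0;
    }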
this brings the wide version of the code into alignment with the
byte-based version, in preparation for adding support for the m
(malloc) modifier.
the concept here is that %s and %c are essentially special cases of
%[, each needing only a little extra handling.
aside from simplifying the code and reducing the number of complex
code-paths that would need changing to make optimizations later, the
main purpose of this change is to simplify addition of the 'm'
modifier which causes scanf to allocate storage for the string being
read.
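an illustration of the conceptual equivalence:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char a[32], b[32];
        sscanf("  hello world", "%31s", a);
        /* %s is, in effect, a scanset excluding whitespace, plus the usual
           skipping of leading whitespace (the space in the format below) */
        sscanf("  hello world", " %31[^ \t\n\v\f\r]", b);
        printf("%d\n", strcmp(a, b) == 0);   /* prints 1 */
        return 0;
    }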