the new version is largely the work of Solar Designer, with minor
changes for integration with musl. compared to the old code, text size
is reduced by about 7k, stack space usage by about 70k, and
performance is greatly improved by avoiding expensive calculation of
constant tables on each run.
this version also adds support for extended des-based password hashes,
which allow for unlimited key (password) length and configurable
iteration counts.
i've also published the interface for crypt_r in a new crypt.h header.
especially since this is not a standard interface, i did not feel
compelled to match the glibc abi for the crypt_data structure. the
glibc structure is way too big to allocate on the stack; in fact it's
so big that the first usage may cause the main thread to exceed its
pre-committed stack size of 128k and thus could cause the program to
crash even on systems with overcommit disabled. the only legitimate
use of crypt_data for crypt_r is to store the hash string to return,
so i've reserved 256 bytes, which should be more than sufficient
(the longest known password hashes are ~60 characters, and anything
much longer would likely exceed some implementations' passwd file
field size limits).
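for reference, a rough sketch of what the new header amounts to under the description above -- the 256-byte reservation is from the text; the field name and exact layout here are illustrative, not necessarily the real crypt.h:

struct crypt_data {
        char __buf[256];        /* holds the returned hash string (~60 chars today) */
};

char *crypt(const char *key, const char *salt);
char *crypt_r(const char *key, const char *salt, struct crypt_data *data);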
|
based on a patch submitted by Kristian L. <email@thexception.net>
|
on old kernels, there's no way to detect errors; we must assume
negative syscall return values are pgrp ids. but if the F_GETOWN_EX
fcntl works, we can get a reliable answer.
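roughly, the approach looks like this (a sketch, not the actual fcntl code; the helper name is made up):

#define _GNU_SOURCE
#include <fcntl.h>
#include <errno.h>

int getown_reliable(int fd)
{
        struct f_owner_ex ex;
        if (!fcntl(fd, F_GETOWN_EX, &ex))
                return ex.type == F_OWNER_PGRP ? -ex.pid : ex.pid;
        if (errno != EINVAL) return -1;
        /* old kernel without F_GETOWN_EX: fall back and assume any
         * negative return value is a pgrp id rather than an error */
        return fcntl(fd, F_GETOWN);
}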
|
The long double adjustment was wrong:
The usual check is
(mant_bits & 0x7ff) == 0x400
before doing a mant_bits++ or mant_bits-- adjustment, since this is
the only case where rounding an inexact ld80 into a double can go
wrong (and only in nearest rounding mode). After such a check the ++
and -- are ok (the mantissa will end in 0x401 or 0x3ff).
fma is a bit different: we need to add 3 numbers with correct
rounding, hi_xy + lo_xy + z, so we should survive two roundings at
different places without precision loss. The adjustment in fma only
checks for zero low bits,
(mant_bits & 0x3ff) == 0,
so that the adjusted value is correct when rounded to double or
*less* precision. (this is an important piece in the fma puzzle)
Unfortunately in this case the -- is not a correct adjustment,
because mant_bits might underflow, so further checks are needed;
this was the source of the bug.
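to make the two checks concrete (illustrative only, written with explicit parentheses; mant_bits stands for the 64-bit ld80 mantissa):

#include <stdint.h>

/* the usual ld80 -> double fix: only this pattern can double-round in
 * nearest mode, and after checking it the ++/-- adjustment is safe */
static int needs_adjust_usual(uint64_t mant_bits)
{
        return (mant_bits & 0x7ff) == 0x400;
}

/* the fma variant: the adjusted value must also be correct when rounded
 * to double or less precision, so only the low bits are checked; when
 * this holds, a later mant_bits-- can underflow, which is exactly the
 * case the extra checks have to handle */
static int needs_adjust_fma(uint64_t mant_bits)
{
        return (mant_bits & 0x3ff) == 0;
}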
|
unicode char data has both "W" (wide) and "F" (fullwidth) width
classes, and the old table only included the "W" ones. this omitted
U+3000 (ideographic space), all the fullwidth ascii forms, etc.
|
this is silly, but it makes apps that read binary junk and interpret
it as ld80 "safer", and it gets gnulib to stop replacing printf...
|
it should return the error code directly rather than returning 0/-1
and setting errno.
|
at the point where pclose might receive and act on cancellation, it
has already invalidated the FILE passed to it. thus, per musl's
quality-of-implementation (QOI) guarantees about cancellation and
resource allocation/deallocation, it's not a candidate for
cancellation.
if it were required to be a cancellation point by posix, we would have
to switch the order of deallocation, but somehow still close the pipe
in order to trigger the child process to exit. i looked into doing
this, but the logic gets ugly, and i'm not sure the semantics are
conformant, so i'd rather just leave it alone unless there's a need to
change it.
|
close was the only cancellation point called from popen, but it left
popen with major resource leaks if any call to close got cancelled.
the easiest, cheapest fix is just to use a non-cancellable close
function.
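the shape of the fix, sketched outside of libc (the helper name is made up; internally the same effect comes from issuing the close syscall without the cancellation wrapper):

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

/* a close that can never act on cancellation: go straight to the
 * kernel instead of through the cancellable close() wrapper */
static int close_nocancel(int fd)
{
        return syscall(SYS_close, fd);
}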
|
also check for failure of dup2 and abort the child rather than
reading/writing the wrong file.
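the child-side check amounts to something like this (illustrative, not the actual popen source):

#include <unistd.h>

/* wire the pipe end onto the target fd (0 or 1) in the forked child;
 * if dup2 fails, abort the child rather than letting it run with the
 * wrong file open */
static void child_plumbing(int pipefd, int target)
{
        if (pipefd != target && dup2(pipefd, target) < 0)
                _exit(127);
}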
|
posix has resolved to add this usage; for now, we just avoid writing
anything to the new locale object since it's not used anyway.
|
if the buffer is too short, at least return a partial string. this is
helpful if the caller is lazy and does not check for failure. care is
taken to avoid writing anything if the buffer length is zero, and to
always null-terminate when the buffer length is non-zero.
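the pattern described, as a generic sketch (names are illustrative; the real function applies this to its own buffer and source string):

#include <string.h>
#include <errno.h>

static int copy_partial(char *buf, size_t len, const char *src)
{
        size_t n = strlen(src);
        if (n < len) {
                memcpy(buf, src, n+1);
                return 0;
        }
        if (len) {                      /* write nothing at all when len is zero */
                memcpy(buf, src, len-1);        /* otherwise return a partial string */
                buf[len-1] = 0;                 /* and always null-terminate */
        }
        return ERANGE;                  /* still report the truncation */
}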
|
this one could never cause any problems unless the compiler/machine
goes to extra trouble to break oob pointer arithmetic, but it's best
to fix it anyway.
|
patch by nsz
|
dynamic allocation of the structure is not valid; it can crash an
application if malloc fails. since localeconv is not specified to
have failure conditions, the object needs to have static storage
duration. whether all the values are right still needs review.
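a minimal sketch of the static-storage approach (only a few fields are spelled out, with their C-locale values; the real object may differ):

#include <locale.h>

static struct lconv posix_lconv = {
        .decimal_point = ".",
        .thousands_sep = "",
        .grouping = "",
        /* the remaining string fields are "" and the char fields
         * CHAR_MAX in the C locale; they are left out of this sketch */
};

struct lconv *localeconv(void)
{
        return &posix_lconv;    /* static storage: cannot fail, nothing to free */
}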
|
this was actually dangerously wrong, but presumably nobody uses this
broken function anymore anyway..
|
if we eventually have build options, it might be nice to add an
option to dummy this out again, in case anybody needs a system-wide
way to disable the disk/ssd-thrashing that some daemons cause when
logging...
|
large precision values could cause out-of-bounds pointer arithmetic in
computing the precision cutoff (used to avoid expensive long-precision
arithmetic when the result will be discarded). per the C standard,
this is undefined behavior. one would expect that it works anyway, and
in fact it did in most real-world cases, but it was randomly
(depending on aslr) crashing in i386 binaries running on x86_64
kernels. this is because linux puts the userspace stack near 4GB
(instead of near 3GB) when the kernel is 64-bit, leading to the
out-of-bounds pointer arithmetic overflowing past the end of address
space and giving a very low pointer value, which then compared lower
than a pointer it should have been higher than.
the new code rearranges the arithmetic so that no overflow can occur.
while this bug could crash printf with memory corruption, it's
unlikely to have security impact in real-world applications, since
triggering it requires the ability to supply an extremely large field
precision value under attacker control.
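the kind of rearrangement involved, in isolation (illustrative, not the actual printf code):

#include <stddef.h>

/* undefined behavior: pos+prec may point far outside the buffer, and
 * on i386-under-x86_64 it can wrap past the top of the address space
 * and compare lower than end */
static int cutoff_unsafe(char *pos, char *end, size_t prec)
{
        return pos + prec < end;
}

/* rearranged: only pointers inside the buffer are formed, and the
 * subtraction (end >= pos here) cannot overflow */
static int cutoff_safe(char *pos, char *end, size_t prec)
{
        return (size_t)(end - pos) > prec;
}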
|
request/patch by william haddonthethird, slightly modified to add the
_GNU_SOURCE feature test macro so that the compiler can verify the
prototype matches.
|
this is mildly ugly, but less ugly than gnulib trying to poke at the
definition of the FILE structure...
|
for seekable files, posix imposes requirements on the offset of the
underlying open file description after a stream is closed. this was
handled correctly (as a side effect of the unconditional fflush call)
when streams were explicitly closed by fclose, but not at program
exit time, where fflush(0) was being used.
the weak symbol hackery is to pull in __stdio_exit if either of
__toread or __towrite is used, but avoid calling it twice so we don't
have to keep extra state. the new __stdio_exit is a streamlined fflush
variant that avoids performing any unnecessary operations and which
never unlocks the files or open file list, so we can be sure no other
threads write new data to a stream's buffer after it's already
flushed.
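the weak-symbol hookup, roughly (a sketch using musl's weak_alias idiom; the bodies and surrounding code are simplified, not the exact source):

#include <stdlib.h>

#define weak_alias(old, new) \
        extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

static void dummy(void) {}
weak_alias(dummy, __stdio_exit);

_Noreturn void exit(int code)
{
        /* ... run atexit handlers and dtors ... */
        __stdio_exit();         /* the real flushing version gets linked in
                                   only if __toread or __towrite was used */
        _Exit(code);
}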
|
there is no need/use for a flush hook. the write function serves this
purpose already. i originally created the hook for implementing mem
streams based on a mistaken reading of posix, and later realized it
wasn't useful but never removed it until now.
|
apparently this was never tested before.
|
the old behavior was to only consider a stream to be "reading" or
"writing" if it had buffered, unread/unwritten data. this reportedly
differs from the traditional behavior of these functions, which is
essentially to return true as much as possible without creating the
possibility that both __freading and __fwriting could return true.
gnulib expects __fwriting to return true as soon as a file is opened
write-only, and possibly expects other cases that depend on the
traditional behavior. and since these functions exist mostly for
gnulib (does anything else use them??), they should match the expected
behavior to avoid even more ugly hacks and workarounds...
|
it probably does not matter for /dev/null, but this should be done
consistently anyway.
|
this is required in case dtors use stdio.
also remove the old comments; one was cruft from when the code used
function pointers and conditional calls, and has little motivation
now that we're using weak symbols. the other was just complaining
about having to support dtors at all, even though the way it's done
here makes the cost essentially zero when they're unused.
|
these are not exposed publicly in any header, but the few programs
that use them (modutils/kmod, etc.) are declaring the functions
themselves rather than making the syscalls directly, and it doesn't
really hurt to have them (same as the capset junk).
|
based on a patch by Emil Renner Berthing, with minor changes to
dirent.h for LFS64 and the organization of declarations.
this code should work unmodified once a real strverscmp is added, but
I've been hesitant to add it because the GNU strverscmp behavior is
harmful in a lot of cases (for instance if you have numeric filenames
in hex). at some point I plan on trying to design a variant of the
algorithm that behaves better on a mix of filename styles.
|
these were left in glibc for binary compatibility after the public
part of the interface was removed, and libcap kept using them (with
its own copy of the header files) rather than just making the syscalls
directly. might as well add them since they're so small...
|
i originally omitted these (optional, per POSIX) interfaces because i
considered them backwards implementation details. however, someone
later brought to my attention a fairly legitimate use case: allocating
thread stacks in memory that's set up for sharing and/or fast transfer
between CPU and GPU so that the thread can move data to a GPU directly
from automatic-storage buffers without having to go through additional
buffer copies.
perhaps there are other situations in which these interfaces are
useful too.
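roughly what such usage looks like, assuming the interfaces in question are pthread_attr_setstack and friends (illustrative; in the GPU scenario the mmap would be replaced by whatever allocates the shared/pinned memory):

#define _GNU_SOURCE
#include <pthread.h>
#include <sys/mman.h>

static void *worker(void *arg) { return arg; }

int spawn_on_custom_stack(pthread_t *td, size_t size)
{
        /* size should be at least PTHREAD_STACK_MIN and page-aligned */
        void *stk = mmap(0, size, PROT_READ|PROT_WRITE,
                MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
        if (stk == MAP_FAILED) return -1;
        pthread_attr_t a;
        pthread_attr_init(&a);
        pthread_attr_setstack(&a, stk, size);   /* caller-provided stack */
        return pthread_create(td, &a, worker, 0);
}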
|
signedness issue kept %ls with no precision from working at all
|
printf was not printing too many characters, but it was reading one
too many wchar_t elements from the input. this could lead to crashes
if running off the page, or spurious failure if the conversion of the
extra wchar_t resulted in EILSEQ.
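a concrete instance of the requirement: with a precision of 2 and single-byte conversions, only w[0] and w[1] may be read -- the array need not contain a terminator at all.

#include <stdio.h>
#include <wchar.h>

int main(void)
{
        wchar_t w[2] = { L'h', L'i' };  /* deliberately not null-terminated */
        printf("%.2ls\n", w);           /* must print "hi" without reading w[2] */
        return 0;
}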
|
the field width limit was not being cleared before reading the
literal, causing spurious failures in scanf in cases like "%2d:"
scanning "00:".
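the failing case from above, as a self-contained test (both the width-limited conversion and the literal match should succeed):

#include <stdio.h>
#include <assert.h>

int main(void)
{
        int n = -1;
        /* "%2d" consumes "00", then the ":" literal must still match */
        int r = sscanf("00:", "%2d:", &n);
        assert(r == 1 && n == 0);
        return 0;
}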
|
the error will propagate up and be printed to the user at program
start time; at runtime, dlopen will just fail and leave a message for
dlerror.
previously, if mprotect failed, subsequent attempts to perform
relocations would crash the program. this was generating a growing
number of false bug reports on grsec systems (where rwx permission is
not possible) from users wrongly attempting to use non-PIC code in
shared libraries. supporting that usage is in theory possible, but
the x86_64 toolchain does not even support textrels, and the cost of
keeping around the information needed to handle textrels without rwx
permissions is disproportionate to the benefit (which is essentially
just supporting broken library setups on grsec machines).
also, i unified the error-out code in map_library now that there are 3
places from which munmap might have to be called.
|
Per POSIX, "The abort() function shall cause abnormal process
termination to occur, unless the signal SIGABRT is being caught and
the signal handler does not return."
If SIGABRT is blocked or if a signal handler is installed and does
return, abort is still required to cause abnormal program termination.
We cannot use a_crash() to do this, since a SIGILL handler could also
be installed (and might even longjmp out of the abort, not expecting
to be invoked from within abort), nor can we rely on resetting the
signal handler and re-raising the signal (this has race conditions in
multi-threaded programs). On the other hand, SIGKILL is a perfectly
safe, unblockable way to obtain abnormal program termination, and it
requires no ugly loop-and-retry logic.
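a sketch of the resulting behavior (not the exact source):

#include <signal.h>

_Noreturn void abort(void)
{
        raise(SIGABRT);         /* give a handler or core dump its chance first */
        raise(SIGKILL);         /* cannot be caught or blocked: guarantees
                                   abnormal termination, no retry loop needed */
        for (;;);               /* unreachable; satisfies the noreturn contract */
}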
|
for some nonsensical reason, glibc's headers use inline functions that
redirect some of the standard functions to ugly nonstandard names (and
likewise for some of their nonstandard functions).
|