this reflects that it is no longer intended for consumption outside of
the malloc implementation.
|
this eliminates consumers of malloc_impl.h outside of the malloc
implementation.
|
clock_adjtime always returns the current clock setting in struct
timex, so it's always possible that the time64 version is needed.
|
the 64-bit time code path used the wrong (time32) syscall. fortunately
this code path is not yet taken unless attempting to set a post-Y2038
time.
|
this has been a longstanding issue reported many times over the years,
with it becoming increasingly clear that it could be hit in practice.
under concurrent malloc and free from multiple threads, it's possible
to hit usage patterns where unbounded amounts of new memory are
obtained via brk/mmap despite the total nominal usage being small and
bounded.

the underlying cause is that, as a fundamental consequence of keeping
locking as fine-grained as possible, the state where free has unbinned
an already-free chunk to merge it with a newly-freed one, but has not
yet re-binned the combined chunk, is exposed to other threads. this is
bad even with small chunks, and leads to suboptimal use of memory, but
where it really blows up is where the already-freed chunk in question
is the large free region "at the top of the heap". in this situation,
other threads momentarily see a state of having almost no free memory,
and conclude that they need to obtain more.

as far as I can tell there is no fix for this that does not harm
performance. the fix made here forces all split/merge of free chunks
to take place under a single lock, which also takes the place of the
old free_lock, being held at least momentarily at the time of free to
determine whether there are neighboring free chunks that need merging.

as a consequence, the pretrim, alloc_fwd, and alloc_rev operations no
longer make sense and are deleted. simplified merging now takes place
inline in free (__bin_chunk) and realloc.

as commented in the source, holding the split_merge_lock precludes any
chunk transition from in-use to free state. for the most part, it also
precludes change to chunk header sizes. however, __memalign may still
modify the sizes of an in-use chunk to split it into two in-use
chunks. arguably this should require holding the split_merge_lock, but
that would necessitate refactoring to expose it externally, which is a
mess. and it turns out not to be necessary, at least assuming the
existing sloppy memory model malloc has been using, because if free
(__bin_chunk) or realloc sees any unsynchronized change to the size,
it will also see the in-use bit being set, and thereby can't do
anything with the neighboring chunk that changed size.
|
the design used here relies on the barrier provided by the first lock
operation after the process returns to single-threaded state to
synchronize with actions by the last thread that exited. by storing
the intent to change modes in the same object used to detect whether
locking is needed, it's possible to avoid an extra (possibly costly)
memory load after the lock is taken.
|
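as an illustrative sketch of that design (the flag name, the -1 sentinel,
and the trivial spinlock are assumptions for the example, not the actual
libc internals): the last exiting thread stores the "switch back" intent
(-1) into the same flag the lock fast path already reads, so the next
lock acquisition both provides the barrier and commits the switch.

    #include <stdatomic.h>

    static int need_locks;     /* 0: skip locking, 1: lock, -1: lock once more, then stop */
    static atomic_int lk;

    static void lock(void)
    {
        int nl = need_locks;               /* single load, done before taking the lock */
        if (!nl) return;                   /* single-threaded: skip entirely */
        while (atomic_exchange(&lk, 1));   /* acquiring the lock is the barrier that
                                              synchronizes with the last-exiting thread */
        if (nl < 0) need_locks = 0;        /* commit the return to lock-free mode */
    }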
|
these are all flags that can be single-byte values.
|
after all but the last thread exits, the next thread to observe
libc.threads_minus_1==0 and conclude that it can skip locking fails to
synchronize with any changes to memory that were made by the
last-exiting thread. this can produce data races.

on some archs, at least x86, memory synchronization is unlikely to be
a problem; however, with the inline locks in malloc, skipping the lock
also eliminated the compiler barrier, and caused code that needed to
re-check chunk in-use bits after obtaining the lock to reuse a stale
value, possibly from before the process became single-threaded. this
in turn produced corruption of the heap state.

some uses of libc.threads_minus_1 remain, especially for allocation of
new TLS in the dynamic linker; otherwise, it could be removed
entirely. it's made non-volatile to reflect that the remaining
accesses are only made under lock on the thread list.

instead of libc.threads_minus_1, libc.threaded is now used for
skipping locks. the difference is that libc.threaded is permanently
true once an additional thread has been created. this will produce
some performance regression in processes that are mostly
single-threaded but occasionally creating threads. in the future it
may be possible to bring back the full lock-skipping, but more care
needs to be taken to produce a safe design.
|
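a minimal sketch of the condition change described above (the struct,
the field names taken from the message, and the trivial spinlock are
stand-ins, not the real implementation):

    #include <stdatomic.h>

    static struct { int threaded, threads_minus_1; } libc;
    static atomic_int lk;

    static void lock(void)
    {
        /* old: if (!libc.threads_minus_1) return;
           unsafe, since the count can drop back to zero after threads
           exit, skipping both the atomic op and the compiler barrier */
        if (!libc.threaded) return;        /* sticky once a second thread was ever created */
        while (atomic_exchange(&lk, 1));
    }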
|
since the backend for LOCK() skips locking if single-threaded, it's
unsafe to make the process appear single-threaded before the last use
of lock.

this fixes potential unsynchronized access to a linked list via
__dl_thread_cleanup.
|
presently all archs define SIGSTKFLT but this is not correct. change
strsignal as a prerequisite for fixing that.
|
the internal __res_msend returns 0 on timeout without having obtained
any conclusive answer, but in this case has not filled in meaningful
anslen. res_send wrongly treated that as success, but returned a zero
answer length. any reasonable caller would eventually end up treating
that as an error when attempting to parse/validate it, but it should
just be reported as an error.

alternatively we could return the last-received inconclusive answer
(typically servfail), but doing so would require internal changes in
__res_msend. this may be considered later.
|
the old logic here likely dates back, at least in inspiration, to
before it was recognized that transient errors must not be allowed to
reflect the contents of successful results and must be reported to the
application.

here, the dns backend for getaddrinfo, when performing a paired query
for v4 and v6 addresses, accepted results for one address family even
if the other timed out. (the __res_msend backend does not propagate
error rcodes back to the caller, but continues to retry until timeout,
so other error conditions were not actually possible.)

this patch moves the checks to take place before answer parsing, and
performs them for each answer rather than only the answer to the first
query. if nxdomain is seen it's assumed to apply to both queries since
that's how dns semantics work.
|
the AD (authenticated data) bit in outgoing dns queries is defined by
rfc3655 to request that the nameserver report (via the same bit in the
response) whether the result is authenticated by DNSSEC. while all
results returned by a DNSSEC conforming nameserver will be either
authenticated or cryptographically proven to lack DNSSEC protection,
for some applications it's necessary to be able to distinguish these
two cases. in particular, conforming and compatible handling of DANE
(TLSA) records requires enforcing them only in signed zones.

when the AD bit was first defined for queries, there were reports of
compatibility problems with broken firewalls and nameservers dropping
queries with it set. these problems are probably a thing of the past,
and broken nameservers are already unsupported. however, since there
is no use for the AD bit with the netdb.h interfaces, explicitly clear
it in the queries they make. this ensures that, even with broken
setups, the standard functions will work, and at most the res_*
functions break.
|
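for reference, a sketch of what clearing the bit amounts to (the helper
is hypothetical; the bit position follows the DNS header layout, where
AD is bit 0x20 of the flags byte at offset 3):

    /* q points at a fully built DNS query packet (12-byte header first) */
    static void clear_ad(unsigned char *q)
    {
        q[3] &= ~0x20;   /* clear AD; leave the other flag bits untouched */
    }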
|
unsigned char promotes to int, which can overflow when shifted left by
24 bits or more. this has been reported multiple times but then
forgotten. it's expected to be benign UB, but can trap when built with
explicit overflow catching (ubsan or similar). fix it now.

note that promotion to uint32_t is safe and portable even outside of
the assumptions usually made in musl, since either uint32_t has rank
at least unsigned int, so that no further default promotions happen,
or int is wide enough that the shift can't overflow. this is a
desirable property to have in case someone wants to reuse the code
elsewhere.
|
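a sketch of the pattern in question (function name is illustrative):
with the first operand promoted to uint32_t, the whole expression is
safe, for the reasons given above.

    #include <stdint.h>

    static uint32_t get_be32(const unsigned char *p)
    {
        /* without the cast, p[0] promotes to (signed) int and p[0]<<24
           can overflow when p[0] >= 0x80, which is undefined behavior */
        return (uint32_t)p[0]<<24 | p[1]<<16 | p[2]<<8 | p[3];
    }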
|
analogous to commit b287cd745c2243f8e5114331763a5a9813b5f6ee but for
the custom FILE stream type the wcstol and wcstod family use. __toread
could be used here as well, but there's a simple direct fix to make
the buffer pointers initially valid for subtraction, so just do that
to avoid pulling in stdio exit code in programs that don't use stdio.
|
the sh version of fesetround or'd the new rounding mode onto the
control register without clearing the old rounding mode bits, making
changes sticky. this was the root cause of multiple test failures.
|
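a generic C rendering of the bug and the fix (the real code is sh
assembly; the mask name and value are assumptions for illustration,
standing in for the FPSCR rounding-mode field):

    #define RM_MASK 0x3u   /* hypothetical: rounding-mode bits of the control register */

    static unsigned set_round(unsigned ctrl, unsigned mode)
    {
        /* the buggy version did the equivalent of: return ctrl | mode;
           old mode bits were never cleared, so changes became sticky */
        return (ctrl & ~RM_MASK) | (mode & RM_MASK);
    }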
|
apparently this function was intended at some point to be used by
strto* family as well, and thus was put in its own file; however, as
far as I can tell, it's only ever been used by vsscanf. move it to the
same file to reduce the number of source files and external symbols.
|
this idea came up when I thought we might need to zero the UNGET
portion of buf as well, but it seems like a useful improvement even
though that turned out not to be necessary.
|
shgetc sets up to be able to perform an "unget" operation without the
caller having to remember and pass back the character value, and for
this purpose used a conditional store idiom:

    if (f->rpos[-1] != c) f->rpos[-1] = c;

to make it safe to use with non-writable buffers (set up by the
sh_fromstring macro or __string_read with sscanf).

however, validity of this depends on the buffer space at rpos[-1]
being initialized, which is not the case under some conditions
(including at least unbuffered files and fmemopen ones).

whenever data was read "through the buffer", the desired character
value is already in place and does not need to be written. thus,
rather than testing for the absence of the value, we can test for
rpos<=buf, indicating that the last character read could not have come
from the buffer, and thereby that we have a "real" buffer (possibly of
zero length) with writable pushback (UNGET bytes) below it.
|
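concretely, the change in the store condition amounts to something like
this (a sketch of the idiom, not a verbatim quote of the source):

    /* old: conditional store; reads possibly-uninitialized buffer space */
    if (f->rpos[-1] != c) f->rpos[-1] = c;

    /* new: store only when the byte cannot already be sitting in the
       buffer, i.e. it was not read "through the buffer"; in that case
       rpos[-1] lies in the writable UNGET pushback area */
    if (f->rpos <= f->buf) f->rpos[-1] = c;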
|
as reported/analyzed by Pascal Cuoq, the shlim and shcnt
macros/functions are called by the scanf core (vfscanf) with f->rpos
potentially null (if the FILE is not yet activated for reading at the
time of the call). in this case, they compute differences between a
null pointer (f->rpos) and a non-null one (f->buf), resulting in
undefined behavior.

it's unlikely that any observably wrong behavior occurred in practice,
at least without LTO, due to limits on what's visible to the compiler
from translation unit boundaries, but this has not been checked.

fix is simply ensuring that the FILE is activated for read mode before
entering the main scanf loop, and erroring out early if it can't be.
|
TZ containing a timezone name with >TZNAME_MAX characters currently
breaks musl's timezone parsing. getname() stops after TZNAME_MAX
characters. getoff() will consume no characters (because the next
character is not a digit) and incorrectly return 0. Then, because
there are remaining alphabetic characters, __daylight == 1, and
dst_off == -3600.

getname() must consume the entire timezone name, even if it will not
fit in d/__tzname, so when it returns, s points to the offset digits.
|
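a simplified sketch of the required behavior (unquoted names only; the
limit constant and buffer size are assumptions for the example): copy
at most TZNAME_MAX characters, but keep consuming input so the caller
is left pointing at the offset digits.

    #include <ctype.h>
    #include <stddef.h>

    #define NAME_MAX_CHARS 6   /* stand-in for TZNAME_MAX */

    /* d must have room for NAME_MAX_CHARS+1 bytes */
    static void getname(char *d, const char **p)
    {
        const char *s = *p;
        size_t i;
        for (i=0; isalpha((unsigned char)s[i]); i++)
            if (i < NAME_MAX_CHARS) d[i] = s[i];
        d[i < NAME_MAX_CHARS ? i : NAME_MAX_CHARS] = 0;
        *p = s + i;   /* consume the whole name, even the truncated part */
    }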
|
Parsing the timezone name must stop when reaching the null terminator.
In that case, there is no '>' to skip.
|
Commit d9bdfd164 ("fix memccpy to not access buffer past given size")
correctly added a check for 'n' nonzero, but made the pre-existing test
'*s==c' redundant: n!=0 implies *s==c. Remove the unnecessary check.
Reported by Alexey Izbyshev.
|
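A byte-at-a-time sketch (not musl's optimized implementation) makes the
redundancy visible: if the loop exits with n still nonzero, the only
possible reason is that the byte just copied compared equal to c.

    #include <stddef.h>

    void *memccpy(void *restrict dest, const void *restrict src, int c, size_t n)
    {
        unsigned char *d = dest;
        const unsigned char *s = src;
        c = (unsigned char)c;
        for (; n && (*d=*s)!=c; n--, s++, d++);
        if (n) return d+1;   /* *s==c is implied here; no need to re-test */
        return 0;
    }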
|
change the current O(n) lookup to O(1) based on the machinery
described in "How To Write Shared Libraries" (Appendix B).
|
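the machinery boils down to storing the strings as one concatenated,
relocation-free blob plus a small offset table, so a lookup is a single
index rather than a walk; a toy illustration (the strings and offsets
here are placeholders, not the actual table):

    static const char msgs[] =
        "Illegal byte sequence\0"
        "Domain error\0"
        "Result not representable\0";

    static const unsigned short msgoff[] = { 0, 22, 35 };

    static const char *lookup(unsigned i)
    {
        return msgs + msgoff[i];   /* O(1), and no pointers needing relocation */
    }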
|
kernel commit 4693916846269d633a3664586650dbfac2c5562f (first included
in release v4.14) silently fixed a bug whereby the reserved space
(which was later used for high bits of time) in IPC_STAT structures
was left untouched rather than zeroed. this means that a caller that
wants to read the high bits needs to pre-zero the memory.

since it's not clear that these operations are permitted to modify the
destination buffer on failure, use a temp buffer and copy back to the
caller's buffer on success.
|
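the resulting pattern is roughly as follows (a sketch; the raw-syscall
wrapper name is hypothetical and msqid_ds is just one of the affected
structures):

    #include <string.h>
    #include <sys/msg.h>

    extern int raw_ipc_stat(int id, struct msqid_ds *buf);   /* hypothetical */

    int stat_queue(int id, struct msqid_ds *out)
    {
        struct msqid_ds tmp;
        memset(&tmp, 0, sizeof tmp);   /* reserved/high time bits read as 0 on old kernels */
        int r = raw_ipc_stat(id, &tmp);
        if (!r) *out = tmp;            /* touch the caller's buffer only on success */
        return r;
    }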
|
commit 59324c8b0950ee94db846a50554183c845ede160 added __socketcall
analogous to __syscall, returning the negated error rather than
setting errno. use it to simplify the fallback path of socket(),
avoiding extern calls and access to errno.
|
this reverts commit 4ee039f3545976f9e3e25a7e5d7b58f1f2316dc3, which
added the helper as a hack to make vdprintf usable before relocation,
contingent on strong assumptions about the arch and tooling, back when
the dynamic linker did not have a real staged model for
self-relocation. since commit f3ddd173806fd5c60b3f034528ca24542aecc5b9
this has been unnecessary and the function was just wasting size and
execution time.
|
The final rounding operation should be done with the correct sign
otherwise huge results may incorrectly get rounded to or away from
infinity in upward or downward rounding modes.

This affected sinh and sinhf which set the sign on the result after
a potentially overflowing mul. There may be other non-nearest rounding
issues, but this was a known long standing issue with large ulp error
(depending on how ulp is defined near infinity).

The fix should have no effect on sinh and sinhf performance but may
have a tiny effect on cosh and coshf.
|
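A small standalone demonstration of why the order matters (none of this
is the libm code itself): under upward rounding, an overflowing positive
product rounds to +inf, while the same magnitude with the sign already
applied rounds to -DBL_MAX.

    #include <fenv.h>
    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        volatile double big = DBL_MAX;   /* volatile: keep the compiler from folding */
        fesetround(FE_UPWARD);
        double a = -(big * 2.0);         /* rounds up to +inf first, then negated: -inf */
        double b = (-big) * 2.0;         /* negative overflow rounds up to -DBL_MAX */
        printf("%g\n%g\n", a, b);
        return 0;
    }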
|
Handle when after reduction |y| > pi/4+tiny. This happens in directed
rounding modes because the fast round to int code does not give the
nearest integer. In such cases the reduction may not be symmetric
between x and -x so e.g. cos(x)==cos(-x) may not hold (but polynomial
evaluation is not symmetric either with directed rounding so fixing
that would require more changes with bigger performance impact).

The fix only adds two predictable branches in nearest rounding mode,
simple ubenchmark does not show relevant performance regression in
nearest rounding mode.

The code could be improved: e.g. reducing the medium size threshold
such that two step reduction is enough instead of three, and the
single precision case can avoid the issue by doing the round to int
differently, but this fix was kept minimal.
|
because struct stat is no longer assumed to correspond to the
structure used by the stat-family syscalls, it's not valid to make any
of these syscalls directly using a buffer of type struct stat.

commit 9493892021eac4edf1776d945bcdd3f7a96f6978 moved all logic around
this change for stat-family functions into fstatat.c, making the
others wrappers for it. but a few other direct uses of the syscall
were overlooked. the ones in tmpnam/tempnam are harmless since the
syscalls are just used to test for file existence. however, the uses
in fchmodat and __map_file depend on getting accurate file properties,
and these functions may actually have been broken on one or more mips
variants due to removal of conversion hacks from syscall_arch.h.

as a low-risk fix, simply use struct kstat in place of struct stat in
the affected places.
|
these did not truncate excess precision in the return value. fixing
them looks like considerable work, and the current C code seems to
outperform them significantly anyway.

long double functions are left in place because they are not subject
to excess precision issues and probably better than the C code.
|
this commit is for the sake of reviewable history.
|
analogous to commit 1c9afd69051a64cf085c6fb3674a444ff9a43857 for
atan[2][f].
|
for functions implemented in C, this is a requirement of C11 (F.6);
strictly speaking that text does not apply to standard library
functions, but it seems to be intended to apply to them, and C2x is
expected to make it a requirement.

failure to drop excess precision is particularly bad for inverse trig
functions, where a value with excess precision can be outside the
range of the function (entire range, or range for a particular
subdomain), breaking reasonable invariants a caller may expect.
|
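on targets that evaluate float/double expressions in wider registers
(FLT_EVAL_METHOD > 0, e.g. x87), the usual way to force the conversion
is a store to a variable of the nominal type, which a conforming
compiler must narrow; musl's internal helpers are along these lines
(shown as a sketch, not a verbatim copy):

    static inline float eval_as_float(float x)
    {
        float y = x;   /* assignment discards excess range and precision */
        return y;
    }

    static inline double eval_as_double(double x)
    {
        double y = x;
        return y;
    }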
|
this extends commit 5a105f19b5aae79dd302899e634b6b18b3dcd0d6, removing
timer[fd]_settime and timer[fd]_gettime. the timerfd ones are likely
to have been used in software that started using them before it could
rely on libc exposing functions.
|
this extends commit 5a105f19b5aae79dd302899e634b6b18b3dcd0d6, removing
clock_settime, clock_getres, clock_nanosleep, and settimeofday.