it previously returned the pseudo-monotonic-realtime clock reported by
times() rather than process cputime. it also violated the C namespace
by pulling in times().

we now use clock_gettime() if available, because times() has
ridiculously bad resolution. a fallback is still provided for ancient
kernels without clock_gettime.
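
a minimal sketch of the idea (not the actual implementation; the
times()-based fallback path is omitted here):

    #include <time.h>

    /* sketch: derive clock() from per-process cputime; returns -1 if
     * the clock is unavailable, where the real code would fall back to
     * the times() syscall on ancient kernels. */
    clock_t clock(void)
    {
        struct timespec ts;
        if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts))
            return -1;
        return ts.tv_sec*CLOCKS_PER_SEC
            + ts.tv_nsec/(1000000000/CLOCKS_PER_SEC);
    }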
|
this is a "nonstandard" function that was "rejected" by POSIX, but
nonetheless had its behavior documented in the POSIX rationale for
fork. it's present on solaris and possibly some other systems, and
duplicates the whole calling process, not just a single thread. glibc
does not have this function. it should not be used in programs
intending to be portable, but may be useful for testing,
checkpointing, etc. and it's an interesting (and quite small) example
of the usefulness of the __synccall framework originally written to
work around deficiencies in linux's setuid syscall.
|
fix up clone signature to match the actual behavior. the new
__syncall_wait function allows a __synccall callback to wait for other
threads to continue without returning, so that it can resume action
after the caller finishes. this interface could be made significantly
more general/powerful with minimal effort, but i'll wait to do that
until it's actually useful for something.
|
due to the barrier, it's safe just to block signals in the new thread,
rather than blocking and unblocking in the parent thread.
|
if a timer thread leaves signals unblocked, any future attempt by the
main thread to prevent the process from being terminated by blocking
signals will fail, since the signal can still be delivered to the
timer thread.
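
a sketch of the idea, with a hypothetical start function standing in
for the real timer thread:

    #include <signal.h>
    #include <pthread.h>

    /* sketch: a helper thread blocks every signal on entry, so blocking
     * signals in the main thread is enough to keep the whole process
     * from having a signal delivered. the function name is made up. */
    static void *timer_helper(void *arg)
    {
        sigset_t all;
        sigfillset(&all);
        pthread_sigmask(SIG_BLOCK, &all, 0);
        /* ... wait for and service timer expirations ... */
        return 0;
    }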
|
this works around pcc's lack of working support for weak references,
and in principle is nice because it gets us back to the stage where
the only weak symbol feature we use is weak aliases, nothing else.
having fewer dependencies on fancy linker features is a good thing.
|
the new absolute-time-based wait kernelside was hard to get right and
was basically just code duplication. it could only improve
"performance" when waiting, and even then, the improvement was just a
slight drop in cpu usage during a wait.

actually, with vdso clock_gettime, the "old" way will be even faster
than the "new" way if the time has already expired, since it will not
invoke any syscalls: it can determine entirely in userspace that it
needs to return ETIMEDOUT.
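
a sketch of the retained relative-wait approach, using raw futex via
syscall() and a made-up helper name rather than the real __timedwait:

    #define _GNU_SOURCE
    #include <errno.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>

    /* sketch: convert an absolute CLOCK_REALTIME deadline to a relative
     * timeout; if it has already passed, report ETIMEDOUT without
     * entering the kernel at all (clock_gettime is syscall-free with a
     * vdso). */
    static int wait_abs(volatile int *addr, int val, const struct timespec *at)
    {
        struct timespec now, rel;
        clock_gettime(CLOCK_REALTIME, &now);
        rel.tv_sec = at->tv_sec - now.tv_sec;
        rel.tv_nsec = at->tv_nsec - now.tv_nsec;
        if (rel.tv_nsec < 0) { rel.tv_sec--; rel.tv_nsec += 1000000000; }
        if (rel.tv_sec < 0) return ETIMEDOUT;
        if (syscall(SYS_futex, addr, FUTEX_WAIT, val, &rel) < 0)
            return errno == ETIMEDOUT ? ETIMEDOUT : 0;
        return 0;
    }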
|
normally we allow cancellation to be acted upon when a syscall fails
with EINTR, since there is no useful status to report to the caller in
this case, and the signal that caused the interruption was almost
surely the cancellation request, anyway.

however, unlike all other syscalls, close has actually performed its
resource-deallocation function whenever it returns, even when it
returned an error. if we allow cancellation at this point, the caller
has no way of informing the program that the file descriptor was
closed, and the program may later try to close the file descriptor
again, possibly closing a different, newly-opened file.

the workaround looks ugly (special-casing one syscall), but it's
actually the case that close is the one and only syscall (at least
among cancellation points) with this ugly property.
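
the hazard is easiest to see from the caller's side; a sketch of the
retry loop an application would otherwise be pushed toward:

    #include <unistd.h>
    #include <errno.h>

    /* sketch: if close() reported EINTR (or acted on cancellation)
     * after the fd was already deallocated, a caller that "retries"
     * could end up closing an unrelated descriptor opened in the
     * meantime by another thread. */
    void risky_retry(int fd)
    {
        while (close(fd) && errno == EINTR)
            ; /* fd may already be gone; a second close can hit a
               * different, newly opened file */
    }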
|
if the mask was saved, it would not be restored unless some low
signals were masked. if it was not saved, the mask could be wrongly
restored to uninitialized values. in either case, the wrong mask would
be restored.

i believe this function was written for a very old version of the
jmp_buf structure which did not contain a final 0 field for
compatibility with siglongjmp, and was never updated...
|
cleanup push and pop are also no-ops if pthread_exit is not reachable.
this can make a big difference for library code which needs to protect
itself against cancellation, but which is unlikely to actually be used
in programs with threads/cancellation.
|
previously, pthread_cleanup_push/pop were pulling in all of
pthread_create due to dependency on the __pthread_unwind_next
function. this was not needed, as cancellation cleanup handlers can
never be called unless pthread_exit or pthread_cancel is reachable.
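
a sketch of the general weak-alias technique involved (names and types
here are illustrative, not musl's internals):

    /* sketch: the no-op versions are what the linker uses unless the
     * strong definitions, which live alongside pthread_exit and
     * pthread_cancel, get pulled in. */
    struct cleanup_cb { void (*f)(void *); void *arg; };

    void __dummy_cleanup(struct cleanup_cb *cb) {}

    void cleanup_push_impl(struct cleanup_cb *cb)
        __attribute__((weak, alias("__dummy_cleanup")));
    void cleanup_pop_impl(struct cleanup_cb *cb)
        __attribute__((weak, alias("__dummy_cleanup")));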
|
like mutexes and semaphores, rwlocks suffered from a race condition
where the unlock operation could access the lock memory after another
thread successfully obtained the lock (and possibly destroyed or
unmapped the object). this has been fixed in the same way it was fixed
for other lock types.

in addition, the previous implementation favored writers over readers.
in the absence of other considerations, that is the best behavior for
rwlocks, and posix explicitly allows it. however, posix also requires
read locks to be recursive. if writers are favored, any attempt to
obtain a read lock while a writer is waiting for the lock will fail,
causing "recursive" read locks to deadlock. this can be avoided by
keeping track of which threads already hold read locks, but doing so
requires unbounded memory usage, and there must be a fallback case
that favors readers in case memory allocation failed. and all of this
must be synchronized. the cost, complexity, and risk of errors in
getting it right are too great, so we simply favor readers.

tracking of the owner of write locks has been removed, as it was not
useful for anything. it could allow deadlock detection, but it's not
clear to me that returning EDEADLK (which a buggy program is likely to
ignore) is better than deadlocking; at least the latter behavior
prevents further data corruption. a correct program cannot invoke this
situation anyway.

the reader count and write lock state, as well as the "last minute"
waiter flag, have all been combined into a single atomic lock. this
means all state transitions for the lock are atomic compare-and-swap
operations. this makes establishing correctness much easier and may
improve performance.

finally, some code duplication has been cleaned up. more is called
for, especially the standard __timedwait idiom repeated in all locks.
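
a sketch of the combined-state idea; the bit layout and helpers are
made up, using gcc atomic builtins in place of musl's:

    /* sketch: reader count, write-lock state, and the waiter flag
     * packed into one word, so every transition is a single
     * compare-and-swap. bit 31 = writer holds the lock, bit 30 =
     * "last minute" waiter, low bits = reader count. */
    #define RW_WRITER  0x80000000u
    #define RW_WAITER  0x40000000u
    #define RW_READERS 0x3fffffffu

    /* readers are admitted whenever no writer holds the lock, even if
     * a writer is queued: this is what keeps recursive read locks from
     * deadlocking. */
    static int try_rdlock(volatile unsigned *lock)
    {
        unsigned old = *lock;
        if (old & RW_WRITER) return 0;
        return __sync_bool_compare_and_swap(lock, old, old+1);
    }

    static int try_wrlock(volatile unsigned *lock)
    {
        return __sync_bool_compare_and_swap(lock, 0, RW_WRITER);
    }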
|
it's unclear whether EINVAL or ENOSYS is used when the operation is
not supported, so check for both...
|
futex returns EINVAL, not ENOSYS, when op is not supported.
unfortunately this looks just like EINVAL from other causes, and we
end up running the fallback code and getting EINVAL again. fortunately
this case should be rare since correct code should not generate EINVAL
anyway.
|
this decrement used to be performed by the cancellation handler, which
was called when popped.
|
new features:
- FUTEX_WAIT_BITSET op will be used for timed waits if available. this
saves a call to clock_gettime.
- error checking for the timespec struct is now inside __timedwait so
it doesn't need to be duplicated everywhere. cond_timedwait still
needs to duplicate it to avoid unlocking the mutex, though.
- pushing and popping the cancellation handler is delegated to
__timedwait, and cancellable/non-cancellable waits are unified.
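
a sketch of the op selection, using raw futex via syscall(); the
helper name is made up and this is not the real __timedwait:

    #define _GNU_SOURCE
    #include <errno.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>

    /* sketch: prefer the absolute-deadline wait so no clock_gettime
     * call is needed; kernels that reject the op fall back to the
     * clock_gettime + relative FUTEX_WAIT path sketched earlier in
     * this log. */
    static int timedwait_sketch(volatile int *addr, int val,
                                const struct timespec *abs)
    {
        if (!syscall(SYS_futex, addr,
                     FUTEX_WAIT_BITSET|FUTEX_CLOCK_REALTIME,
                     val, abs, 0, FUTEX_BITSET_MATCH_ANY)) return 0;
        if (errno == ENOSYS || errno == EINVAL) {
            /* old kernel (or, unfortunately, any other EINVAL cause):
             * fall back to the relative-timeout wait */
        }
        return errno;
    }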
|
this change is needed to fix a race condition and ensure that it's
possible to unlock and destroy or unmap the mutex as soon as
pthread_mutex_lock succeeds. POSIX explicitly gives such an example in
the rationale and requires an implementation to allow such usage.
|
sigaddset was not accepting SIGCANCEL as a valid signal number.
|
the race condition these changes address is described in glibc bug
report number 12674:
http://sourceware.org/bugzilla/show_bug.cgi?id=12674

up until now, musl has shared the bug, and i had not been able to
figure out how to eliminate it. in short, the problem is that it's not
valid for sem_post to inspect the waiters count after incrementing the
semaphore value, because another thread may have already successfully
returned from sem_wait, (rightly) deemed itself the only remaining
user of the semaphore, and chosen to destroy and free it (or unmap the
shared memory it's stored in). POSIX is not explicit in blessing this
usage, but it gives a very explicit analogous example with mutexes
(which, in musl and glibc, also suffer from the same race condition
bug) in the rationale for pthread_mutex_destroy.

the new semaphore implementation augments the waiter count with a
redundant waiter indication in the semaphore value itself,
representing the presence of "last minute" waiters that may have
arrived after sem_post read the waiter count. this allows sem_post to
read the waiter count prior to incrementing the semaphore value,
rather than after incrementing it, so as to avoid accessing the
semaphore memory whatsoever after the increment takes place.

a similar, but much simpler, fix should be possible for mutexes and
other locking primitives whose usage rules are stricter than
semaphores.
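
a sketch of the scheme with an illustrative layout (not musl's actual
semaphore representation):

    #define _GNU_SOURCE
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>

    /* sketch: a value of -1 means "count is zero, but a last-minute
     * waiter may exist", so post decides whether to wake using only
     * state read BEFORE the increment, and never loads from the
     * semaphore memory afterwards (the address handed to futex is not
     * dereferenced in userspace). */
    typedef struct { volatile int val; volatile int waiters; } sketch_sem;

    static void sketch_post(sketch_sem *s)
    {
        int old, waiters = s->waiters;
        do old = s->val;
        while (!__sync_bool_compare_and_swap(&s->val, old, old+1+(old<0)));
        if (old < 0 || waiters)
            syscall(SYS_futex, &s->val, FUTEX_WAKE, 1);
    }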
|
i had missed the fact that a couple values were unassigned...
|
per POSIX and RFC 3493:

If the specified address family is AF_INET, AF_INET6, or AF_UNSPEC,
the service can be specified as a string specifying a decimal port
number.

021 is a valid decimal number; therefore, interpreting it as octal
seems to be non-conformant.
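
a sketch of strictly-decimal service parsing (helper name made up):

    #include <stdlib.h>

    /* sketch: parse the service string as decimal only, so "021" is
     * port 21 rather than an octal (or rejected) conversion. */
    static int parse_port(const char *serv)
    {
        char *end;
        unsigned long port = strtoul(serv, &end, 10);
        if (end == serv || *end || port > 65535) return -1;
        return (int)port;
    }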
|
this race is fundamentally due to linux's bogus requirement that
userspace, rather than kernelspace, fill in the siginfo structure. an
intervening signal handler that calls fork could cause both the parent
and child process to send signals claiming to be from the parent,
which could in turn have harmful effects depending on what the
recipient does with the signal. we simply block all signals for the
interval between getuid and sigqueue syscalls (much like what raise()
does already) to prevent the race and make the getuid/sigqueue pair
atomic.

this will be a non-issue if linux is fixed to validate the siginfo
structure or fill it in from kernelspace.
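
a sketch of the fix, written against the public linux syscall
interface rather than musl's internal wrappers:

    #define _GNU_SOURCE
    #include <signal.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* sketch: block all signals so no handler (which might fork) can
     * run between reading our own pid/uid and queueing the signal
     * carrying those values in its siginfo. */
    static int sigqueue_sketch(pid_t pid, int sig, union sigval value)
    {
        siginfo_t si;
        sigset_t all, old;
        int r;
        memset(&si, 0, sizeof si);
        si.si_signo = sig;
        si.si_code = SI_QUEUE;
        si.si_value = value;
        sigfillset(&all);
        sigprocmask(SIG_BLOCK, &all, &old);
        si.si_pid = getpid();
        si.si_uid = getuid();
        r = syscall(SYS_rt_sigqueueinfo, pid, sig, &si);
        sigprocmask(SIG_SETMASK, &old, 0);
        return r;
    }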
|
it's nicer for the function that doesn't use errno to be independent,
and have the other one call it. saves some time and avoids clobbering
errno.
|
setrlimit is supposed to be per-process, not per-thread, but again
linux gets it wrong. work around this in userspace. not only is it
needed for correctness; setxid also depends on the resource limits for
all threads being the same to avoid situations where temporarily
lifting the limit succeeds in some threads but fails in others.
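
a sketch of the userspace workaround, with a hypothetical
for_each_thread() standing in for the __synccall machinery:

    #include <sys/resource.h>

    /* sketch only: for_each_thread() is a made-up stand-in for running
     * a callback in every thread of the process; the real code must
     * also collect and report per-thread errors. */
    void for_each_thread(void (*fn)(void *), void *arg);

    struct rl_ctx { int res; const struct rlimit *rlim; int err; };

    static void do_setrlimit_one(void *p)
    {
        struct rl_ctx *c = p;
        if (setrlimit(c->res, c->rlim)) c->err = 1;
    }

    static int setrlimit_allthreads(int res, const struct rlimit *rlim)
    {
        struct rl_ctx c = { res, rlim, 0 };
        for_each_thread(do_setrlimit_one, &c);
        return c.err ? -1 : 0;
    }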
|
previously, stdio used spinlocks, which would be unacceptable if we
ever add support for thread priorities, and which yielded
pathologically bad performance if an application attempted to use
flockfile on a key file as a major/primary locking mechanism.

i had held off on making this change for fear that it would hurt
performance in the non-threaded case, but actually support for
recursive locking had already inflicted that cost. by having the
internal locking functions store a flag indicating whether they need
to perform unlocking, rather than using the actual recursive lock
counter, i was able to combine the conditionals at unlock time,
eliminating any additional cost, and also avoid a nasty corner case
where a huge number of calls to ftrylockfile could cause deadlock
later at the point of internal locking.

this commit also fixes some issues with usage of pthread_self
conflicting with __attribute__((const)) which resulted in crashes with
some compiler versions/optimizations, mainly in flockfile prior to
pthread_create.
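
a sketch of the flag-based unlock idea; the structure and helper
declarations are hypothetical, not stdio's real internals:

    /* sketch: the internal lock helper reports whether it actually
     * took the lock, so the unlock path is a single conditional
     * whether or not the caller already held the file via flockfile. */
    typedef struct { int lock; long owner; } FILE_sketch;

    long self_id(void);             /* hypothetical: current thread id */
    void lock_take(int *l);         /* hypothetical: blocking lock     */
    void lock_drop(int *l);         /* hypothetical: release and wake  */

    static int file_lock(FILE_sketch *f)
    {
        if (f->owner == self_id()) return 0; /* recursive: no unlock owed */
        lock_take(&f->lock);
        f->owner = self_id();
        return 1;                            /* caller must unlock */
    }

    static void file_unlock(FILE_sketch *f, int need_unlock)
    {
        if (need_unlock) { f->owner = -1; lock_drop(&f->lock); }
    }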
|
changing credentials in a multi-threaded program is extremely
difficult on linux because it requires synchronizing the change
between all threads, which have their own thread-local credentials on
the kernel side. this is further complicated by the fact that changing
the real uid can fail due to exceeding RLIMIT_NPROC, making it
possible that the syscall will succeed in some threads but fail in
others.

the old __rsyscall approach being replaced was robust in that it would
report failure if any one thread failed, but in this case, the program
would be left in an inconsistent state where individual threads might
have different uids. (this was not as bad as glibc, which would
sometimes even fail to report the failure entirely!)

the new approach being committed refuses to change the real user id
when it cannot temporarily set the rlimit to infinity. this is
completely POSIX conformant since POSIX does not require an
implementation to allow real-user-id changes for non-privileged
processes whatsoever. still, setting the real uid can fail due to
memory allocation in the kernel, but this can only happen if there is
not already a cached object for the target user. thus, we forcibly
serialize the syscall attempts, and fail the entire operation on the
first failure. this *should* lead to an all-or-nothing success/failure
result, but it's still fragile and highly dependent on kernel
developers not breaking things worse than they're already broken.

ideally linux will eventually add a CLONE_USERCRED flag that would
give POSIX conformant credential changes without any hacks from
userspace, and all of this code would become redundant and could be
removed ~10 years down the line when everyone has abandoned the old
broken kernels. i'm not holding my breath...
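
a sketch of the rlimit policy only, with a hypothetical
broadcast_setuid() standing in for running the syscall in every
thread:

    #include <unistd.h>
    #include <sys/resource.h>

    int broadcast_setuid(uid_t uid); /* hypothetical: per-thread setuid
                                      * via __synccall, all-or-nothing */

    /* sketch: refuse to change the real uid unless RLIMIT_NPROC can be
     * lifted for the duration, so the per-thread syscalls cannot fail
     * partway through on that limit. */
    static int setuid_sketch(uid_t uid)
    {
        struct rlimit old, inf = { RLIM_INFINITY, RLIM_INFINITY };
        int r;
        if (getrlimit(RLIMIT_NPROC, &old)) return -1;
        if (setrlimit(RLIMIT_NPROC, &inf)) return -1; /* refuse the change */
        r = broadcast_setuid(uid);
        setrlimit(RLIMIT_NPROC, &old);
        return r;
    }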
|
this helps some tiny programs be even more tiny, and barely increases
code size even if both are used.
|
thanks to mikachu
per POSIX:
The setenv() function shall fail if:
[EINVAL] The name argument is a null pointer, points to an empty
string, or points to a string containing an '=' character.
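
a sketch of the corresponding check:

    #include <errno.h>
    #include <string.h>

    /* sketch: reject a null, empty, or '='-containing name with EINVAL
     * before touching the environment. */
    static int env_name_ok(const char *name)
    {
        if (!name || !*name || strchr(name, '=')) {
            errno = EINVAL;
            return 0;
        }
        return 1;
    }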
|
instead of creating temp dso objects on the stack and moving them to
the heap if dlopen/dlsym are used, use static objects to begin with,
and just donate them to malloc if we no longer need them.
|
these changes also make it so clock_gettime(CLOCK_REALTIME, &ts) works
even on pre-2.6 kernels, emulated via the gettimeofday syscall. there
is no cost for the fallback check, as it falls under the error case
that already must be checked for storing the error code in errno, but
which would normally be hidden inside __syscall_ret.
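
a sketch of the fallback, with a made-up wrapper name and raw
syscall() in place of the internal syscall macros:

    #define _GNU_SOURCE
    #include <errno.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/syscall.h>

    /* sketch: the ENOSYS check rides on the error path that has to
     * exist anyway; only CLOCK_REALTIME can be emulated this way, at
     * microsecond resolution. */
    static int clock_gettime_sketch(clockid_t clk, struct timespec *ts)
    {
        if (!syscall(SYS_clock_gettime, clk, ts)) return 0;
        if (errno == ENOSYS && clk == CLOCK_REALTIME) {
            struct timeval tv;
            gettimeofday(&tv, 0);
            ts->tv_sec = tv.tv_sec;
            ts->tv_nsec = tv.tv_usec * 1000;
            return 0;
        }
        return -1;
    }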
|
we cannot report failure after forking, so the idea is to ensure prior
to fork that fd 0,1,2 exist. this will prevent dup2 from possibly
hitting a resource limit and failing in the child process. fcntl
rather than dup2 is used prior to forking to avoid race conditions.
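
a sketch of one way to do the pre-fork check (the probe only; how a
missing descriptor is then handled is not shown):

    #include <fcntl.h>

    /* sketch: probe fds 0,1,2 with fcntl before forking, so the child
     * never discovers a missing descriptor at a point where failure
     * can no longer be reported. the probe creates nothing, so there
     * is no descriptor to race over. */
    static int stdio_fds_exist(void)
    {
        int fd;
        for (fd = 0; fd < 3; fd++)
            if (fcntl(fd, F_GETFD) < 0) return 0;
        return 1;
    }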