path: root/src/thread

* fix signed left-shift overflow in pthread_condattr_setpshared (Rich Felker, 2015-03-04; 1 file, -1/+1)

* make all objects used with atomic operations volatile (Rich Felker, 2015-03-03; 9 files, -16/+18)
  the memory model we use internally for atomics permits plain loads of values which may be subject to concurrent modification without requiring that a special load function be used. since a compiler is free to make transformations that alter the number of loads or the way in which loads are performed, the compiler is theoretically free to break this usage. the most obvious concern is with atomic cas constructs: something of the form tmp=*p;a_cas(p,tmp,f(tmp)); could be transformed to a_cas(p,*p,f(*p)); where the latter is intended to show multiple loads of *p whose resulting values might fail to be equal; this would break the atomicity of the whole operation. but even more fundamental breakage is possible.

  with the changes being made now, objects that may be modified by atomics are modeled as volatile, and the atomic operations performed on them by other threads are modeled as asynchronous stores by hardware which happens to be acting on the request of another thread. such modeling of course does not itself address memory synchronization between cores/cpus, but that aspect was already handled. this all seems less than ideal, but it's the best we can do without mandating a C11 compiler and using the C11 model for atomics.

  in the case of pthread_once_t, the ABI type of the underlying object is not volatile-qualified. so we are assuming that accessing the object through a volatile-qualified lvalue via casts yields volatile access semantics. the language of the C standard is somewhat unclear on this matter, but this is an assumption the linux kernel also makes, and seems to be the correct interpretation of the standard.
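
  As a rough illustration of the cas pattern discussed above, here is a minimal standalone sketch (not musl source; a gcc builtin stands in for musl's a_cas) showing why the object read in the retry loop is declared volatile:

      #include <stdio.h>

      static int a_cas(volatile int *p, int t, int s)
      {
          /* stand-in for musl's arch-specific a_cas primitive */
          return __sync_val_compare_and_swap(p, t, s);
      }

      static volatile int counter;   /* volatile: the compiler may not duplicate loads */

      static void increment(void)
      {
          int tmp;
          do tmp = counter;                  /* exactly one plain load per retry */
          while (a_cas(&counter, tmp, tmp+1) != tmp);
          /* without the volatile qualifier, a compiler could legally rewrite the
           * loop as a_cas(&counter, counter, counter+1), performing two loads that
           * might observe different values and breaking atomicity */
      }

      int main(void)
      {
          increment();
          printf("%d\n", counter);
          return 0;
      }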

* suppress masked cancellation in pthread_join (Rich Felker, 2015-03-02; 1 file, -1/+5)
  like close, pthread_join is a resource-deallocation function which is also a cancellation point. the intent of masked cancellation mode is to exempt such functions from failure with ECANCELED.

* fix namespace issue in pthread_join affecting thrd_join (Rich Felker, 2015-03-02; 1 file, -1/+2)
  pthread_testcancel is not in the ISO C reserved namespace and thus cannot be used here. use the namespace-protected version of the function instead.

* factor cancellation cleanup push/pop out of futex __timedwait function (Rich Felker, 2015-03-02; 7 files, -24/+21)
  previously, the __timedwait function was optionally a cancellation point depending on whether it was passed a pointer to a cleanup function and context to register. as of now, only one caller actually used such a cleanup function (and it may face removal soon); most callers either passed a null pointer to disable cancellation or a dummy cleanup function.

  now, __timedwait is never a cancellation point, and __timedwait_cp is the cancellable version. this makes the intent of the calling code more obvious and avoids ugly dummy functions and long argument lists.

* fix failure of internal futex __timedwait to report ECANCELED (Rich Felker, 2015-02-27; 1 file, -1/+1)
  as part of abstracting the futex wait, this function suppresses all futex error values which callers should not see using a whitelist approach. when the masked cancellation mode was added, the new ECANCELED error was not whitelisted. this omission caused the new pthread_cond_wait code using masked cancellation to exhibit a spurious wake (rather than acting on cancellation) when the request arrived after blocking on the cond var.

* fix breakage in pthread_cond_wait due to typo (Rich Felker, 2015-02-23; 1 file, -1/+1)
  due to accidental use of = instead of ==, the error code was always set to zero in the signaled wake case for non-shared cv waits. suppressing ETIMEDOUT (the only possible wait error) is harmless and actually permitted in this case, but suppressing mutex errors could give the caller false information about the state of the mutex. commit 8741ffe625363a553e8f509dc3ca7b071bdbab47 introduced this regression and commit d9da1fb8c592469431c764732d09f7756340190e preserved it when reorganizing the code.

* simplify cond var code now that cleanup handler is not needed (Rich Felker, 2015-02-22; 1 file, -86/+63)

* fix pthread_cond_wait cancellation race (Rich Felker, 2015-02-22; 1 file, -5/+38)
  it's possible that signaling a waiter races with cancellation of that same waiter. previously, cancellation was acted upon, causing the signal to be consumed with no waiter returning. by using the new masked cancellation state, it's possible to refuse to act on the cancellation request and instead leave it pending.

  to ease review and understanding of the changes made, this commit leaves the unwait function, which was previously the cancellation cleanup handler, in place. additional simplifications could be made by removing it.

* add new masked cancellation mode (Rich Felker, 2015-02-21; 2 files, -10/+16)
  this is a new extension which is presently intended only for experimental and internal libc use. interface and behavior details may change subject to feedback and experience from using it internally.

  the basic concept for the new PTHREAD_CANCEL_MASKED state is that the first cancellation point to observe the cancellation request fails with an errno value of ECANCELED rather than acting on cancellation, allowing the caller to process the status and choose whether/how to act upon it.

* prepare cancellation syscall asm for possibility of __cancel returning (Rich Felker, 2015-02-20; 5 files, -11/+32)

* make pthread_exit responsible for disabling cancellation (Rich Felker, 2015-02-16; 2 files, -3/+2)
  this requirement is tucked away in XSH 2.9.5 Thread Cancellation under the heading Thread Cancellation Cleanup Handlers.

* use the internal macro name FUTEX_PRIVATE in __wait (Szabolcs Nagy, 2015-02-09; 1 file, -1/+1)
  the name was recently added for the setxid/synccall rework, so use the name now that we have it.

* fix missing memory barrier in cancellation signal handler (Rich Felker, 2015-02-03; 1 file, -0/+1)
  in practice this was probably a non-issue, because the necessary barrier almost certainly exists in kernel space -- implementing signal delivery without such a barrier seems impossible -- but for the sake of correctness, it should be done here too.

  in principle, without a barrier, it is possible that the thread to be cancelled does not see the store of its cancellation flag performed by another thread. this affects both the case where the signal arrives before entering the critical program counter range from __cp_begin to __cp_end (in which case both the signal handler and the inline check fail to see the value which was already stored) and the case where the signal arrives during the critical range (in which case the signal handler should be responsible for cancellation, but when it does not see the cancellation flag, it assumes the signal is spurious and refuses to act on it).

  in the fix, the barrier is placed only in the signal handler, not in the inline check at the beginning of the critical program counter range. if the signal handler runs before the critical range is entered, it will of course take no action, but its barrier will ensure that the inline check subsequently sees the store. if on the other hand the inline check runs first, it may miss seeing the store, but the subsequent signal handler in the critical range will act upon the cancellation request. this strategy avoids adding a memory barrier in the common, non-cancellation code path.
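
  A minimal sketch of the barrier placement described above (names are illustrative, __sync_synchronize stands in for musl's a_barrier, and the real flag lives in the thread structure rather than a global):

      #include <signal.h>

      static volatile int cancel_requested;

      static void cancel_handler(int sig)
      {
          (void)sig;
          __sync_synchronize();             /* make the canceller's store visible here */
          if (!cancel_requested) return;    /* genuinely spurious: refuse to act */
          /* ... act on cancellation (in musl, branch to the cancellation path) ... */
      }

      int main(void)
      {
          struct sigaction sa = { .sa_handler = cancel_handler };
          sigaction(SIGUSR1, &sa, 0);
          cancel_requested = 1;             /* in the real code, stored by the cancelling thread */
          raise(SIGUSR1);
          return 0;
      }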

* overhaul __synccall and fix AS-safety and other issues in set*id (Rich Felker, 2015-01-15; 2 files, -45/+138)
  multi-threaded set*id and setrlimit use the internal __synccall function to work around the kernel's wrongful treatment of these process properties as thread-local. the old implementation of __synccall failed to be AS-safe, despite POSIX requiring setuid and setgid to be AS-safe, and was not rigorous in assuring that all threads were caught. in a worst case, threads late in the process of exiting could retain permissions after setuid reported success, in which case attacks to regain dropped permissions may have been possible under the right conditions.

  the new implementation of __synccall depends on the presence of /proc/self/task and will fail if it can't be opened, but is able to determine that it has caught all threads, and does not use any locks except its own. it thereby achieves AS-safety simply by blocking signals to preclude re-entry in the same thread.

  with this commit, all known conformance and safety issues in set*id functions should be fixed.
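
  A sketch of the /proc/self/task enumeration the new __synccall relies on; opendir/readdir are used purely for illustration, whereas the real function must use raw syscalls and signal blocking to remain AS-safe:

      #include <dirent.h>
      #include <stdlib.h>

      int for_each_tid(void (*fn)(int tid))
      {
          DIR *d = opendir("/proc/self/task");
          if (!d) return -1;                        /* as noted above, fails if unavailable */
          struct dirent *de;
          while ((de = readdir(d))) {
              if (de->d_name[0] == '.') continue;   /* skip "." and ".." */
              fn(atoi(de->d_name));                 /* one entry per thread id in the process */
          }
          closedir(d);
          return 0;
      }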

* suppress EINTR in sem_wait and sem_timedwait (Rich Felker, 2015-01-15; 1 file, -1/+1)
  per POSIX, the EINTR condition is an optional error for these functions, not a mandatory one. since old kernels (pre-2.6.22) failed to honor SA_RESTART for the futex syscall, it's dangerous to trust EINTR from the kernel. thankfully POSIX offers an easy way out.

* fix __aeabi_read_tp oversight in arm atomics/tls overhaul (Rich Felker, 2014-11-22; 1 file, -4/+0)
  calls to __aeabi_read_tp may be generated by the compiler to access TLS on pre-v6 targets. previously, this function was hard-coded to call the kuser helper, which would crash on kernels with kuser helper removed.

  to fix the problem most efficiently, the definition of __aeabi_read_tp is moved so that it's an alias for the new __a_gettp. however, on v7+ targets, code to initialize the runtime choice of thread-pointer loading code is not even compiled, meaning that defining __aeabi_read_tp would have caused an immediate crash due to using the default implementation of __a_gettp with a HCF instruction. fortunately there is an elegant solution which reduces overall code size: putting the native thread-pointer loading instruction in the default code path for __a_gettp, so that separate default/native code paths are not needed. this function should never be called before __set_thread_area anyway, and if it is called early on pre-v6 hardware, the old behavior (crashing) is maintained.

  ideally __aeabi_read_tp would not be called at all on v7+ targets anyway -- in fact, prior to the overhaul, the same problem existed, but it was never caught by users building for v7+ with kuser disabled. however, it's possible for calls to __aeabi_read_tp to end up in a v7+ binary if some of the object files were built for pre-v7 targets, e.g. in the case of static libraries that were built separately, so this case needs to be handled.

* overhaul ARM atomics/tls for performance and compatibility (Rich Felker, 2014-11-19; 1 file, -12/+1)
  previously, builds for pre-armv6 targets hard-coded use of the "kuser helper" system for atomics and thread-pointer access, resulting in binaries that fail to run (crash) on systems where this functionality has been disabled (as a security/hardening measure) in the kernel. additionally, builds for armv6 hard-coded an outdated/deprecated memory barrier instruction which may require emulation (extremely slow) on future models.

  this overhaul replaces the behavior for all pre-armv7 builds (both of the above cases) to perform runtime detection of the appropriate mechanisms for barrier, atomic compare-and-swap, and thread pointer access. detection is based on information provided by the kernel in auxv: presence of the HWCAP_TLS bit for AT_HWCAP and the architecture version encoded in AT_PLATFORM. direct use of the instructions is preferred when possible, since probing for the existence of the kuser helper page would be difficult and would incur runtime cost.

  for builds targeting armv7 or later, the runtime detection code is not compiled at all, and much more efficient versions of the non-cas atomic operations are provided by using ldrex/strex directly rather than wrapping cas.
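
  The detection input described above can be read with getauxval; a small sketch (the HWCAP_TLS fallback value is the bit documented in the linux arm hwcap header, and the actual dispatch is omitted):

      #include <sys/auxv.h>

      #ifndef HWCAP_TLS
      #define HWCAP_TLS (1 << 15)   /* arm AT_HWCAP bit for the TLS register */
      #endif

      /* nonzero if the hardware thread-pointer register can be read directly;
       * AT_PLATFORM ("v5l", "v6l", "v7l", ...) supplies the architecture version
       * consulted for the barrier/cas choice */
      int have_native_tls(void)
      {
          return (getauxval(AT_HWCAP) & HWCAP_TLS) != 0;
      }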

* manually "shrink wrap" fast path in pthread_once (Rich Felker, 2014-10-20; 1 file, -8/+12)
  this change is a workaround for the inability of current compilers to perform "shrink wrapping" optimizations. in casual testing, it roughly doubled the performance of pthread_once when called on an already-finished once control object.

* eliminate global waiters count in pthread_once (Rich Felker, 2014-10-13; 1 file, -9/+13)

* fix missing barrier in pthread_once/call_once shortcut path (Rich Felker, 2014-10-10; 1 file, -2/+6)
  these functions need to be fast when the init routine has already run, since they may be called very often from code which depends on global initialization having taken place. as such, a fast path bypassing atomic cas on the once control object was used to avoid heavy memory contention. however, on archs with weakly ordered memory, the fast path failed to ensure that the caller actually observes the side effects of the init routine.

  preliminary performance testing showed that simply removing the fast path was not practical; a performance drop of roughly 85x was observed with 20 threads hammering the same once control on a 24-core machine. so the new explicit barrier operation from atomic.h is used to retain the fast path while ensuring memory visibility. performance may be reduced on some archs where the barrier actually makes a difference, but the previous behavior was unsafe and incorrect on these archs. future improvements to the implementation of a_barrier should reduce the impact.
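
  A standalone sketch of the fast-path-plus-barrier shape described above (gcc builtins stand in for musl's atomics, a yield loop stands in for the futex wait, and the state values are illustrative):

      #include <sched.h>

      static volatile int once_state;       /* 0 = not started, 1 = in progress, 2 = done */

      static void once_slow(void (*init)(void))
      {
          for (;;) {
              int prev = __sync_val_compare_and_swap(&once_state, 0, 1);
              if (prev == 0) {
                  init();
                  __sync_synchronize();     /* publish init's side effects */
                  once_state = 2;
                  return;
              }
              if (prev == 2) return;        /* another thread finished meanwhile */
              sched_yield();                /* the real code waits on a futex instead */
          }
      }

      void my_once(void (*init)(void))
      {
          if (once_state == 2) {            /* fast path: no atomic rmw, no contention */
              __sync_synchronize();         /* the barrier this commit adds */
              return;
          }
          once_slow(init);
      }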

* add C11 thread creation and related thread functions (Rich Felker, 2014-09-07; 9 files, -7/+82)
  based on patch by Jens Gustedt. the main difficulty here is handling the difference between start function signatures and thread return types for C11 threads versus POSIX threads. pointers to void are assumed to be able to represent faithfully all values of int. the function pointer for the thread start function is cast to an incorrect type for passing through pthread_create, but is cast back to its correct type before calling so that the behavior of the call is well-defined.

  changes to the existing threads implementation were kept minimal to reduce the risk of regressions, and duplication of code that carries implementation-specific assumptions was avoided for ease and safety of future maintenance.
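
  A sketch of the signature-bridging rule relied on above: a function pointer may be converted to another function pointer type and back, as long as the call goes through the correct type. The heap-allocated carrier here is an assumption made to keep the example self-contained; musl instead threads the pointer through pthread_create's internals.

      #include <pthread.h>
      #include <stdint.h>
      #include <stdlib.h>

      typedef int (*thrd_start_t)(void *);

      struct carrier {
          void *(*fn)(void *);              /* C11 start fn stored under the "wrong" type */
          void *arg;
      };

      static void *trampoline(void *p)
      {
          struct carrier c = *(struct carrier *)p;
          free(p);
          thrd_start_t real = (thrd_start_t)c.fn;   /* cast back before calling */
          return (void *)(intptr_t)real(c.arg);     /* int result represented in void* */
      }

      int my_thrd_create(pthread_t *td, thrd_start_t fn, void *arg)
      {
          struct carrier *c = malloc(sizeof *c);
          if (!c) return -1;
          c->fn = (void *(*)(void *))fn;            /* deliberate conversion for transport */
          c->arg = arg;
          int r = pthread_create(td, 0, trampoline, c);
          if (r) free(c);
          return r;
      }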

* add C11 condition variable functions (Jens Gustedt, 2014-09-06; 6 files, -0/+57)
  Because of the clear separation of the private pthread_cond_t implementation, these interfaces are quite simple and direct.

* add C11 mutex functions (Jens Gustedt, 2014-09-06; 6 files, -0/+69)

* add C11 thread functions operating on tss_t and once_flag (Jens Gustedt, 2014-09-06; 5 files, -0/+42)
  These all have POSIX equivalents, but aside from tss_get, they all have minor changes to the signature or return value and thus need to exist as separate functions.
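
  A sketch of the kind of thin wrapper described above; the return-value translation is the main difference, and the assignment between the key types is an assumption that holds on common implementations (this is not musl's actual source):

      #include <pthread.h>
      #include <threads.h>

      int my_tss_create(tss_t *key, tss_dtor_t dtor)
      {
          pthread_key_t k;
          /* pthread_key_create reports an errno value; C11 wants thrd_success/thrd_error */
          if (pthread_key_create(&k, dtor)) return thrd_error;
          *key = k;                         /* assumes tss_t can represent a pthread_key_t */
          return thrd_success;
      }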

* use weak symbols for the POSIX functions that will be used by C threads (Jens Gustedt, 2014-09-06; 14 files, -28/+73)
  The intent of this is to avoid name space pollution of the C threads implementation. This has two sides to it. First we have to provide symbols that wouldn't pollute the name space for the C threads implementation. Second we have to clean up some internal uses of POSIX functions such that they don't implicitly drag in such symbols.
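
  The mechanism described above follows musl's usual weak_alias pattern; a self-contained sketch with deliberately fake names (the macro shown is the conventional gcc attribute form):

      #define weak_alias(old, new) \
          extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

      /* namespace-clean internal definition that the C threads code can call directly */
      int __example_join(void *td, void **res)
      {
          (void)td; (void)res;
          return 0;
      }

      /* the POSIX-style public name is only weakly defined, so a program that
       * references just the internal name does not create a hard dependency on
       * the public symbol, and a program defining its own symbol of that name
       * does not conflict with this weak definition */
      weak_alias(__example_join, example_pthread_join);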

* make non-waiting paths of sem_[timed]wait and pthread_join cancelable (Rich Felker, 2014-09-05; 2 files, -0/+3)
  per POSIX these functions are both cancellation points, so they must act on any cancellation request which is pending prior to the call. previously, only the code path where actual waiting took place could act on cancellation.

* refrain from spinning on locks when there is already a waiter (Rich Felker, 2014-08-25; 5 files, -5/+5)
  if there is already a waiter for a lock, spinning on the lock is essentially an attempt to steal it from whichever waiter would obtain it via any priority rules in place, and is therefore undesirable. in the current implementation, there is always an inherent race window at unlock during which a newly-arriving thread may steal the lock from the existing waiters, but we should aim to keep this window minimal rather than enlarging it.

* spin before waiting on futex in mutex and rwlock lock operations (Rich Felker, 2014-08-25; 3 files, -0/+20)

* spin in sem_[timed]wait before performing futex wait (Rich Felker, 2014-08-25; 1 file, -0/+5)
  empirically, this increases the maximum rate of wait/post operations between two threads by 20-150 times on machines I tested, including x86 and arm. conceptually, it makes sense to do some spinning because semaphores are intended to be usable as a notification mechanism between threads, not just as locks, and low-latency notification is a valuable property to have.

* sanitize number of spins in userspace before futex wait (Rich Felker, 2014-08-25; 2 files, -2/+2)
  the previous spin limit of 10000 was utterly unreasonable. empirically, it could consume up to 200000 cycles, whereas a failed futex wait (EAGAIN) typically takes 1000 cycles or less, and even a true wait/wake round seems much less expensive.

  the new counts (100 for general wait, 200 in barrier) were simply chosen to be in the range of what's reasonable without having adverse effects on casual micro-benchmark tests I have been running. they may still be too high, from a standpoint of not wasting cpu cycles, but at least they're a lot better than before. rigorous testing across different archs and cpu models should be performed at some point to determine whether further adjustments should be made.
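
  A sketch of the bounded spin-then-wait shape used by the spinning entries above; the count of 100 matches the general-wait value mentioned, the waiter check reflects the "refrain from spinning" rule, and the rest is illustrative rather than musl's __wait:

      #include <linux/futex.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      void wait_on(volatile int *addr, int val, volatile int *waiters)
      {
          int spins = 100;
          /* spin only while uncontended: if someone is already waiting, spinning
           * would just try to steal the wakeup from that waiter */
          while (spins-- && *addr == val && !(waiters && *waiters))
              ;
          if (waiters) __sync_fetch_and_add(waiters, 1);
          while (*addr == val)
              syscall(SYS_futex, addr, FUTEX_WAIT | FUTEX_PRIVATE_FLAG, val, 0, 0, 0);
          if (waiters) __sync_fetch_and_sub(waiters, 1);
      }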

* fix false ownership of stdio FILEs due to tid reuse (Rich Felker, 2014-08-23; 1 file, -0/+2)
  this is analogous to commit fffc5cda10e0c5c910b40f7be0d4fa4e15bb3f48 which fixed the corresponding issue for mutexes. the robust list can't be used here because the locks do not share a common layout with mutexes. at some point it may make sense to simply incorporate a mutex object into the FILE structure and use it, but that would be a much more invasive change, and it doesn't mesh well with the current design that uses a simpler code path for internal locking and pulls in the recursive-mutex-like code when the flockfile API is used explicitly.

* fix fallback checks for kernels without private futex support (Rich Felker, 2014-08-22; 4 files, -4/+4)
  for unknown syscall commands, the kernel produces ENOSYS, not EINVAL.
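
  A sketch of the corrected fallback: attempt the private command first, and retry without the flag only when the kernel rejects the command with ENOSYS (illustrative, not musl's __wake):

      #include <errno.h>
      #include <linux/futex.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      void wake_one(volatile int *addr, int priv)
      {
          if (priv) {
              long r = syscall(SYS_futex, addr, FUTEX_WAKE | FUTEX_PRIVATE_FLAG, 1);
              if (r != -1 || errno != ENOSYS) return;   /* succeeded, or failed for another reason */
          }
          syscall(SYS_futex, addr, FUTEX_WAKE, 1);      /* kernels without private futex support */
      }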

* fix use of uninitialized memory with application-provided thread stacks (Rich Felker, 2014-08-22; 1 file, -0/+2)
  the subsequent code in pthread_create and the code which copies TLS initialization images to the new thread's TLS space assume that the memory provided to them is zero-initialized, which is true when it's obtained by pthread_create using mmap. however, when the caller provides a stack using pthread_attr_setstack, pthread_create cannot make any assumptions about the contents. simply zero-filling the relevant memory in this case is the simplest and safest fix.

* further simplify and optimize new cond var (Rich Felker, 2014-08-18; 1 file, -29/+21)
  the main idea of the changes made is to have waiters wait directly on the "barrier" lock that was used to prevent them from making forward progress too early rather than first waiting on the atomic state value and then attempting to lock the barrier.

  in addition, adjustments to the mutex waiter count are optimized. previously, each waking waiter decremented the count (unless it was the first) then immediately incremented it again for the next waiter (unless it was the last). this was a roundabout way of achieving the equivalent of incrementing it once for the first waiter and decrementing it once for the last.

* simplify and improve new cond var implementation (Rich Felker, 2014-08-18; 1 file, -40/+22)
  previously, wake order could be unpredictable: if a waiter happened to leave its futex wait on the state early, e.g. due to EAGAIN while restarting after a signal handler, it could acquire the mutex out of turn. handling this required ugly O(n) list walking in the unwait function and accounting to remove waiters that already woke from the list.

  with the new changes, the "barrier" locks in each waiter node are only unlocked in turn. in addition to simplifying the code, this seems to improve performance slightly, probably by reducing the number of accesses threads make to each other's stacks.

  as an additional benefit, unrecoverable mutex re-locking errors (mainly ENOTRECOVERABLE for robust mutexes) no longer need to be handled with deadlock; they can be reported to the caller, since the unlocking sequence makes it unnecessary to rely on the mutex to synchronize access to the waiter list.

* redesign cond var implementation to fix multiple issues (Rich Felker, 2014-08-17; 5 files, -88/+209)
  the immediate issue that was reported by Jens Gustedt and needed to be fixed was corruption of the cv/mutex waiter states when switching to using a new mutex with the cv after all waiters were unblocked but before they finished returning from the wait function. self-synchronized destruction was also handled poorly and may have had race conditions. and the use of sequence numbers for waking waiters admitted a theoretical missed-wakeup if the sequence number wrapped through the full 32-bit space.

  the new implementation is largely documented in the comments in the source. the basic principle is to use linked lists initially attached to the cv object, but detachable on signal/broadcast, made up of nodes residing in automatic storage (stack) on the threads that are waiting. this eliminates the need for waiters to access the cv object after they are signaled, and allows us to limit wakeup to one waiter at a time during broadcasts even when futex requeue cannot be used. performance is also greatly improved, roughly double in some tests.

  basically nothing is changed in the process-shared cond var case, where this implementation does not work, since processes do not have access to one another's local storage.
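
  The stack-resident waiter node described above can be pictured roughly as follows (field names are illustrative rather than musl's exact layout):

      /* one of these lives in each waiter's own stack frame for the duration of
       * the wait; signal/broadcast detaches the list from the cv, so a woken
       * waiter never needs to touch the cv object again */
      struct waiter {
          struct waiter *prev, *next;   /* list initially attached to the cv */
          volatile int state;           /* e.g. waiting / signaled / leaving */
          volatile int barrier;         /* per-waiter lock released in wake order */
          volatile int *notify;         /* where to report the last waiter leaving */
      };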

* fix possible failure-to-wake deadlock with robust mutexes (Rich Felker, 2014-08-17; 1 file, -1/+4)
  when the kernel is responsible for waking waiters on a robust mutex whose owner died, it does not have a waiters count available and must rely entirely on the waiter bit of the lock value. normally, this bit is only set by newly arriving waiters, so it will be clear if no new waiters arrived after the current owner obtained the lock, even if there are other waiters present. leaving it clear is desirable because it allows timed-lock operations to remove themselves as waiters and avoid causing unnecessary futex wake syscalls.

  however, for process-shared robust mutexes, we need to set the bit whenever there are existing waiters so that the kernel will know to wake them. for non-process-shared robust mutexes, the wake happens in userspace and can look at the waiters count, so the bit does not need to be set in the non-process-shared case.

* make pointers used in robust list volatile (Rich Felker, 2014-08-17; 3 files, -9/+16)
  when manipulating the robust list, the order of stores matters, because the code may be asynchronously interrupted by a fatal signal and the kernel will then access the robust list in what is essentially an async-signal context. previously, aliasing considerations made it seem unlikely that a compiler could reorder the stores, but proving that they could not be reordered incorrectly would have been extremely difficult. instead I've opted to make all the pointers used as part of the robust list, including those in the robust list head and in the individual mutexes, volatile.

  in addition, the format of the robust list has been changed to point back to the head at the end, rather than ending with a null pointer. this is to match the documented kernel robust list ABI. the null pointer, which was previously used, only worked because faults during access terminate the robust list processing.
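
  For reference, the documented kernel robust-list ABI (see set_robust_list(2)), redeclared here with the volatile-qualified pointers this commit introduces (volatile does not change the layout); the initializer shows the points-back-to-the-head termination:

      /* layout-compatible with the kernel's struct robust_list / robust_list_head */
      struct robust_list { struct robust_list *volatile next; };

      struct robust_list_head {
          struct robust_list list;
          long futex_offset;                      /* offset from list entry to futex word */
          struct robust_list *volatile list_op_pending;
      };

      static struct robust_list_head rl_head = {
          .list = { .next = &rl_head.list },      /* empty list: next points back at the head */
          .futex_offset = 0,
          .list_op_pending = 0,
      };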

* fix robust mutex unrecoverable status, and related clean-up (Rich Felker, 2014-08-16; 3 files, -12/+4)
  a robust mutex should not enter the unrecoverable status until it's unlocked without marking it consistent. previously, flag 8 in the type was used as an indication of unrecoverable, but only honored after successful locking; this resulted in a race window where the unrecoverable mutex could appear to a second thread as locked/busy again while the first thread was in the process of observing it as unrecoverable.

  now, flag 8 is used to mean that the mutex is in the process of being recovered, but not yet marked consistent. the flag only takes effect in pthread_mutex_unlock, where it causes the value 0x40000000 (owner dead flag, with old owner tid 0, an otherwise impossible state) to be stored in the lock. subsequent lock attempts will interpret this state as unrecoverable.

* fix false ownership of mutexes due to tid reuse, using robust list (Rich Felker, 2014-08-16; 4 files, -23/+26)
  per the resolution of Austin Group issue 755, the POSIX requirement that ownership be enforced for recursive and error-checking mutexes does not allow a random new thread to acquire ownership of an orphaned mutex just because it happened to be assigned the same tid as the original owner that exited with the mutex locked.

  one possible fix for this issue would be to disallow the kernel thread to terminate when it exited with mutexes held, permanently reserving the tid against reuse. however, this does not solve the problem for process-shared mutexes where lifetime cannot be controlled, so it was not used.

  the alternate approach I've taken is to reuse the robust mutex system for non-robust recursive and error-checking mutexes. when a thread exits, the kernel (or the new userspace robust-list code added in commit b092f1c5fa9c048e12d002c7b972df5ecbe96d1d) will set the owner-died bit for these orphaned mutexes, but since the mutex-type is not robust, pthread_mutex_trylock will not allow a new owner to acquire them. instead, they remain in a state of being permanently locked, as desired.

* enable private futex for process-local robust mutexes (Rich Felker, 2014-08-16; 3 files, -1/+25)
  the kernel always uses non-private wake when walking the robust list when a thread or process exits, so it's not able to wake waiters listening with the private futex flag. this problem is solved by doing the equivalent in userspace as the last step of pthread_exit.

  care is taken to remove mutexes from the robust list before unlocking them so that the kernel will not attempt to access them again, possibly after another thread locks them. this removal code can treat the list as singly-linked, since no further code which would add or remove items is able to run at this point. moreover, the pending pointer is not needed since the mutexes being unlocked are all process-local; in the case of asynchronous process termination, they all cease to exist.

  since a process-local robust mutex cannot come into existence without a call to pthread_mutexattr_setrobust in the same process, the code for userspace robust list processing is put in that source file, and a weak alias to a dummy function is used to avoid pulling in this bloat as part of pthread_exit in static-linked programs.

* make futex operations use private-futex mode when possible (Rich Felker, 2014-08-15; 22 files, -64/+74)
  private-futex uses the virtual address of the futex int directly as the hash key rather than requiring the kernel to resolve the address to an underlying backing for the mapping in which it lies. for certain usage patterns it improves performance significantly.

  in many places, the code using futex __wake and __wait operations was already passing a correct fixed zero or nonzero flag for the priv argument, so no change was needed at the site of the call, only in the __wake and __wait functions themselves. in other places, especially where the process-shared attribute for a synchronization object was not previously tracked, additional new code is needed. for mutexes, the only place to store the flag is in the type field, so additional bit masking logic is needed for accessing the type.

  for non-process-shared condition variable broadcasts, the futex requeue operation is unable to requeue from a private futex to a process-shared one in the mutex structure, so requeue is simply disabled in this case by waking all waiters.

  for robust mutexes, the kernel always performs a non-private wake when the owner dies. in order not to introduce a behavioral regression in non-process-shared robust mutexes (when the owning thread dies), they are simply forced to be treated as process-shared for now, giving correct behavior at the expense of performance. this can be fixed by adding explicit code to pthread_exit to do the right thing for non-shared robust mutexes in userspace rather than relying on the kernel to do it, and will be fixed in this way later.

  since not all supported kernels have private futex support, the new code detects EINVAL from the futex syscall and falls back to making the call without the private flag. no attempt to cache the result is made; caching it and using the cached value efficiently is somewhat difficult, and not worth the complexity when the benefits would be seen only on ancient kernels which have numerous other limitations and bugs anyway.

* add or1k (OpenRISC 1000) architecture port (Stefan Kristiansson, 2014-07-18; 4 files, -0/+64)
  With the exception of a fenv implementation, the port is fully featured. The port has been tested in or1ksim, the golden reference functional simulator for OpenRISC 1000. It passes all libc-test tests (except the math tests that require a fenv implementation).

  The port assumes an or1k implementation that has support for atomic instructions (l.lwa/l.swa).

  Although it passes all the libc-test tests, the port is still in an experimental state, and has so far seen very little 'real-world' use.

* work around constant folding bug 61144 in gcc 4.9.0 and 4.9.1 (Rich Felker, 2014-07-16; 2 files, -4/+4)
  previously we detected this bug in configure and issued advice for a workaround, but this turned out not to work. since then gcc 4.9.0 has appeared in several distributions, and now 4.9.1 has been released without a fix despite this being a wrong code generation bug which is supposed to be a release-blocker, per gcc policy.

  since the scope of the bug seems to affect only data objects (rather than functions) whose definitions are overridable, and there are only a very small number of these in musl, I am just changing them from const to volatile for the time being. simply removing the const would be sufficient to make gcc 4.9.1 work (the non-const case was inadvertently fixed as part of another change in gcc), and this would also be sufficient with 4.9.0 if we forced -O0 on the affected files or on the whole build. however it's cleaner to just remove all the broken compiler detection and use volatile, which will ensure that they are never constant-folded. the quality of a non-broken compiler's output should not be affected except for the fact that these objects are no longer const and thus possibly add a few bytes to data/bss.

  this change can be reconsidered and possibly reverted at some point in the future when the broken gcc versions are no longer relevant.

* rename file containing pthread_cleanup_push and pop for consistency (Rich Felker, 2014-07-06; 1 file, -0/+0)

* rework cancellation weak alias logic not to depend on archive order (Rich Felker, 2014-07-06; 3 files, -6/+12)
  if the order of object files in the static archive libc.a was not respected by the linker, the old logic could wrongly cause POSIX symbols outside of the ISO C namespace to be pulled into pure C programs. this should not happen with well-behaved linkers, but relying on the link order was a bad idea anyway. files are renamed to better reflect their contents now that they don't need names to control their order as members in the archive file.

* eliminate use of cached pid from thread structure (Rich Felker, 2014-07-05; 4 files, -8/+5)
  the main motivation for this change is to remove the assumption that the tid of the main thread is also the pid of the process. (the value returned by the set_tid_address syscall was used to fill both fields despite it semantically being the tid.) this is historically and presently true on linux and unlikely to change, but it conceivably could be false on other systems that otherwise reproduce the linux syscall api/abi.

  only a few parts of the code were actually still using the cached pid. in a couple places (aio and synccall) it was a minor optimization to avoid a syscall. caching could be reintroduced, but lazily as part of the public getpid function rather than at program startup, if it's deemed important for performance later.

  in other places (cancellation and pthread_kill) the pid was completely unnecessary; the tkill syscall can be used instead of tgkill. this is actually a rather subtle issue, since tgkill is supposedly a solution to race conditions that can affect use of tkill. however, as documented in the commit message for commit 7779dbd2663269b465951189b4f43e70839bc073, tgkill does not actually solve this race; it just limits it to happening within one process rather than between processes. we use a lock that avoids the race in pthread_kill, and the use in the cancellation signal handler is self-targeted and thus not subject to tid reuse races, so both are safe regardless of which syscall (tgkill or tkill) is used.
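
  The self-targeted signalling mentioned above needs only the thread id; a minimal illustration of the tkill/tgkill distinction, using raw syscalls for portability of the example:

      #include <signal.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      void signal_thread(int tid, int sig)
      {
          /* tkill targets a thread id directly; tgkill additionally checks that the
           * tid belongs to the given process, which does not remove the reuse race */
          syscall(SYS_tkill, tid, sig);
          /* pid-qualified form: syscall(SYS_tgkill, getpid(), tid, sig); */
      }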

* add locale framework (Rich Felker, 2014-07-02; 1 file, -0/+7)
  this commit adds non-stub implementations of setlocale, duplocale, newlocale, and uselocale, along with the data structures and minimal code needed for representing the active locale on a per-thread basis and optimizing the common case where thread-local locale settings are not in use.

  at this point, the data structures only contain what is necessary to represent LC_CTYPE (a single flag) and LC_MESSAGES (a name for use in finding message translation files). representation for the other categories will be added later; the expectation is that a single pointer will suffice for each.

  for LC_CTYPE, the strings "C" and "POSIX" are treated as special; any other string is accepted and treated as "C.UTF-8". for other categories, any string is accepted after being truncated to a maximum supported length (currently 15 bytes). for LC_MESSAGES, the name is kept regardless of whether libc itself can use such a message translation locale, since applications using catgets or gettext should be able to use message locales libc is not aware of. for other categories, names which are not successfully loaded as locales (which, at present, means all names) are treated as aliases for "C". setlocale never fails.

  locale settings are not yet used anywhere, so this commit should have no visible effects except for the contents of the string returned by setlocale.

* separate __tls_get_addr implementation from dynamic linker/init_tls (Rich Felker, 2014-06-19; 1 file, -0/+17)
  such separation serves multiple purposes:

  - by having the common path for __tls_get_addr alone in its own function with a tail call to the slow case, code generation is greatly improved.

  - by having __tls_get_addr in its own file, it can be replaced on a per-arch basis as needed, for optimization or ABI-specific purposes.

  - by removing __tls_get_addr from __init_tls.c, a few bytes of code are shaved off of static binaries (which are unlikely to use this function unless the linker messed up).
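
  A self-contained sketch of the fast-path/tail-call split from the first point above; the dtv layout and names are illustrative, with a fake single-module dtv so the example links and runs:

      #include <stddef.h>
      #include <stdint.h>
      #include <stdio.h>

      static char module1_tls[64];                        /* pretend TLS block for module 1 */
      static uintptr_t fake_dtv[2] = { 1, (uintptr_t)module1_tls };  /* dtv[0] = length/generation */

      static uintptr_t *current_dtv(void) { return fake_dtv; }

      static void *tls_get_new(size_t *v)
      {
          /* out-of-line slow path: would allocate and install the module's block */
          (void)v;
          return 0;
      }

      void *my_tls_get_addr(size_t *v)                    /* v[0] = module id, v[1] = offset */
      {
          uintptr_t *dtv = current_dtv();
          if (v[0] <= dtv[0])                             /* common case: block already present */
              return (void *)(dtv[v[0]] + v[1]);
          return tls_get_new(v);                          /* tail call keeps the fast path lean */
      }

      int main(void)
      {
          size_t v[2] = { 1, 16 };
          printf("%p\n", my_tls_get_addr(v));
          return 0;
      }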