path: root/src/thread/pthread_mutex_trylock.c
* use weak symbols for the POSIX functions that will be used by C threads (Jens Gustedt, 2014-09-06, 1 file, -1/+3)

  The intent of this is to avoid name space pollution of the C threads implementation.

  This has two sides to it. First we have to provide symbols that wouldn't pollute the name space for the C threads implementation. Second we have to clean up some internal uses of POSIX functions such that they don't implicitly drag in such symbols.
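  a minimal sketch of the pattern, using the weak_alias macro from musl's internal headers; the trylock body here is a placeholder:

      #include <pthread.h>

      /* musl-style alias macro: the strong definition lives under a
       * namespace-safe name, the POSIX name is only a weak reference */
      #define weak_alias(old, new) \
          extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

      int __pthread_mutex_trylock(pthread_mutex_t *m)
      {
          (void)m;
          return 0; /* placeholder for the real trylock logic */
      }

      weak_alias(__pthread_mutex_trylock, pthread_mutex_trylock);

  with this in place, the C threads code can call __pthread_mutex_trylock directly, and the pthread_* symbol only enters the link if the application itself references it.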
* fix possible failure-to-wake deadlock with robust mutexes (Rich Felker, 2014-08-17, 1 file, -1/+4)

  when the kernel is responsible for waking waiters on a robust mutex whose owner died, it does not have a waiters count available and must rely entirely on the waiter bit of the lock value.

  normally, this bit is only set by newly arriving waiters, so it will be clear if no new waiters arrived after the current owner obtained the lock, even if there are other waiters present. leaving it clear is desirable because it allows timed-lock operations to remove themselves as waiters and avoid causing unnecessary futex wake syscalls.

  however, for process-shared robust mutexes, we need to set the bit whenever there are existing waiters so that the kernel will know to wake them. for non-process-shared robust mutexes, the wake happens in userspace and can look at the waiters count, so the bit does not need to be set in the non-process-shared case.
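  a hedged sketch of the trylock-side consequence, assuming musl's lock-word layout (bit 31 = waiters flag, low bits = owner tid); the struct, the a_cas stand-in, and the function name are illustrative:

      #include <errno.h>

      struct mutex_sketch { volatile int lock; volatile int waiters; };

      /* stand-in for musl's internal a_cas: returns the old value */
      static int a_cas(volatile int *p, int t, int s)
      {
          __atomic_compare_exchange_n(p, &t, s, 0,
                                      __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
          return t;
      }

      static int try_acquire_pshared_robust(struct mutex_sketch *m, int tid)
      {
          /* the kernel's owner-death wake path sees only the lock word,
           * so existing waiters must be published in bit 31 of the value
           * the new owner stores, not merely in the waiters count */
          if (m->waiters) tid |= 0x80000000;
          return a_cas(&m->lock, 0, tid) ? EBUSY : 0;
      }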
* make pointers used in robust list volatile (Rich Felker, 2014-08-17, 1 file, -3/+5)

  when manipulating the robust list, the order of stores matters, because the code may be asynchronously interrupted by a fatal signal and the kernel will then access the robust list in what is essentially an async-signal context.

  previously, aliasing considerations made it seem unlikely that a compiler could reorder the stores, but proving that they could not be reordered incorrectly would have been extremely difficult. instead I've opted to make all the pointers used as part of the robust list, including those in the robust list head and in the individual mutexes, volatile.

  in addition, the format of the robust list has been changed to point back to the head at the end, rather than ending with a null pointer. this is to match the documented kernel robust list ABI. the null pointer, which was previously used, only worked because faults during access terminate the robust list processing.
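  a sketch of the list head, matching the layout and circular shape described above; embedding it in a thread structure is left out for brevity:

      /* every link pointer, and the pointed-to links, are volatile so
       * the compiler cannot reorder or elide the stores the kernel
       * depends on seeing during its async-signal-style list walk */
      struct {
          volatile void *volatile head;    /* circular: the final node
                                              points back here, matching
                                              the kernel robust-list ABI */
          long off;                        /* offset from a node to its
                                              mutex's lock word */
          volatile void *volatile pending; /* node being linked/unlinked,
                                              for crash-window recovery */
      } robust_list = { &robust_list.head, 0, 0 };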
* fix robust mutex unrecoverable status, and related clean-up (Rich Felker, 2014-08-16, 1 file, -8/+2)

  a robust mutex should not enter the unrecoverable status until it's unlocked without marking it consistent.

  previously, flag 8 in the type was used as an indication of unrecoverable, but only honored after successful locking; this resulted in a race window where the unrecoverable mutex could appear to a second thread as locked/busy again while the first thread was in the process of observing it as unrecoverable.

  now, flag 8 is used to mean that the mutex is in the process of being recovered, but not yet marked consistent. the flag only takes effect in pthread_mutex_unlock, where it causes the value 0x40000000 (owner dead flag, with old owner tid 0, an otherwise impossible state) to be stored in the lock. subsequent lock attempts will interpret this state as unrecoverable.
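  a sketch of the unlock-side state machine, using C11 atomics in place of musl's internal a_store; the flag and lock values are the ones named above, the rest is illustrative:

      #include <stdatomic.h>

      /* flag 8 in ->type: recovered (locked via EOWNERDEAD) but not
       * yet marked consistent */
      struct mutex_sketch { _Atomic int lock; int type; };

      static void unlock_sketch(struct mutex_sketch *m)
      {
          int next = 0;
          if (m->type & 8)
              /* owner-died bit with owner tid 0, an otherwise impossible
               * state: later lock attempts read it as unrecoverable */
              next = 0x40000000;
          atomic_store(&m->lock, next);
      }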
* fix false ownership of mutexes due to tid reuse, using robust list (Rich Felker, 2014-08-16, 1 file, -12/+16)

  per the resolution of Austin Group issue 755, the POSIX requirement that ownership be enforced for recursive and error-checking mutexes does not allow a random new thread to acquire ownership of an orphaned mutex just because it happened to be assigned the same tid as the original owner that exited with the mutex locked.

  one possible fix for this issue would be to disallow the kernel thread to terminate when it exited with mutexes held, permanently reserving the tid against reuse. however, this does not solve the problem for process-shared mutexes where lifetime cannot be controlled, so it was not used.

  the alternate approach I've taken is to reuse the robust mutex system for non-robust recursive and error-checking mutexes. when a thread exits, the kernel (or the new userspace robust-list code added in commit b092f1c5fa9c048e12d002c7b972df5ecbe96d1d) will set the owner-died bit for these orphaned mutexes, but since the mutex-type is not robust, pthread_mutex_trylock will not allow a new owner to acquire them. instead, they remain in a state of being permanently locked, as desired.
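  a sketch of the check this implies in trylock, using the flag values named elsewhere in this log (0x40000000 = owner-died bit, type flag 4 = robust); returning EBUSY for the permanently-locked case is an assumption:

      #include <errno.h>

      struct mutex_sketch { volatile int lock; int type; };

      static int orphan_check(const struct mutex_sketch *m)
      {
          int old = m->lock;
          if ((old & 0x40000000) && !(m->type & 4))
              /* the owner died holding a non-robust recursive or
               * error-checking mutex: refuse new owners, leaving the
               * mutex permanently locked */
              return EBUSY;
          return 0; /* proceed with the normal acquisition path */
      }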
* make futex operations use private-futex mode when possible (Rich Felker, 2014-08-15, 1 file, -13/+16)

  private-futex uses the virtual address of the futex int directly as the hash key rather than requiring the kernel to resolve the address to an underlying backing for the mapping in which it lies. for certain usage patterns it improves performance significantly.

  in many places, the code using futex __wake and __wait operations was already passing a correct fixed zero or nonzero flag for the priv argument, so no change was needed at the site of the call, only in the __wake and __wait functions themselves. in other places, especially where the process-shared attribute for a synchronization object was not previously tracked, additional new code is needed. for mutexes, the only place to store the flag is in the type field, so additional bit masking logic is needed for accessing the type.

  for non-process-shared condition variable broadcasts, the futex requeue operation is unable to requeue from a private futex to a process-shared one in the mutex structure, so requeue is simply disabled in this case by waking all waiters.

  for robust mutexes, the kernel always performs a non-private wake when the owner dies. in order not to introduce a behavioral regression in non-process-shared robust mutexes (when the owning thread dies), they are simply forced to be treated as process-shared for now, giving correct behavior at the expense of performance. this can be fixed by adding explicit code to pthread_exit to do the right thing for non-shared robust mutexes in userspace rather than relying on the kernel to do it, and will be fixed in this way later.

  since not all supported kernels have private futex support, the new code detects EINVAL from the futex syscall and falls back to making the call without the private flag. no attempt to cache the result is made; caching it and using the cached value efficiently is somewhat difficult, and not worth the complexity when the benefits would be seen only on ancient kernels which have numerous other limitations and bugs anyway.
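  a minimal sketch of the EINVAL fallback, assuming a Linux futex syscall; musl's real __wake helper differs in detail but has the same shape:

      #include <errno.h>
      #include <linux/futex.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      static void wake_sketch(volatile int *addr, int cnt, int priv)
      {
          if (priv) {
              if (syscall(SYS_futex, addr, FUTEX_WAKE | FUTEX_PRIVATE_FLAG,
                          cnt, 0, 0, 0) != -1 || errno != EINVAL)
                  return;
              /* kernel too old for private futexes: fall through and
               * redo the call without the flag */
          }
          syscall(SYS_futex, addr, FUTEX_WAKE, cnt, 0, 0, 0);
      }

  per the message above, the EINVAL result is deliberately not cached: the retry only ever costs anything on kernels old enough to lack private-futex support.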
* replace all remaining internal uses of pthread_self with __pthread_self (Rich Felker, 2014-06-10, 1 file, -1/+1)

  prior to version 1.1.0, the difference between pthread_self (the public function) and __pthread_self (the internal macro or inline function) was that the former would lazily initialize the thread pointer if it was not already initialized, whereas the latter would crash in this case.

  since lazy initialization is no longer supported, use of pthread_self no longer makes sense; it simply generates larger, slower code.
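  an illustrative x86_64 rendering of the internal accessor: a bare thread-pointer read that inlines to a single instruction, with no lazy-initialization branch left to carry around:

      struct pthread;

      static inline struct pthread *__pthread_self(void)
      {
          struct pthread *self;
          __asm__ ("mov %%fs:0,%0" : "=r" (self));
          return self;
      }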
* recovering ownerdead robust mutex must reset recursive lock count (Rich Felker, 2011-10-03, 1 file, -0/+1)
* use count=0 instead of 1 for recursive mutex with only one lock reference (Rich Felker, 2011-10-03, 1 file, -2/+0)

  this simplifies the code paths slightly, but perhaps what's nicer is that it makes recursive mutexes fully reentrant, i.e. locking and unlocking from a signal handler works even if the interrupted code was in the middle of locking or unlocking.
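  a sketch of the counting convention, with illustrative fields: only a re-lock by the existing owner touches the counter, so the first lock and the last unlock are each a single atomic operation on the lock word and stay safe to interrupt:

      struct mutex_sketch { volatile int lock; volatile int count; };

      static int recursive_relock(struct mutex_sketch *m, int self_tid)
      {
          if ((m->lock & 0x7fffffff) == self_tid) {
              m->count++;   /* second acquisition -> count 1, and so on */
              return 0;
          }
          /* not the owner: the caller falls through to the atomic fast
           * path, whose success leaves count at its initial 0 */
          return -1;
      }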
* avoid accessing mutex memory after atomic unlock (Rich Felker, 2011-08-02, 1 file, -7/+7)

  this change is needed to fix a race condition and ensure that it's possible to unlock and destroy or unmap the mutex as soon as pthread_mutex_lock succeeds. POSIX explicitly gives such an example in the rationale and requires an implementation to allow such usage.
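  the safe ordering, sketched with C11 atomics and a stubbed futex wake; field names are illustrative:

      #include <stdatomic.h>

      struct mutex_sketch { _Atomic int lock; volatile int waiters; };

      /* stub: a futex wake passes only the *address* to the kernel */
      static void wake_one(void *addr) { (void)addr; }

      static void unlock_sketch(struct mutex_sketch *m)
      {
          /* read everything the wake decision needs *before* the release
           * store; the instant the store lands, another thread may lock,
           * unlock, and destroy or unmap the mutex */
          int waiters = m->waiters;
          atomic_store(&m->lock, 0);       /* last touch of the contents */
          if (waiters) wake_one(&m->lock); /* address only, no load */
      }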
* debloat: use __syscall instead of syscall where possible (Rich Felker, 2011-04-17, 1 file, -1/+1)

  don't waste time (and significant code size due to function call overhead!) setting errno when the result of a syscall does not matter or when it can't fail.
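  the distinction at a call site, sketched with a hypothetical wake whose result is irrelevant:

      #include <linux/futex.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      static void wake_ignoring_result(volatile int *lk)
      {
          /* the public wrapper stores errno on failure and pays the
           * variadic-call overhead: */
          syscall(SYS_futex, lk, FUTEX_WAKE, 1);
          /* musl's internal __syscall(SYS_futex, lk, FUTEX_WAKE, 1)
           * instead returns the raw -ERRNO value and sets nothing,
           * which is all a caller that discards the result needs */
      }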
* cheap trick to further optimize locking normal mutexes (Rich Felker, 2011-04-14, 1 file, -1/+1)
* revert mutex "optimization" that turned out to be worse (Rich Felker, 2011-03-29, 1 file, -1/+1)
* global cleanup to use the new syscall interface (Rich Felker, 2011-03-20, 1 file, -2/+2)
* implement robust mutexes (Rich Felker, 2011-03-17, 1 file, -3/+35)

  some of this code should be cleaned up, e.g. using macros for some of the bit flags, masks, etc. nonetheless, the code is believed to be working and correct at this point.
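  a rough sketch of the lock-side bookkeeping, assuming a list head like the one shown earlier on this page; names and ordering details are illustrative, and the real code is more careful about what the kernel may observe mid-update:

      struct node { volatile void *volatile next, *volatile prev; };

      struct thread_sketch {
          struct { volatile void *volatile head; long off;
                   volatile void *volatile pending; } robust_list;
      };

      static void robust_enqueue(struct thread_sketch *self, struct node *m)
      {
          /* announce the in-progress link so a death mid-update still
           * leaves the kernel a consistent view */
          self->robust_list.pending = m;
          m->prev = &self->robust_list.head;
          m->next = self->robust_list.head;
          self->robust_list.head = m;
          self->robust_list.pending = 0;
      }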
* unify lock and owner fields of mutex structure (Rich Felker, 2011-03-17, 1 file, -3/+2)

  this change is necessary to free up one slot in the mutex structure so that we can use doubly-linked lists in the implementation of robust mutexes.
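  the resulting shape, sketched with illustrative names: the owner tid doubles as the lock state, and the slot that previously held a separate owner field is free for a second list pointer:

      struct mutex_sketch {
          int type;
          volatile int lock;    /* 0 = unlocked, else owner tid plus flag
                                   bits; formerly two fields, lock + owner */
          volatile int waiters;
          volatile void *volatile prev, *volatile next; /* robust links */
      };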
* optimize contended normal mutex case; add int compare-and-swap atomic (Rich Felker, 2011-03-17, 1 file, -1/+1)
* simplify logic, slightly optimize contended case for non-default mutex types (Rich Felker, 2011-03-16, 1 file, -4/+2)
* correct error returns for error-checking mutexes (Rich Felker, 2011-03-16, 1 file, -1/+1)
* simplify and optimize pthread_mutex_trylock (Rich Felker, 2011-03-08, 1 file, -17/+16)
* fix and optimize non-default-type mutex behavior (Rich Felker, 2011-03-08, 1 file, -15/+12)

  problem 1: mutex type from the attribute was being ignored by pthread_mutex_init, so recursive/errorchecking mutexes were never being used at all.

  problem 2: ownership of recursive mutexes was not being enforced at unlock time.
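  both fixes, sketched against a toy mutex standing in for musl's real layout; names are illustrative:

      #include <errno.h>
      #include <pthread.h>

      struct mutex_sketch { int type; volatile int owner; volatile int count; };

      static int init_sketch(struct mutex_sketch *m, const pthread_mutexattr_t *a)
      {
          int type = PTHREAD_MUTEX_NORMAL;
          if (a) pthread_mutexattr_gettype(a, &type);
          m->type = type;            /* fix 1: stop discarding the attribute */
          m->owner = 0;
          m->count = 0;
          return 0;
      }

      static int unlock_sketch(struct mutex_sketch *m, int self_tid)
      {
          /* fix 2: non-normal mutexes reject unlock by a non-owner */
          if (m->type != PTHREAD_MUTEX_NORMAL && m->owner != self_tid)
              return EPERM;
          if (m->type == PTHREAD_MUTEX_RECURSIVE && m->count) {
              m->count--;
              return 0;
          }
          m->owner = 0;              /* the real code releases the lock
                                        word atomically and wakes waiters */
          return 0;
      }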
* reorganize pthread data structures and move the definitions to alltypes.h (Rich Felker, 2011-02-17, 1 file, -13/+13)

  this allows sys/types.h to provide the pthread types, as required by POSIX.

  this design also facilitates forcing ABI-compatible sizes in the arch-specific alltypes.h, while eliminating the need for developers changing the internals of the pthread types to poke around with arch-specific headers they may not be able to test.
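  the pattern, sketched with an illustrative size: the arch's alltypes.h pins the ABI footprint behind an opaque union, and the implementation's private header maps real field names onto that storage:

      /* arch-specific alltypes.h (array sizes chosen for illustration) */
      typedef struct { union { int __i[10]; void *__p[5]; } __u; } pthread_mutex_t;

      /* internal pthread_impl.h-style accessors, e.g. */
      #define _m_type __u.__i[0]
      #define _m_lock __u.__i[1]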
* initial check-in, version 0.5.0 [tag: v0.5.0] (Rich Felker, 2011-02-12, 1 file, -0/+28)