path: root/nptl/sysdeps/unix/sysv/linux/x86_64
Commit message — Author — Age — Files — Lines
* Move remaining nptl/sysdeps/unix/sysv/linux/x86_64/ files. (Roland McGrath, 2014-05-14; 25 files, -5323/+0)
* x86: Consolidate NPTL fork. (Roland McGrath, 2014-05-14; 1 file, -30/+0)
* Consolidate not-cancel.h files. (Roland McGrath, 2014-05-14; 1 file, -1/+0)
* x86_64: Remove useless pthread_spin_{init,unlock} wrapper files. (Roland McGrath, 2014-05-14; 2 files, -2/+0)
* Move x86_64 compat-timer.h out of nptl/. (Roland McGrath, 2014-05-14; 1 file, -45/+0)
* Move x86_64 timer_*.c out of nptl/. (Roland McGrath, 2014-05-14; 6 files, -230/+0)
* x86: Consolidate NPTL/non versions of clone. (Roland McGrath, 2014-05-14; 1 file, -9/+0)
* x86: Consolidate NPTL/non versions of vfork. (Roland McGrath, 2014-05-14; 2 files, -74/+0)
* Fix dwarf2 unwinding through futex functions. (Andi Kleen, 2014-03-26; 1 file, -179/+30)

  When profiling programs that have lock problems with perf record -g dwarf, libunwind currently cannot backtrace through the futex and unlock functions in pthread. This is because they use out-of-line sections, which cannot be described correctly in dwarf2 (I believe that needs dwarf3 or 4).

  This patch first removes the out-of-line sections. They only save a single jump, but cause a lot of pain. It then converts the now-inline lock code to use the now-standard gas .cfi_* directives. With these changes, libunwind/perf can backtrace through the futex functions.

  Longer term it would likely be better to just use C futex() functions on x86, like all the other architectures. This would clean the code up even more.
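The "C futex() functions" the commit suggests as a longer-term cleanup could look roughly like the following minimal futex-based lock. This is a sketch only (the three-state design popularized by Drepper's "Futexes Are Tricky"), assuming Linux and C11 atomics; the `simple_lock` type and function names are hypothetical, not glibc's:

```c
#include <errno.h>
#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

/* 0 = unlocked, 1 = locked (no waiters), 2 = locked (maybe waiters).  */
typedef struct { atomic_int state; } simple_lock;

static long futex(atomic_int *uaddr, int op, int val)
{
    /* Raw futex syscall; glibc exposes no futex() wrapper.  */
    return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

static void simple_lock_acquire(simple_lock *l)
{
    int expected = 0;
    if (atomic_compare_exchange_strong(&l->state, &expected, 1))
        return;                        /* uncontended fast path */
    /* Contended: mark "maybe waiters" and sleep until the word changes.  */
    while (atomic_exchange(&l->state, 2) != 0)
        futex(&l->state, FUTEX_WAIT, 2);
}

static void simple_lock_release(simple_lock *l)
{
    /* Only enter the kernel if someone may be sleeping on the word.  */
    if (atomic_exchange(&l->state, 0) == 2)
        futex(&l->state, FUTEX_WAKE, 1);
}
```

Because the fast path is plain C, the compiler emits ordinary CFI for it, which is exactly the unwinding property the patch had to hand-write in assembly.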
* Use glibc_likely instead of __builtin_expect. (Ondřej Bílka, 2014-02-10; 1 file, -1/+1)
* Update copyright notices with scripts/update-copyrights. (Allan McRae, 2014-01-01; 32 files, -32/+32)
* Remove --disable-versioning. (Joseph Myers, 2013-09-04; 1 file, -1/+1)
* Add the low level infrastructure for pthreads lock elision with TSX. (Andi Kleen, 2013-07-02; 1 file, -0/+23)

  Lock elision using TSX is a technique to optimize lock scaling. It allows locks to run in parallel using hardware support for a transactional execution mode in 4th generation Intel Core CPUs. See http://www.intel.com/software/tsx for more information.

  This patch implements a simple adaptive lock elision algorithm based on RTM. It enables elision for the pthread mutexes and rwlocks. The algorithm keeps track of whether a mutex successfully elides or not, and stops eliding for some time when it does not. When the CPU supports RTM, the elision path is automatically tried; otherwise any elision is disabled. The adaptation algorithm and its tuning are currently preliminary.

  The code adds some checks to the lock fast paths. Micro-benchmarks show little to no difference without RTM.

  This patch implements the low level "lll_" code for lock elision. Follow-on patches hook this into the pthread implementation.

  Changes with the RTM mutexes: lock elision in pthreads is generally compatible with existing programs. There are some obscure exceptions, which are expected to be uncommon. See the manual for more details.
  - A broken program that unlocks a free lock will crash. There are ways around this with some tradeoffs (more code in hot paths). I'm still undecided on what approach to take here; I have to wait for testing reports.
  - pthread_mutex_destroy of a locked mutex will return 0 instead of EBUSY.
  - There is also a similar situation with trylock outside the mutex, "knowing" that the mutex must be held due to some other condition. In this case an assert failure cannot be recovered. This situation is usually an existing bug in the program.
  - The same applies to the rwlocks. Some of the return values change (for example, there is no EDEADLK for an elided lock, unless it aborts; however, when elided it will also never deadlock, of course).
  - Timing changes, so broken programs that make assumptions about specific timing may expose already existing latent problems. Note that these broken programs will break in other situations too (loaded system, new faster hardware, compiler optimizations etc.).
  - Programs with non-recursive mutexes that take them recursively in a thread, and which would always deadlock without elision, may not always see a deadlock. The deadlock will only happen on an early or delayed abort (which typically happens at some point). This only happens for mutexes not explicitly set to PTHREAD_MUTEX_NORMAL or PTHREAD_MUTEX_ADAPTIVE_NP. PTHREAD_MUTEX_NORMAL mutexes do not elide.

  The elision default can be set at configure time. This patch implements the basic infrastructure for elision.
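The adaptive back-off the commit describes (stop eliding for a while after an abort) can be sketched as follows. This is an illustrative skeleton, not glibc's code: the RTM transaction itself (normally `_xbegin()`/`_xend()` from `<immintrin.h>`) is stubbed out so the sketch runs without TSX hardware, and all names and the `SKIP_LOCK_BUSY` tunable are hypothetical:

```c
#include <pthread.h>
#include <stdbool.h>

#define SKIP_LOCK_BUSY 3   /* acquisitions to skip after an abort (tunable) */

struct elided_mutex {
    pthread_mutex_t lock;
    int try_lock;          /* > 0: skip elision for this many acquisitions */
};

/* Stub standing in for an RTM transaction attempt; pretends the
   transaction always aborts, as on hardware without TSX.  */
static bool try_transaction(struct elided_mutex *m)
{
    (void) m;
    return false;
}

static void elided_lock(struct elided_mutex *m)
{
    if (m->try_lock <= 0) {
        if (try_transaction(m))
            return;                     /* running elided; lock not taken */
        m->try_lock = SKIP_LOCK_BUSY;   /* adapt: back off from elision */
    } else {
        m->try_lock--;                  /* still backing off */
    }
    pthread_mutex_lock(&m->lock);       /* fall back to the real lock */
}

static void elided_unlock(struct elided_mutex *m)
{
    pthread_mutex_unlock(&m->lock);
}
```

The per-mutex counter is the whole of the adaptation: a successful elision costs nothing extra, while an abort pushes subsequent acquisitions onto the conventional path for a while, which is why micro-benchmarks without RTM see little difference.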
* x86*: Return syscall error for lll_futex_wake. (Carlos O'Donell, 2013-06-10; 1 file, -4/+5)

  It is entirely possible that the futex syscall returns an error and that the caller of lll_futex_wake may want to look at that error and propagate the failure. This patch allows a caller to see the syscall error. There are no users of the syscall error at present, but future cleanups will now be able to check for it.

  nptl/
  2013-06-10  Carlos O'Donell  <carlos@redhat.com>

          * sysdeps/unix/sysv/linux/i386/lowlevellock.h (lll_futex_wake):
          Return syscall error.
          * sysdeps/unix/sysv/linux/x86_64/lowlevellock.h (lll_futex_wake):
          Return syscall error.
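At the C level, the intent of the change can be sketched like this: instead of discarding the futex syscall's result, hand it back so callers can act on a failure. glibc's lll_futex_wake is an assembly-level macro; this wrapper is illustrative only, and the `futex_wake` name is an assumption:

```c
#include <errno.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Wake up to NR_TO_WAKE waiters sleeping on FUTEX_WORD.  Returns the
   number of waiters actually woken on success, or a negative errno
   value on error, so the caller can propagate the failure.  */
static long futex_wake(int *futex_word, int nr_to_wake)
{
    long ret = syscall(SYS_futex, futex_word, FUTEX_WAKE, nr_to_wake,
                       NULL, NULL, 0);
    return ret < 0 ? -errno : ret;
}
```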
* Fix static build when configured with --disable-hidden-plt. (Siddhesh Poyarekar, 2013-04-04; 1 file, -1/+1)

  Fixes BZ #15337. Static builds fail with the following warning when the source is configured with --disable-hidden-plt:

  /home/tools/glibc/glibc/nptl/../nptl/sysdeps/unix/sysv/linux/x86_64/cancellation.S:80: undefined reference to `__GI___pthread_unwind'

  This is because the preprocessor conditional in cancellation.S only checks whether the build is for SHARED, whereas hidden_def is defined appropriately only for a SHARED build that has symbol versioning *and* hidden defs enabled. The last condition is false here.
* Update copyright notices with scripts/update-copyrights. (Joseph Myers, 2013-01-02; 32 files, -33/+32)
* Add script to update copyright notices and reformat some to facilitate its use. (Joseph Myers, 2013-01-01; 1 file, -1/+1)
* Adjust mutex lock in condvar_cleanup if we got it from requeue_pi. (Siddhesh Poyarekar, 2012-10-16; 2 files, -2/+9)

  This completes the fix for bz #14652.
* Take lock in pthread_cond_wait cleanup handler only when needed. (Siddhesh Poyarekar, 2012-10-10; 2 files, -4/+32)

  [BZ #14652] When a thread waiting in pthread_cond_wait with a PI mutex is cancelled after it has returned successfully from the futex syscall but just before async cancellation is disabled, it enters its cancellation handler with the mutex held, and simply calling mutex_lock again will result in a deadlock. Hence, it is necessary to check whether the thread owns the lock and to lock it only if it does not.
* Unlock mutex before going back to waiting for PI mutexes. (Siddhesh Poyarekar, 2012-10-05; 2 files, -84/+88)

  [BZ #14417] A futex call with FUTEX_WAIT_REQUEUE_PI returns with the mutex locked on success. If such a successful thread is beaten to the cond_lock by another, spuriously woken waiter, it could be sent back to wait on the futex with the mutex lock held, thus causing a deadlock. So the thread must relinquish the mutex before going back to sleep.
* Fix clone flag name in comment to CLONE_CHILD_CLEARTID. (Siddhesh Poyarekar, 2012-10-02; 1 file, -1/+1)
* Remove __ASSUME_POSIX_TIMERS. (Joseph Myers, 2012-08-16; 1 file, -30/+0)
* Remove unused pseudo_end label. (Andreas Schwab, 2012-07-25; 1 file, -3/+2)
* Use x86-64 bits/pthreadtypes.h/semaphore.h for i386/x86-64. (H.J. Lu, 2012-05-30; 3 files, -280/+1)
* Remove use of INTDEF/INTUSE in nptl. (Andreas Schwab, 2012-05-30; 4 files, -22/+10)
* Add systemtap static probe points in generic and x86_64 pthread code. (Roland McGrath, 2012-05-25; 8 files, -27/+49)
* Use R*_LP to load pointer and operate on stack. (H.J. Lu, 2012-05-15; 1 file, -31/+32)
* Use LP_OP(cmp) and RCX_LP on dep_mutex pointer. (H.J. Lu, 2012-05-15; 1 file, -4/+4)
* Use LP_OP(op), LP_SIZE and ASM_ADDR in sem_wait.S. (H.J. Lu, 2012-05-15; 1 file, -6/+6)
* Use LP_OP(op), LP_SIZE and ASM_ADDR in sem_timedwait.S. (H.J. Lu, 2012-05-15; 1 file, -9/+9)
* Use LP_OP(cmp) on NWAITERS. (H.J. Lu, 2012-05-15; 1 file, -1/+1)
* Use LP_SIZE and ASM_ADDR in pthread_once.S. (H.J. Lu, 2012-05-15; 1 file, -3/+3)
* Use LP_OP(cmp), R*_LP, LP_SIZE and ASM_ADDR. (H.J. Lu, 2012-05-15; 1 file, -20/+20)
* Use LP_OP(cmp), R*_LP, LP_SIZE and ASM_ADDR. (H.J. Lu, 2012-05-15; 1 file, -24/+24)
* Use LP_OP(cmp) and RCX_LP on dep_mutex pointer. (H.J. Lu, 2012-05-15; 1 file, -6/+6)
* Use LP_OP(mov) and RDI_LP on pointer. (H.J. Lu, 2012-05-15; 1 file, -3/+3)
* Use LP_SIZE and load timeout pointer into RDX_LP. (H.J. Lu, 2012-05-15; 1 file, -4/+4)
* Add x32 pthread types. (H.J. Lu, 2012-05-14; 1 file, -14/+27)
* Check __x86_64__ for __cleanup_fct_attribute. (H.J. Lu, 2012-05-11; 1 file, -1/+1)
* Use __NR_futex to define SYS_futex. (H.J. Lu, 2012-03-19; 1 file, -1/+1)
* Fix stray references to __pthread_attr. (David S. Miller, 2012-02-27; 1 file, -1/+1)

  * sysdeps/unix/sysv/linux/i386/bits/pthreadtypes.h: Don't refer to non-existing __pthread_attr.
  * sysdeps/unix/sysv/linux/powerpc/bits/pthreadtypes.h: Likewise.
  * sysdeps/unix/sysv/linux/s390/bits/pthreadtypes.h: Likewise.
  * sysdeps/unix/sysv/linux/sh/bits/pthreadtypes.h: Likewise.
  * sysdeps/unix/sysv/linux/sparc/bits/pthreadtypes.h: Likewise.
  * sysdeps/unix/sysv/linux/x86_64/bits/pthreadtypes.h: Likewise.
* Fix name mangling of pthread_attr_t after change. (Ulrich Drepper, 2012-02-26; 1 file, -1/+1)
* Work around problem of pthread_attr_t definition with old compilers. (Ulrich Drepper, 2012-02-26; 1 file, -2/+6)
* Fix up POSIX testing in conformtest. (Ulrich Drepper, 2012-02-26; 1 file, -3/+2)
* Remove unused Makefile. (Marek Polacek, 2012-02-15; 1 file, -4/+0)
* Replace FSF snail mail address with URLs. (Paul Eggert, 2012-02-09; 34 files, -102/+68)
* Handle EAGAIN from FUTEX_WAIT_REQUEUE_PI. (Andreas Schwab, 2011-11-30; 1 file, -2/+74)
* Add missing register initialization in x86-64 pthread_cond_timedwait. (Ulrich Drepper, 2011-10-29; 1 file, -3/+3)
* Remove support for !USE___THREAD. (Ulrich Drepper, 2011-09-10; 4 files, -48/+3)
* Fix macro used in test. (H.J. Lu, 2011-09-08; 1 file, -1/+1)