Signed-off-by: Carlos O'Donell <carlos@redhat.com>
Signed-off-by: Carlos O'Donell <carlos@redhat.com>
---
nptl/
2013-07-19 Dominik Vogt <vogt@de.ibm.com>
* sysdeps/unix/sysv/linux/x86/elision-conf.c:
Remove __rwlock_rtm_enabled and __rwlock_rtm_read_retries.
(elision_init): Don't set __rwlock_rtm_enabled.
* sysdeps/unix/sysv/linux/x86/elision-conf.h:
Remove __rwlock_rtm_enabled.
Lock elision can be enabled with --enable-lock-elision=yes at configure time.
Add elision paths to the basic mutex locks.
The normal path has a check for RTM and upgrades the lock
to RTM when available.  Trylocks cannot automatically upgrade,
so they check for elision every time.
We use a 4-byte value in the mutex to store the lock
elision adaptation state.  This is separate from the adaptive
spin state and uses a separate field.
Condition variables currently do not support elision.
Recursive mutexes and condition variables may be supported at some point,
but are not in the current implementation.  Also, "trylock" will
not automatically enable elision unless some other lock call
has already been made on the lock.
This version does not use IFUNC, so every lock has one
additional check for elision.  Benchmarking showed the overhead
to be negligible.
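As a rough illustration of the fast-path shape described above, here is a
minimal, self-contained sketch.  This is not glibc's code: the toy_* names,
the adaptation constant, and the spin-lock fallback are assumptions made for
the example.  Build on x86 with gcc -mrtm.

/* Adaptive elided lock sketch: try an RTM transaction while the lock word
   looks free; on abort, back off from elision and take the locked path.  */
#include <immintrin.h>

struct toy_mutex
{
  volatile int futex;   /* 0 = free, 1 = taken; a plain spin lock stands in
                           for the real futex-based lock.  */
  int skip_elision;     /* Adaptation state: >0 means skip elision for now.  */
};

static void
toy_lock (struct toy_mutex *m)
{
  if (m->skip_elision <= 0)
    {
      if (_xbegin () == _XBEGIN_STARTED)
        {
          if (m->futex == 0)
            return;          /* Elided: the lock word is only read, so a real
                                acquisition by another thread aborts us.  */
          _xabort (0xff);    /* Lock already held: abort and fall back.  */
        }
      m->skip_elision = 3;   /* Transaction aborted: back off for a while.  */
    }
  else
    m->skip_elision--;

  while (__sync_lock_test_and_set (&m->futex, 1))   /* Ordinary locked path.  */
    while (m->futex)
      ;
}

static void
toy_unlock (struct toy_mutex *m)
{
  if (m->futex == 0)
    _xend ();                        /* Commit the elided critical section.  */
  else
    __sync_lock_release (&m->futex); /* Ordinary unlock.  */
}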
Add enable/disable flags used internally.
Extend the mutex initializers to have the fields needed for
elision.  The layout stays the same, and this is not visible
to programs.
These changes are not exposed outside pthread.
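A minimal sketch of the layout point, with hypothetical field names rather
than the real pthread internals: splitting an existing int-sized field into
two shorts is one way to add the elision state without changing sizes,
offsets, or the static initializers old binaries already contain.

#include <stddef.h>

struct old_mutex_data { int kind; int spins; };
struct new_mutex_data { int kind; short spins; short elision; };

/* Same overall size and same offsets for the pre-existing fields.  */
_Static_assert (sizeof (struct new_mutex_data) == sizeof (struct old_mutex_data),
                "mutex layout unchanged");
_Static_assert (offsetof (struct new_mutex_data, spins)
                == offsetof (struct old_mutex_data, spins),
                "existing field offsets unchanged");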
Lock elision using TSX is a technique to optimize lock scaling.
It allows locks to run in parallel using hardware support for
a transactional execution mode in 4th generation Intel Core CPUs.
See http://www.intel.com/software/tsx for more information.
This patch implements a simple adaptive lock elision algorithm based
on RTM.  It enables elision for the pthread mutexes and rwlocks.
The algorithm keeps track of whether a mutex successfully elides or not,
and stops eliding for some time when it does not.
When the CPU supports RTM the elision path is automatically tried,
otherwise any elision is disabled (see the detection sketch below).
The adaptation algorithm and its tuning are currently preliminary.
The code adds some checks to the lock fast paths.  Micro-benchmarks
show little to no difference without RTM.
This patch implements the low level "lll_" code for lock elision.
Follow-on patches hook this into the pthread implementation.
Changes with the RTM mutexes:
-----------------------------
Lock elision in pthreads is generally compatible with existing programs.
There are some obscure exceptions, which are expected to be uncommon.
See the manual for more details.
- A broken program that unlocks a free lock will crash.
  There are ways around this with some tradeoffs (more code in hot paths);
  I'm still undecided on what approach to take here and have to wait for
  testing reports.
- pthread_mutex_destroy of a locked mutex will not return EBUSY but 0.
- There is also a similar situation with trylock outside the mutex,
  "knowing" that the mutex must be held due to some other condition.
  In this case an assert failure cannot be recovered from.  This situation
  is usually an existing bug in the program.
- The same applies to the rwlocks.  Some of the return values change
  (for example there is no EDEADLK for an elided lock, unless it aborts;
  however, when elided it will also never deadlock, of course).
- Timing changes, so broken programs that make assumptions about specific
  timing may expose already existing latent problems.  Note that these
  broken programs will break in other situations too (loaded system, new
  faster hardware, compiler optimizations, etc.).
- Programs with non-recursive mutexes that take them recursively in a
  thread, and which would always deadlock without elision, may not always
  see a deadlock.  The deadlock will only happen on an early or delayed
  abort (which typically happens at some point).
  This only happens for mutexes not explicitly set to PTHREAD_MUTEX_NORMAL
  or PTHREAD_MUTEX_ADAPTIVE_NP.  PTHREAD_MUTEX_NORMAL mutexes do not elide.
The elision default can be set at configure time.
This patch implements the basic infrastructure for elision.
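As mentioned above, elision is only tried when the CPU reports RTM support.
A minimal detection sketch (not the actual elision-conf.c; the toy_* names
are made up) checks CPUID leaf 7, subleaf 0, EBX bit 11:

#include <cpuid.h>

static int toy_elision_available;

static void
toy_elision_init (void)
{
  unsigned int eax, ebx, ecx, edx;
  /* RTM availability is reported in CPUID.(EAX=7,ECX=0):EBX bit 11.  */
  if (__get_cpuid_max (0, 0) >= 7)
    {
      __cpuid_count (7, 0, eax, ebx, ecx, edx);
      toy_elision_available = (ebx >> 11) & 1;
    }
}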
This patch reserves four pointers to be used in the future Event-Based
Branch framework for PowerPC.
This patch introduces two new convenience functions to set the default
thread attributes used for creating threads.  This allows a programmer
to set the default thread attributes just once in a process and then
call pthread_create without additional attributes.
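A usage sketch, assuming the two new functions are the
pthread_setattr_default_np/pthread_getattr_default_np pair of GNU
extensions (the commit message does not name them here; _GNU_SOURCE is
required):

#define _GNU_SOURCE
#include <pthread.h>

static void *
worker (void *arg)
{
  return arg;
}

int
main (void)
{
  pthread_attr_t attr;
  pthread_attr_init (&attr);
  pthread_attr_setstacksize (&attr, 1024 * 1024);  /* 1 MiB default stack.  */
  pthread_setattr_default_np (&attr);              /* Applies to future threads.  */
  pthread_attr_destroy (&attr);

  pthread_t t;
  pthread_create (&t, NULL, worker, NULL);         /* NULL attr uses the default.  */
  pthread_join (t, NULL);
  return 0;
}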
Resolves BZ #15618.
pthread_attr_getaffinity_np may write beyond the bounds of the input
cpuset buffer if the size of the input buffer is smaller than the
buffer present in the input pthread attributes.  The fix is to copy to
the extent of the minimum of the source and the destination.
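The idea of the fix, as a hedged sketch with a hypothetical helper name
(this is not the actual pthread_attr_getaffinity_np code):

#include <string.h>

static void
copy_bounded (void *dst, size_t dst_size, const void *src, size_t src_size)
{
  size_t n = dst_size < src_size ? dst_size : src_size;  /* Minimum of the two.  */
  memcpy (dst, src, n);   /* Never writes past the caller's buffer.  */
}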
It is entirely possible that the futex syscall returns an
error and that the caller of lll_futex_wake may want to
look at that error and propagate the failure.
This patch allows a caller to see the syscall error.
There are no users of the syscall error at present, but
future cleanups are now able to check for the error.
--
nptl/
2013-06-10 Carlos O'Donell <carlos@redhat.com>
* sysdeps/unix/sysv/linux/i386/lowlevellock.h
(lll_futex_wake): Return syscall error.
* sysdeps/unix/sysv/linux/x86_64/lowlevellock.h
(lll_futex_wake): Return syscall error.
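A user-space analogue of the change (not the lll_futex_wake macro itself)
shows what returning the syscall error buys a caller:

#define _GNU_SOURCE
#include <errno.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

static int
futex_wake (int *futex_word, int nr_to_wake)
{
  long r = syscall (SYS_futex, futex_word, FUTEX_WAKE, nr_to_wake,
                    NULL, NULL, 0);
  return r < 0 ? -errno : 0;   /* Callers can now inspect and propagate the failure.  */
}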
[BZ #10686]
* sysdeps/x86_64/tls.h (struct tcbhead_t): Add __private_ss
field.
* sysdeps/i386/tls.h (struct tcbhead_t): Likewise.
Define inline functions that wrap around validation for each of the
pthread attributes to reduce code duplication.
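A sketch of the pattern with hypothetical names (not the real internal
helpers): a small inline validator shared by the attribute setters, so each
pthread_attr_set* entry point stops open-coding the same check.

#include <errno.h>
#include <sched.h>

static inline int
valid_schedpolicy (int policy)
{
  return policy == SCHED_OTHER || policy == SCHED_FIFO || policy == SCHED_RR;
}

/* A setter then becomes a thin wrapper around the shared validator.  */
static int
toy_attr_setschedpolicy (int *attr_policy, int policy)
{
  if (!valid_schedpolicy (policy))
    return EINVAL;
  *attr_policy = policy;
  return 0;
}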
The sem_post.c file uses atomic functions without including
atomic.h. Add `#include <atomic.h>' to the file to prevent
any compile time warnings when other headers change and
atomic.h isn't implicitly included.
---
nptl/
2013-04-07 Carlos O'Donell <carlos@redhat.com>
* sysdeps/unix/sysv/linux/sem_post.c: Include atomic.h.
Fixes BZ #15337.
Static builds fail with the following error:
/home/tools/glibc/glibc/nptl/../nptl/sysdeps/unix/sysv/linux/x86_64/cancellation.S:80:
undefined reference to `__GI___pthread_unwind'
when the source is configured with --disable-hidden-plt.  This is
because the preprocessor conditional in cancellation.S only checks
whether the build is for SHARED, whereas hidden_def is defined
appropriately only for a SHARED build that has symbol versioning *and*
hidden defs enabled.  The latter is not the case here.
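The shape of such a fix, in a deliberately generic form (the second macro
below is hypothetical and merely stands in for whatever condition the real
build signals when hidden defs exist; this is not the actual cancellation.S
change): reference the hidden __GI_ alias only when the build emits one, and
fall back to the public symbol otherwise.

#if defined SHARED && defined HAVE_HIDDEN_DEFS   /* Second condition is hypothetical.  */
# define PTHREAD_UNWIND_SYMBOL __GI___pthread_unwind
#else
# define PTHREAD_UNWIND_SYMBOL __pthread_unwind
#endif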
* sysdeps/unix/sysv/linux/sparc/lowlevellock.h
(FUTEX_WAIT_REQUEUE_PI): Define.
(FUTEX_CMP_REQUEUE_PI): Likewise.
(lll_futex_wait_requeue_pi): Likewise.
(lll_futex_timed_wait_requeue_pi): Likewise.
(lll_futex_cmp_requeue_pi): Likewise.
Include stdlib.h to get the declaration of exit(3).
Add FUTEX_*_REQUEUE_PI support for the default C code and also add
implementations for s390 and ppc.
absolute timeout"
This reverts commit 1bd57044e963abb886cb912beadea714815a3d5c.
* sysdeps/unix/sysv/linux/i386/i486/pthread_cond_timedwait.S
(__pthread_cond_timedwait): If possible use FUTEX_WAIT_BITSET to
directly use absolute timeout.
nptl/
* sysdeps/unix/sysv/linux/sparc/lowlevellock.h
(lll_futex_timed_wait_bitset): New macro.
Since the FUTEX_WAIT operation takes a relative timeout, the
pthread_cond_timedwait and other timed function implementations have
to derive a relative timeout from the absolute timeout parameter they
receive before making the futex syscall.  This value is then converted
back into an absolute timeout within the kernel.  This is wasteful and
has hence been improved upon by the FUTEX_WAIT_BITSET operation (OR'd
with FUTEX_CLOCK_REALTIME to make the kernel use the realtime clock
instead of the default monotonic clock).  This was implemented only in
the x86 and sh assembly code and not in the C code.  This patch
implements support for FUTEX_WAIT_BITSET whenever available (since
linux-2.6.29) for s390 and powerpc.
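A user-space sketch of the difference (not the library code itself; the
helper name is made up): with FUTEX_WAIT_BITSET | FUTEX_CLOCK_REALTIME the
absolute CLOCK_REALTIME deadline is handed straight to the kernel, instead
of first being converted to a relative timeout for plain FUTEX_WAIT.

#define _GNU_SOURCE
#include <linux/futex.h>
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>

static long
futex_wait_absolute (int *futex_word, int expected_value,
                     const struct timespec *abs_deadline)
{
  return syscall (SYS_futex, futex_word,
                  FUTEX_WAIT_BITSET | FUTEX_CLOCK_REALTIME,
                  expected_value, abs_deadline, NULL,
                  FUTEX_BITSET_MATCH_ANY);
}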
nptl/
* sysdeps/unix/sysv/linux/sparc/lowlevellock.h (BUSY_WAIT_NOP):
Add missing spaces.
(__cpu_relax): Likewise.
nptl/
* sysdeps/unix/sysv/linux/sparc/lowlevellock.h (BUSY_WAIT_NOP):
Define when we have v9 instructions available.
* sysdeps/unix/sysv/linux/sparc/sparc64/cpu_relax.S: New file.
* sysdeps/unix/sysv/linux/sparc/sparc32/sparcv9/cpu_relax.S: New
file.
* sysdeps/unix/sysv/linux/sparc/sparc32/sparcv9/Makefile: New
file.
* sysdeps/unix/sysv/linux/sparc/sparc64/Makefile: Add cpu_relax
to libpthread-routines.
This completes the fix to BZ #14652.
The macro pthread_cleanup_push_defer_np in pthread.h has a misaligned
line continuation marker. This marker was previously aligned, but
recent changes have moved it out of alignment. This change realigns
the marker. This also reduces the diff against the hppa version of
pthread.h where the marker is aligned.
[BZ #14652]
When a thread waiting in pthread_cond_wait with a PI mutex is
cancelled after it has returned successfully from the futex syscall
but just before async cancellation is disabled, it enters its
cancellation handler with the mutex held, and simply calling
mutex_lock again will result in a deadlock.  Hence, it is necessary to
check whether the thread owns the lock and to lock it only if it
doesn't.
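A sketch of the approach with a hypothetical helper (the real check lives
inside the condvar cancellation handler): with a PI futex the kernel stores
the owner's TID in the lock word, so ownership can be tested before relocking.

#define _GNU_SOURCE
#include <linux/futex.h>      /* FUTEX_TID_MASK */
#include <pthread.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

static void
cleanup_relock (volatile uint32_t *lock_word, pthread_mutex_t *mutex)
{
  uint32_t self_tid = syscall (SYS_gettid);
  if ((*lock_word & FUTEX_TID_MASK) != self_tid)
    pthread_mutex_lock (mutex);   /* Not owned yet, so locking cannot deadlock.  */
  /* Otherwise the futex syscall already handed us the lock before
     cancellation fired; locking again would deadlock.  */
}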
[BZ #14568]
* sysdeps/sparc/tls.h (DB_THREAD_SELF_INCLUDE): Delete.
(DB_THREAD_SELF): Use constants for the register offsets. Correct
the case of a 64-bit debugger with a 32-bit inferior.