From ed19993b5b0d05d62cc883571519a67dae481a14 Mon Sep 17 00:00:00 2001
From: Torvald Riegel
Date: Wed, 25 May 2016 23:43:36 +0200
Subject: New condvar implementation that provides stronger ordering
 guarantees.

This is a new implementation for condition variables, required after
http://austingroupbugs.net/view.php?id=609 to fix bug 13165.  In essence, we
need to be stricter about which waiters a signal or broadcast is required to
wake up; this couldn't be solved using the old algorithm.  ISO C++ made a
similar clarification, so this also fixes a bug in current libstdc++, for
example.

We can't use the old algorithm anymore because futexes do not guarantee FIFO
wake-up order.  Thus, when we wake, we can't simply let any waiter grab a
signal, but we need to ensure that one of the waiters happening before the
signal is woken up.  This is something the previous algorithm violated (see
bug 13165).

There's another issue specific to condvars: ABA issues on the underlying
futexes.  Unlike mutexes that have just three states, or semaphores that
have no tokens or a limited number of them, the state of a condvar is the
*order* of the waiters.  A waiter on a semaphore can grab a token whenever
one is available; a condvar waiter must only consume a signal if it is
eligible to do so as determined by the relative order of the waiter and the
signal.

Therefore, the new algorithm maintains two groups of waiters: those eligible
to consume signals (G1), and those that have to wait until previous waiters
have consumed signals (G2).  Once G1 is empty, G2 becomes the new G1.
64-bit counters are used to avoid ABA issues.  (A stand-alone sketch of this
state follows the list below.)

This condvar doesn't yet use a requeue optimization (i.e., on a broadcast,
waking just one thread and requeueing all others on the futex of the mutex
supplied by the program).  I don't think doing the requeue is necessarily
the right approach (but I haven't done real measurements yet):
* If a program expects to wake many threads at the same time and make that
  scalable, a condvar isn't great anyway because of how it requires waiters
  to operate mutually exclusively (due to the mutex usage).  Thus, a
  thundering herd problem is a scalability problem with or without the
  optimization.  Using something like a semaphore might be more appropriate
  in such a case.
* The scalability problem is actually on the mutex side; the condvar could
  help (and it tries to with the requeue optimization), but it should be the
  mutex that decides how that is done, and whether it is done at all.
* Forcing all but one waiter into the kernel-side wait queue of the mutex
  prevents the use of lock elision on the mutex.  Thus, it prevents the only
  cure against the underlying scalability problem inherent to condvars.
* If condvars use short critical sections (i.e., hold the mutex just to
  check a binary flag or such), which they should do ideally, then forcing
  all those waiters to proceed serially with kernel-based hand-off (i.e.,
  futex ops in the mutex's contended state, via the futex wait queues) will
  be less efficient than just letting a scalable mutex implementation take
  care of it.  Our current mutex implementation doesn't employ spinning at
  all, but if critical sections are short, spinning can be much better.
* Doing the requeue requires all waiters to always drive the mutex into the
  contended state.  This leads to each waiter having to call futex_wake
  after lock release, even if this wouldn't be necessary.
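
To make the G1/G2 scheme concrete, here is a minimal, stand-alone sketch of
the state the algorithm maintains.  The member names mirror the new condvar
fields used in the patch (__wseq, __g_signals, __g_size, __wrefs), but the
layout shown is a simplified illustration, not the actual glibc
pthread_cond_t:

#include <stdint.h>

/* Two waiter groups: G1 is eligible to consume signals, G2 must wait until
   G1's waiters are done.  The LSB of the 64-bit waiter sequence selects
   which slot currently holds G2; the other slot holds G1.  Once G1 is
   empty, the LSB flips and the former G2 becomes the new G1.  */
struct condvar_sketch
{
  uint64_t wseq;          /* Waiter sequence counter; 64 bits so it cannot
                             realistically wrap, avoiding ABA.  LSB is the
                             index of the current G2.  */
  uint32_t g_signals[2];  /* Signals available for consumption, per group.  */
  uint32_t g_size[2];     /* Waiters remaining in each group.  */
  uint32_t wrefs;         /* Waiter reference count; the patch keeps flags in
                             the low 3 bits (hence the "wrefs >> 3" check in
                             __pthread_cond_signal below).  */
};

/* New waiters enter G2; signals are delivered to G1 (cf. the
   "g1 = (wseq & 1) ^ 1" computation in the patch below).  */
static inline unsigned int
condvar_sketch_g2 (const struct condvar_sketch *cv)
{
  return (unsigned int) (cv->wseq & 1);
}

static inline unsigned int
condvar_sketch_g1 (const struct condvar_sketch *cv)
{
  return condvar_sketch_g2 (cv) ^ 1;
}
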
	[BZ #13165]
	* nptl/pthread_cond_broadcast.c (__pthread_cond_broadcast): Rewrite to
	use new algorithm.
	* nptl/pthread_cond_destroy.c (__pthread_cond_destroy): Likewise.
	* nptl/pthread_cond_init.c (__pthread_cond_init): Likewise.
	* nptl/pthread_cond_signal.c (__pthread_cond_signal): Likewise.
	* nptl/pthread_cond_wait.c (__pthread_cond_wait): Likewise.
	(__pthread_cond_timedwait): Move here from pthread_cond_timedwait.c.
	(__condvar_confirm_wakeup, __condvar_cancel_waiting,
	__condvar_cleanup_waiting, __condvar_dec_grefs,
	__pthread_cond_wait_common): New.
	(__condvar_cleanup): Remove.
	* nptl/pthread_condattr_getclock.c (pthread_condattr_getclock): Adapt.
	* nptl/pthread_condattr_setclock.c (pthread_condattr_setclock):
	Likewise.
	* nptl/pthread_condattr_getpshared.c (pthread_condattr_getpshared):
	Likewise.
	* nptl/pthread_condattr_init.c (pthread_condattr_init): Likewise.
	* nptl/tst-cond1.c: Add comment.
	* nptl/tst-cond20.c (do_test): Adapt.
	* nptl/tst-cond22.c (do_test): Likewise.
	* sysdeps/aarch64/nptl/bits/pthreadtypes.h (pthread_cond_t): Adapt
	structure.
	* sysdeps/arm/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/ia64/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/m68k/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/microblaze/nptl/bits/pthreadtypes.h (pthread_cond_t):
	Likewise.
	* sysdeps/mips/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/nios2/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/s390/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/sh/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/tile/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/unix/sysv/linux/alpha/bits/pthreadtypes.h (pthread_cond_t):
	Likewise.
	* sysdeps/unix/sysv/linux/powerpc/bits/pthreadtypes.h
	(pthread_cond_t): Likewise.
	* sysdeps/x86/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/nptl/internaltypes.h (COND_NWAITERS_SHIFT): Remove.
	(COND_CLOCK_BITS): Adapt.
	* sysdeps/nptl/pthread.h (PTHREAD_COND_INITIALIZER): Adapt.
	* nptl/pthreadP.h (__PTHREAD_COND_CLOCK_MONOTONIC_MASK,
	__PTHREAD_COND_SHARED_MASK): New.
	* nptl/nptl-printers.py (CLOCK_IDS): Remove.
	(ConditionVariablePrinter, ConditionVariableAttributesPrinter): Adapt.
	* nptl/nptl_lock_constants.pysym: Adapt.
	* nptl/test-cond-printers.py: Adapt.
	* sysdeps/unix/sysv/linux/hppa/internaltypes.h (cond_compat_clear,
	cond_compat_check_and_clear): Adapt.
	* sysdeps/unix/sysv/linux/hppa/pthread_cond_timedwait.c: Remove file ...
	* sysdeps/unix/sysv/linux/hppa/pthread_cond_wait.c
	(__pthread_cond_timedwait): ... and move here.
	* nptl/DESIGN-condvar.txt: Remove file.
	* nptl/lowlevelcond.sym: Likewise.
	* nptl/pthread_cond_timedwait.c: Likewise.
	* sysdeps/unix/sysv/linux/i386/i486/pthread_cond_broadcast.S:
	Likewise.
	* sysdeps/unix/sysv/linux/i386/i486/pthread_cond_signal.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i486/pthread_cond_timedwait.S:
	Likewise.
	* sysdeps/unix/sysv/linux/i386/i486/pthread_cond_wait.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i586/pthread_cond_broadcast.S:
	Likewise.
	* sysdeps/unix/sysv/linux/i386/i586/pthread_cond_signal.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i586/pthread_cond_timedwait.S:
	Likewise.
	* sysdeps/unix/sysv/linux/i386/i586/pthread_cond_wait.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i686/pthread_cond_broadcast.S:
	Likewise.
	* sysdeps/unix/sysv/linux/i386/i686/pthread_cond_signal.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i686/pthread_cond_timedwait.S:
	Likewise.
	* sysdeps/unix/sysv/linux/i386/i686/pthread_cond_wait.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/pthread_cond_broadcast.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/pthread_cond_signal.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S: Likewise.
---
 nptl/pthread_cond_signal.c | 99 +++++++++++++++++++++++++++-------------------
 1 file changed, 58 insertions(+), 41 deletions(-)

diff --git a/nptl/pthread_cond_signal.c b/nptl/pthread_cond_signal.c
index b3a6d3d2a4..a95d5690af 100644
--- a/nptl/pthread_cond_signal.c
+++ b/nptl/pthread_cond_signal.c
@@ -19,62 +19,79 @@
 #include <endian.h>
 #include <errno.h>
 #include <sysdep.h>
-#include <lowlevellock.h>
+#include <futex-internal.h>
 #include <pthread.h>
 #include <pthreadP.h>
+#include <atomic.h>
+#include <stdint.h>
 
 #include <shlib-compat.h>
-#include <kernel-features.h>
 #include <stap-probe.h>
 
+#include "pthread_cond_common.c"
 
+/* See __pthread_cond_wait for a high-level description of the algorithm.  */
 int
 __pthread_cond_signal (pthread_cond_t *cond)
 {
-  int pshared = (cond->__data.__mutex == (void *) ~0l)
-		? LLL_SHARED : LLL_PRIVATE;
-
   LIBC_PROBE (cond_signal, 1, cond);
 
-  /* Make sure we are alone.  */
-  lll_lock (cond->__data.__lock, pshared);
-
-  /* Are there any waiters to be woken?  */
-  if (cond->__data.__total_seq > cond->__data.__wakeup_seq)
+  /* First check whether there are waiters.  Relaxed MO is fine for that for
+     the same reasons that relaxed MO is fine when observing __wseq (see
+     below).  */
+  unsigned int wrefs = atomic_load_relaxed (&cond->__data.__wrefs);
+  if (wrefs >> 3 == 0)
+    return 0;
+  int private = __condvar_get_private (wrefs);
+
+  __condvar_acquire_lock (cond, private);
+
+  /* Load the waiter sequence number, which represents our relative ordering
+     to any waiters.  Relaxed MO is sufficient for that because:
+     1) We can pick any position that is allowed by external happens-before
+        constraints.  In particular, if another __pthread_cond_wait call
+        happened before us, this waiter must be eligible for being woken by
+        us.  The only way to establish such a happens-before is by signaling
+        while having acquired the mutex associated with the condvar and
+        ensuring that the signal's critical section happens after the waiter.
+        Thus, the mutex ensures that we see that waiter's __wseq increase.
+     2) Once we pick a position, we do not need to communicate this to the
+        program via a happens-before that we set up: First, any wake-up could
+        be a spurious wake-up, so the program must not interpret a wake-up as
+        an indication that the waiter happened before a particular signal;
+        second, a program cannot detect whether a waiter has not yet been
+        woken (i.e., it cannot distinguish between a non-woken waiter and one
+        that has been woken but hasn't resumed execution yet), and thus it
+        cannot try to deduce that a signal happened before a particular
+        waiter.  */
+  unsigned long long int wseq = __condvar_load_wseq_relaxed (cond);
+  unsigned int g1 = (wseq & 1) ^ 1;
+  wseq >>= 1;
+  bool do_futex_wake = false;
+
+  /* If G1 is still receiving signals, we put the signal there.  If not, we
+     check if G2 has waiters, and if so, quiesce and switch G1 to the former
+     G2; if this results in a new G1 with waiters (G2 might have cancellations
+     already, see __condvar_quiesce_and_switch_g1), we put the signal in the
+     new G1.  */
+  if ((cond->__data.__g_size[g1] != 0)
+      || __condvar_quiesce_and_switch_g1 (cond, wseq, &g1, private))
     {
-      /* Yes.  Mark one of them as woken.  */
-      ++cond->__data.__wakeup_seq;
-      ++cond->__data.__futex;
-
-#if (defined lll_futex_cmp_requeue_pi \
-     && defined __ASSUME_REQUEUE_PI)
-      pthread_mutex_t *mut = cond->__data.__mutex;
-
-      if (USE_REQUEUE_PI (mut)
-	  /* This can only really fail with a ENOSYS, since nobody can modify
-	     futex while we have the cond_lock.  */
-	  && lll_futex_cmp_requeue_pi (&cond->__data.__futex, 1, 0,
-				       &mut->__data.__lock,
-				       cond->__data.__futex, pshared) == 0)
-	{
-	  lll_unlock (cond->__data.__lock, pshared);
-	  return 0;
-	}
-      else
-#endif
-	/* Wake one.  */
-	if (! __builtin_expect (lll_futex_wake_unlock (&cond->__data.__futex,
-						       1, 1,
-						       &cond->__data.__lock,
-						       pshared), 0))
-	  return 0;
-
-      /* Fallback if neither of them work.  */
-      lll_futex_wake (&cond->__data.__futex, 1, pshared);
+      /* Add a signal.  Relaxed MO is fine because signaling does not need to
+	 establish a happens-before relation (see above).  We do not mask the
+	 release-MO store when initializing a group in
+	 __condvar_quiesce_and_switch_g1 because we use an atomic
+	 read-modify-write and thus extend that store's release sequence.  */
+      atomic_fetch_add_relaxed (cond->__data.__g_signals + g1, 2);
+      cond->__data.__g_size[g1]--;
+      /* TODO Only set it if there are indeed futex waiters.  */
+      do_futex_wake = true;
     }
 
-  /* We are done.  */
-  lll_unlock (cond->__data.__lock, pshared);
+  __condvar_release_lock (cond, private);
+
+  if (do_futex_wake)
+    futex_wake (cond->__data.__g_signals + g1, 1, private);
 
   return 0;
 }
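
The comments in the new __pthread_cond_signal treat any wake-up as
potentially spurious.  That is sound only because the canonical POSIX usage
pattern re-checks the predicate under the mutex, so a spuriously woken
waiter simply blocks again.  A minimal example of that pattern (an
illustration for this description, not code from the patch; the "ready"
flag is made up):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool ready;  /* The predicate, protected by LOCK.  */

/* Waiter: re-check READY in a loop; a wake-up alone proves nothing, which
   is what gives the implementation the freedom described above.  */
void
wait_for_ready (void)
{
  pthread_mutex_lock (&lock);
  while (!ready)
    pthread_cond_wait (&cond, &lock);
  pthread_mutex_unlock (&lock);
}

/* Signaler: update the predicate while holding the mutex; acquiring the
   mutex is also what establishes the happens-before that makes a waiter
   eligible to be woken by this signal.  */
void
set_ready (void)
{
  pthread_mutex_lock (&lock);
  ready = true;
  pthread_mutex_unlock (&lock);
  pthread_cond_signal (&cond);
}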