author    Stefan Liebler <stli@linux.ibm.com>  2018-10-17 12:23:04 +0200
committer Stefan Liebler <stli@linux.ibm.com>  2018-10-17 12:23:04 +0200
commit    403b4feb22dcbc85ace72a361d2a951380372471 (patch)
tree      fe2235a8bce38cad1bbef3b4f28da82dc353e138 /nptl/pthread_mutex_destroy.c
parent    ce5a7de6cd1479a1e78fda0db023bd4effa072a4 (diff)
Fix race in pthread_mutex_lock while promoting to PTHREAD_MUTEX_ELISION_NP [BZ #23275]
The race leads either to pthread_mutex_destroy returning EBUSY
or to a failing assertion (see the description in Bugzilla).

This patch fixes the race by ensuring that the elision path is used
in all cases if elision is enabled via the GLIBC_TUNABLES framework.
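
The core idea, sketched here with portable C11 atomics rather than the
glibc-internal macros (try_force_elision, KIND_ELISION and
KIND_NO_ELISION are hypothetical stand-ins, not glibc identifiers):
read __kind once with a relaxed load, set the elision flag if neither
elision flag is set yet, and derive every later decision from that one
local snapshot, so a concurrent promotion by another thread can no
longer make the lock and unlock paths disagree.

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Hypothetical flag bits standing in for PTHREAD_MUTEX_ELISION_NP
       and PTHREAD_MUTEX_NO_ELISION_NP.  */
    #define KIND_ELISION    0x100
    #define KIND_NO_ELISION 0x200

    /* Returns true if the elision path must be taken.  The decision is
       made from the single snapshot K; the shared word is not re-read.  */
    static bool
    try_force_elision (_Atomic int *kind)
    {
      int k = atomic_load_explicit (kind, memory_order_relaxed);
      if ((k & (KIND_ELISION | KIND_NO_ELISION)) == 0)
        {
          /* First lock of this mutex: promote it to an elision mutex.  */
          k |= KIND_ELISION;
          atomic_store_explicit (kind, k, memory_order_relaxed);
        }
      return (k & KIND_ELISION) != 0;
    }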

The __kind field in struct __pthread_mutex_s is accessed concurrently.
Therefore it is now accessed through the atomic macros.
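
For readers unfamiliar with the glibc-internal macros:
atomic_load_relaxed and atomic_store_relaxed correspond to C11 relaxed
atomics, roughly as in this standalone sketch (an illustration, not the
glibc implementation):

    #include <stdatomic.h>

    /* Stand-in for the concurrently accessed __kind field.  Relaxed MO
       provides atomicity (no torn loads or stores) but no ordering,
       which suffices here: no other data is published through __kind.  */
    static _Atomic int kind;

    static int
    load_kind (void)
    {
      return atomic_load_explicit (&kind, memory_order_relaxed);
    }

    static void
    store_kind (int k)
    {
      atomic_store_explicit (&kind, k, memory_order_relaxed);
    }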

The new test case tst-mutex10 triggers the race on s390x and Intel.
Presumably it does on POWER as well, but I don't have access to a POWER
machine with lock elision; at least the code for POWER is the same as
on the other two architectures.
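
A stress test in the spirit of tst-mutex10 might look like the sketch
below (an illustration, not the test added by the patch; the race only
materializes when lock elision is force-enabled, e.g. via the
GLIBC_TUNABLES environment variable that tst-mutex10-ENV presumably
sets): several threads lock and unlock a freshly initialized default
mutex, so the first lock races with the elision promotion, and
pthread_mutex_destroy afterwards must succeed.

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define ROUNDS   2000

    static pthread_mutex_t mut;

    static void *
    worker (void *arg)
    {
      /* The first lock in each round races with the promotion of
         __kind to PTHREAD_MUTEX_ELISION_NP.  */
      if (pthread_mutex_lock (&mut) != 0
          || pthread_mutex_unlock (&mut) != 0)
        pthread_exit ((void *) 1L);
      return NULL;
    }

    int
    main (void)
    {
      for (int r = 0; r < ROUNDS; r++)
        {
          pthread_t th[NTHREADS];
          if (pthread_mutex_init (&mut, NULL) != 0)
            return 1;
          for (int i = 0; i < NTHREADS; i++)
            if (pthread_create (&th[i], NULL, worker, NULL) != 0)
              return 1;
          for (int i = 0; i < NTHREADS; i++)
            if (pthread_join (th[i], NULL) != 0)
              return 1;
          /* Without the fix this can fail with EBUSY: a stale __nusers
             count is left behind when lock and unlock take different
             paths.  */
          int e = pthread_mutex_destroy (&mut);
          if (e != 0)
            {
              printf ("round %d: pthread_mutex_destroy returned %d\n",
                      r, e);
              return 1;
            }
        }
      return 0;
    }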

ChangeLog:

	[BZ #23275]
	* nptl/tst-mutex10.c: New file.
	* nptl/Makefile (tests): Add tst-mutex10.
	(tst-mutex10-ENV): New variable.
	* sysdeps/unix/sysv/linux/s390/force-elision.h (FORCE_ELISION):
	Ensure that elision path is used if elision is available.
	* sysdeps/unix/sysv/linux/powerpc/force-elision.h (FORCE_ELISION):
	Likewise.
	* sysdeps/unix/sysv/linux/x86/force-elision.h (FORCE_ELISION):
	Likewise.
	* nptl/pthreadP.h (PTHREAD_MUTEX_TYPE, PTHREAD_MUTEX_TYPE_ELISION)
	(PTHREAD_MUTEX_PSHARED): Use atomic_load_relaxed.
	* nptl/pthread_mutex_consistent.c (pthread_mutex_consistent): Likewise.
	* nptl/pthread_mutex_getprioceiling.c (pthread_mutex_getprioceiling):
	Likewise.
	* nptl/pthread_mutex_lock.c (__pthread_mutex_lock_full)
	(__pthread_mutex_cond_lock_adjust): Likewise.
	* nptl/pthread_mutex_setprioceiling.c (pthread_mutex_setprioceiling):
	Likewise.
	* nptl/pthread_mutex_timedlock.c (__pthread_mutex_timedlock): Likewise.
	* nptl/pthread_mutex_trylock.c (__pthread_mutex_trylock): Likewise.
	* nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_full): Likewise.
	* sysdeps/nptl/bits/thread-shared-types.h (struct __pthread_mutex_s):
	Add comments.
	* nptl/pthread_mutex_destroy.c (__pthread_mutex_destroy):
	Use atomic_load_relaxed and atomic_store_relaxed.
	* nptl/pthread_mutex_init.c (__pthread_mutex_init):
	Use atomic_store_relaxed.
Diffstat (limited to 'nptl/pthread_mutex_destroy.c')
-rw-r--r--  nptl/pthread_mutex_destroy.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/nptl/pthread_mutex_destroy.c b/nptl/pthread_mutex_destroy.c
index 5a22611541..713ea68496 100644
--- a/nptl/pthread_mutex_destroy.c
+++ b/nptl/pthread_mutex_destroy.c
@@ -27,12 +27,17 @@ __pthread_mutex_destroy (pthread_mutex_t *mutex)
 {
   LIBC_PROBE (mutex_destroy, 1, mutex);
 
-  if ((mutex->__data.__kind & PTHREAD_MUTEX_ROBUST_NORMAL_NP) == 0
+  /* See concurrency notes regarding __kind in struct __pthread_mutex_s
+     in sysdeps/nptl/bits/thread-shared-types.h.  */
+  if ((atomic_load_relaxed (&(mutex->__data.__kind))
+       & PTHREAD_MUTEX_ROBUST_NORMAL_NP) == 0
       && mutex->__data.__nusers != 0)
     return EBUSY;
 
-  /* Set to an invalid value.  */
-  mutex->__data.__kind = -1;
+  /* Set to an invalid value.  Relaxed MO is enough as it is undefined behavior
+     if the mutex is used after it has been destroyed.  But you can reinitialize
+     it with pthread_mutex_init.  */
+  atomic_store_relaxed (&(mutex->__data.__kind), -1);
 
   return 0;
 }