author	Rich Felker <dalias@aerifal.cx>	2020-05-21 23:32:45 -0400
committer	Rich Felker <dalias@aerifal.cx>	2020-05-22 17:39:57 -0400
commit	e01b5939b38aea5ecbe41670643199825874b26c (patch)
tree	100e5d227741abd1653158b2e236ba3c24621113 /src/thread
parent	4d5aa20a94a2d3fae3e69289dc23ecafbd0c16c4 (diff)
download	musl-e01b5939b38aea5ecbe41670643199825874b26c.tar.gz
	musl-e01b5939b38aea5ecbe41670643199825874b26c.tar.xz
	musl-e01b5939b38aea5ecbe41670643199825874b26c.zip
don't use libc.threads_minus_1 as relaxed atomic for skipping locks
after all but the last thread exits, the next thread to observe
libc.threads_minus_1==0 and conclude that it can skip locking fails to
synchronize with any changes to memory that were made by the
last-exiting thread. this can produce data races.

on some archs, at least x86, memory synchronization is unlikely to be
a problem; however, with the inline locks in malloc, skipping the lock
also eliminated the compiler barrier, and caused code that needed to
re-check chunk in-use bits after obtaining the lock to reuse a stale
value, possibly from before the process became single-threaded. this
in turn produced corruption of the heap state.

some uses of libc.threads_minus_1 remain, especially for allocation of
new TLS in the dynamic linker; otherwise, it could be removed
entirely. it's made non-volatile to reflect that the remaining
accesses are only made under lock on the thread list.

instead of libc.threads_minus_1, libc.threaded is now used for
skipping locks. the difference is that libc.threaded is permanently
true once an additional thread has been created. this will produce
some performance regression in processes that are mostly
single-threaded but occasionally create threads. in the future it
may be possible to bring back the full lock-skipping, but more care
needs to be taken to produce a safe design.
Diffstat (limited to 'src/thread')
-rw-r--r--	src/thread/__lock.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/thread/__lock.c b/src/thread/__lock.c
index 45557c88..5b9b144e 100644
--- a/src/thread/__lock.c
+++ b/src/thread/__lock.c
@@ -18,7 +18,7 @@
 
 void __lock(volatile int *l)
 {
-	if (!libc.threads_minus_1) return;
+	if (!libc.threaded) return;
 	/* fast path: INT_MIN for the lock, +1 for the congestion */
 	int current = a_cas(l, 0, INT_MIN + 1);
 	if (!current) return;