author    Rich Felker <dalias@aerifal.cx>    2014-08-25 15:43:40 -0400
committer Rich Felker <dalias@aerifal.cx>    2014-08-25 15:43:40 -0400
commit    ea818ea8340c13742a4f41e6077f732291aea4bc (patch)
tree      f2a97e8f8a25fc3337002aa30bb235575593af86 /arch/powerpc
parent    5345c9b884e7c4e73eb2c8bb83b8d0df20f95afb (diff)
add working a_spin() atomic for non-x86 targets
conceptually, a_spin needs to be at least a compiler barrier, so the
compiler will not optimize out loops (and the load on each iteration)
while spinning. it should also be a memory barrier, or the spinning
thread might keep spinning without noticing stores from other threads,
thus delaying for longer than it should.
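as an illustration of the first requirement (not part of this commit), a
bare compiler barrier in gnu c would look like the hypothetical sketch
below; the "memory" clobber forces the compiler to reload the spun-on
location on every iteration, but no fence instruction is emitted, so it
is not a memory barrier by itself:

	/* sketch only: a compiler barrier, NOT a memory barrier.
	 * the "memory" clobber keeps the compiler from caching the
	 * spun-on location in a register across iterations, but the
	 * empty asm emits no hardware fence, so stores from other
	 * cpus may still be observed late. */
	static inline void a_spin_weak()
	{
		__asm__ __volatile__ ( "" : : : "memory" );
	}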

ideally, an optimal a_spin implementation that avoids unnecessary
cache/memory contention should be chosen for each arch, but for now,
the easiest thing is to perform a useless a_cas on the calling
thread's stack.
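as a sketch of what an arch-tuned version could look like (again, not
part of this commit), x86 has a dedicated spin-loop hint: the pause
instruction tells the cpu the thread is busy-waiting, cutting power use
and memory-order speculation penalties, while the clobber keeps the
construct a compiler barrier:

	/* sketch only: x86-style spin hint. "pause" reduces pipeline
	 * and bus contention inside a spin loop; the "memory" clobber
	 * makes it a compiler barrier as well. */
	static inline void a_spin()
	{
		__asm__ __volatile__ ( "pause" : : : "memory" );
	}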
Diffstat (limited to 'arch/powerpc')
-rw-r--r--  arch/powerpc/atomic.h | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/arch/powerpc/atomic.h b/arch/powerpc/atomic.h
index 1044886d..1c50361e 100644
--- a/arch/powerpc/atomic.h
+++ b/arch/powerpc/atomic.h
@@ -80,6 +80,7 @@ static inline void a_store(volatile int *p, int x)
 
 static inline void a_spin()
 {
+	a_cas(&(int){0}, 0, 0);
 }
 
 static inline void a_crash()