author | Rich Felker <dalias@aerifal.cx> | 2014-07-19 18:23:24 -0400
---|---|---
committer | Rich Felker <dalias@aerifal.cx> | 2014-07-19 18:23:24 -0400
commit | 884cc0c7e253601b96902120ed689f34d12f8aa0 (patch) |
tree | e9304a54bebbab4da81c08cbc8ee46bc8df77cac | /arch/microblaze/atomic.h
parent | 1456b7ae6b72a4f2c446243acdde7c951268d4ab (diff) |
fix microblaze atomic store
as far as I can tell, microblaze is strongly ordered, but this does not seem to be well-documented and the assumption may need revisiting. even with strong ordering, however, a volatile C assignment is not sufficient to implement atomic store, since it does not preclude reordering by the compiler with respect to non-volatile stores and loads.

simply flanking a C store with empty volatile asm blocks with memory clobbers would achieve the desired result, but is likely to result in worse code generation, since the address and value for the store may need to be spilled. actually writing the store in asm, so that there's only one asm block, should give optimal code generation while satisfying the requirement for having a compiler barrier.
Diffstat (limited to 'arch/microblaze/atomic.h')
-rw-r--r-- | arch/microblaze/atomic.h | 4 |
1 file changed, 3 insertions(+), 1 deletion(-)
```diff
diff --git a/arch/microblaze/atomic.h b/arch/microblaze/atomic.h
index 90fcd8b6..da9949aa 100644
--- a/arch/microblaze/atomic.h
+++ b/arch/microblaze/atomic.h
@@ -95,7 +95,9 @@ static inline void a_dec(volatile int *x)
 
 static inline void a_store(volatile int *p, int x)
 {
-	*p=x;
+	__asm__ __volatile__ (
+		"swi %1, %0"
+		: "=m"(*p) : "r"(x) : "memory" );
 }
 
 static inline void a_spin()
```