author | Adhemerval Zanella <adhemerval.zanella@linaro.org> | 2018-05-09 10:32:25 -0300
---|---|---
committer | Adhemerval Zanella <adhemerval.zanella@linaro.org> | 2019-01-03 18:38:14 -0200
commit | 85c828a4626adda906f8844dc9c5a166c72d4f7d (patch) |
tree | 034152296ec94a9e33e360833068b01515fea4c7 /sysdeps/i386 |
parent | d0d7f85f66a19c3110d550c3c24247f7b4f2c58a (diff) |
download | glibc-85c828a4626adda906f8844dc9c5a166c72d4f7d.tar.gz glibc-85c828a4626adda906f8844dc9c5a166c72d4f7d.tar.xz glibc-85c828a4626adda906f8844dc9c5a166c72d4f7d.zip |
x86_64: Remove wrong THREAD_ATOMIC_* macros
The x86_64 port defines optimized THREAD_ATOMIC_* macros that always reference the current thread instead of the one indicated by the input 'descr' argument. This works as long as the input is the self thread pointer, but it generates wrong code if the intent is to set a bit atomically in another thread's descriptor. This is not an issue for current GLIBC usage; however, the new cancellation code expects some synchronization code to atomically set bits from different threads.

The generic code generates an additional load to obtain the thread pointer from the TLS segment. For instance, the code:

    THREAD_ATOMIC_BIT_SET (THREAD_SELF, cancelhandling, CANCELED_BIT);

previously compiled to:

    lock;orl $4, %fs:776

and with this patch now compiles to:

    mov    %fs:16,%rax
    lock;orl $4, 776(%rax)

If some usage indeed proves to be a hotspot, we can add an extra macro with a more descriptive name (THREAD_ATOMIC_BIT_SET_SELF, for instance) which x86_64 might optimize.

Checked on x86_64-linux-gnu.

    * sysdeps/x86_64/nptl/tls.h (THREAD_ATOMIC_CMPXCHG_VAL,
    THREAD_ATOMIC_AND, THREAD_ATOMIC_BIT_SET): Remove macros.
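For illustration, here is a minimal, self-contained C sketch of the semantic difference the patch is about: an atomic bit-set helper must dereference its 'descr' argument so it can target another thread's descriptor, whereas the removed x86_64 macros always operated on the caller's own TCB via %fs. All names and the bit value below are hypothetical stand-ins, not glibc source.

```c
/* Hypothetical stand-ins: 'struct thread_descr', 'thread_atomic_bit_set'
   and the bit value are illustrative, not glibc's actual definitions.  */
#include <stdatomic.h>
#include <stdio.h>

struct thread_descr            /* stand-in for glibc's struct pthread */
{
  atomic_int cancelhandling;   /* bit set of cancellation flags */
};

#define CANCELED_BIT 2         /* illustrative bit position (1 << 2 == 4) */

/* Generic-style helper: operates on whichever descriptor it is given,
   which is the behaviour the new cancellation code relies on.  */
static void
thread_atomic_bit_set (struct thread_descr *descr, int bit)
{
  atomic_fetch_or (&descr->cancelhandling, 1 << bit);
}

int
main (void)
{
  struct thread_descr self = { 0 };
  struct thread_descr other = { 0 };

  /* Set the bit in *another* thread's descriptor.  A macro hard-wired to
     the caller's own TCB (like the removed x86_64 THREAD_ATOMIC_BIT_SET)
     would have modified 'self' here instead.  */
  thread_atomic_bit_set (&other, CANCELED_BIT);

  printf ("self=%d other=%d\n",
          atomic_load (&self.cancelhandling),
          atomic_load (&other.cancelhandling));
  return 0;
}
```

The extra load in the generated assembly shown above corresponds to fetching the thread pointer before the locked OR, which is the price of honoring 'descr' instead of assuming the current thread.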
Diffstat (limited to 'sysdeps/i386')
0 files changed, 0 insertions, 0 deletions