| author | Siddhesh Poyarekar <siddhesh@redhat.com> | 2015-07-24 06:09:47 +0530 |
|---|---|---|
| committer | Siddhesh Poyarekar <siddhesh@redhat.com> | 2015-07-24 06:09:47 +0530 |
| commit | a81a00ff94a43af85f7aefceb6d31f3c0f11151d (patch) | |
| tree | dabb84f77f81948b442e8dc25e7fa6845934ff02 | |
| parent | b301e68e4b027340743d050aa32651039c1cb7bc (diff) | |
| download | glibc-a81a00ff94a43af85f7aefceb6d31f3c0f11151d.tar.gz glibc-a81a00ff94a43af85f7aefceb6d31f3c0f11151d.tar.xz glibc-a81a00ff94a43af85f7aefceb6d31f3c0f11151d.zip | |
Mention dl_load_lock by name in the comments
Mention dl_load_lock by name instead of just 'load lock' in the comments. This makes it unambiguous which lock we're talking about.
-rw-r--r-- | ChangeLog | 5
-rw-r--r-- | stdlib/cxa_thread_atexit_impl.c | 19
2 files changed, 15 insertions, 9 deletions
diff --git a/ChangeLog b/ChangeLog
index f1b7bd7df1..d43a564c32 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,8 @@
+2015-07-24  Siddhesh Poyarekar  <siddhesh@redhat.com>
+
+	* stdlib/cxa_thread_atexit_impl.c: Use the lock name dl_load_lock
+	instead of just saying load lock in the comments.
+
 2015-07-23  Roland McGrath  <roland@hack.frob.com>
 
 	* sysdeps/unix/Subdirs: Moved ...
diff --git a/stdlib/cxa_thread_atexit_impl.c b/stdlib/cxa_thread_atexit_impl.c
index 8e26380799..2d5d56a7fa 100644
--- a/stdlib/cxa_thread_atexit_impl.c
+++ b/stdlib/cxa_thread_atexit_impl.c
@@ -25,14 +25,15 @@
    combinations of all three functions are the link map list, a link map for a
    DSO and the link map member l_tls_dtor_count.
 
-   __cxa_thread_atexit_impl acquires the load_lock before accessing any shared
-   state and hence multiple of its instances can safely execute concurrently.
+   __cxa_thread_atexit_impl acquires the dl_load_lock before accessing any
+   shared state and hence multiple of its instances can safely execute
+   concurrently.
 
-   _dl_close_worker acquires the load_lock before accessing any shared state as
-   well and hence can concurrently execute multiple of its own instances as
+   _dl_close_worker acquires the dl_load_lock before accessing any shared state
+   as well and hence can concurrently execute multiple of its own instances as
    well as those of __cxa_thread_atexit_impl safely.  Not all accesses to
-   l_tls_dtor_count are protected by the load lock, so we need to synchronize
-   using atomics.
+   l_tls_dtor_count are protected by the dl_load_lock, so we need to
+   synchronize using atomics.
 
    __call_tls_dtors accesses the l_tls_dtor_count without taking the lock; it
    decrements the value by one.  It does not need the big lock because it does
@@ -51,8 +52,8 @@
    it is safe to unload the DSO.  Hence, to ensure that this does not happen,
    the following conditions must be met:
 
-   1. In _dl_close_worker, the l_tls_dtor_count load happens before the DSO
-      is unload and its link map is freed
+   1. In _dl_close_worker, the l_tls_dtor_count load happens before the DSO is
+      unloaded and its link map is freed
    2. The link map dereference in __call_tls_dtors happens before the
       l_tls_dtor_count dereference.
 
@@ -122,7 +123,7 @@ __cxa_thread_atexit_impl (dtor_func func, void *obj, void *dso_symbol)
 
   /* This increment may only be concurrently observed either by the decrement
      in __call_tls_dtors since the other l_tls_dtor_count access in
-     _dl_close_worker is protected by the load lock.  The execution in
+     _dl_close_worker is protected by the dl_load_lock.  The execution in
      __call_tls_dtors does not really depend on this value beyond the fact that
      it should be atomic, so Relaxed MO should be sufficient.  */
  atomic_fetch_add_relaxed (&lm_cache->l_tls_dtor_count, 1);
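The comments touched by this diff describe a counting pattern: registration of a thread-local destructor increments a per-link-map counter (relaxed, because the other writers hold dl_load_lock), thread exit runs the destructor and decrements the counter, and _dl_close_worker may unload the DSO only once the counter is observed to be zero. Below is a minimal, self-contained C sketch of that pattern, not the glibc source: the names struct dso, register_tls_dtor, run_tls_dtor and may_unload are hypothetical, and only the lock-versus-atomics reasoning is taken from the comments above.

```c
/* Hypothetical sketch of the l_tls_dtor_count pattern; not glibc code.  */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct dso
{
  /* Counts destructors still registered against this DSO, analogous to
     the link map member l_tls_dtor_count.  */
  atomic_size_t tls_dtor_count;
};

/* Registration runs with the equivalent of dl_load_lock held, so no other
   writer can race with it and a relaxed increment suffices.  */
void
register_tls_dtor (struct dso *d)
{
  atomic_fetch_add_explicit (&d->tls_dtor_count, 1, memory_order_relaxed);
}

/* Thread exit runs the destructor, then drops the count.  The release
   ordering makes the destructor's effects visible to an unloader that
   later observes the decremented value.  */
void
run_tls_dtor (struct dso *d, void (*dtor) (void *), void *obj)
{
  dtor (obj);
  atomic_fetch_sub_explicit (&d->tls_dtor_count, 1, memory_order_release);
}

/* The unloader (the analogue of _dl_close_worker) may free the DSO only
   when no destructor is pending; the acquire load pairs with the release
   decrement above so the load happens before the DSO is unloaded.  */
bool
may_unload (struct dso *d)
{
  return atomic_load_explicit (&d->tls_dtor_count, memory_order_acquire) == 0;
}
```

In this sketch the increment can stay relaxed only because, as the updated comments state, every other access that could race with it is serialized by dl_load_lock; the acquire/release pair is what orders destructor execution against the unload decision.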