author     Florian Weimer <fweimer@redhat.com>    2023-09-08 12:32:14 +0200
committer  Florian Weimer <fweimer@redhat.com>    2023-09-08 12:34:27 +0200
commit     6985865bc3ad5b23147ee73466583dd7fdf65892
tree       5e058aad7a342ef64266c4decc089c4d81b6bcca /elf/dl-fini.c
parent     434bf72a94de68f0cc7fbf3c44bf38c1911b70cb
elf: Always call destructors in reverse constructor order (bug 30785)
The current implementation of dlclose (and process exit) re-sorts the
link maps before calling ELF destructors.  As a result, destructor
order is not the reverse of constructor order: the second sort takes
relocation dependencies into account, and other differences can result
from ambiguous inputs, such as cycles.  (The force_first handling in
_dl_sort_maps is not effective for dlclose.)  After the changes in
this commit, there is still a required difference due to
dlopen/dlclose ordering by the application, but the previous
discrepancies went beyond that.

A new global (namespace-spanning) list of link maps,
_dl_init_called_list, is updated right before ELF constructors are
called from _dl_init.

In dl_close_worker, the maps variable, an on-stack variable-length
array, is eliminated.  (VLAs are problematic, and dlclose should not
call malloc because it cannot readily deal with malloc failure.)
Marking still-used objects uses the namespace list directly, with next
and next_idx replacing the done_index variable.

After marking, _dl_init_called_list is used to call the destructors of
now-unused maps in reverse destructor order.  These destructors can
call dlopen.

Previously, new objects did not have l_map_used set.  This had to
change: there is no copy of the link map list anymore, so processing
would cover newly opened (and unmarked) mappings, unloading them.
Now, _dl_init (indirectly) sets l_map_used, too.  (dlclose is handled
by the existing reentrancy guard.)

After the _dl_init_called_list traversal, two more loops follow.  The
processing order changes to the original link map order in the
namespace; previously, dependency order was used.  The difference
should not matter because relocation dependencies could already
reorder link maps in the old code.

The changes to _dl_fini remove the sorting step and replace it with a
traversal of _dl_init_called_list.
The l_direct_opencount decrement outside the loader lock is removed
because it appears incorrect: the counter manipulation could race with
other dynamic loader operations.

tst-audit23 needs adjustments for the changes in LA_ACT_DELETE
notifications.  The new approach for checking la_activity should make
it clearer that la_activity calls come in pairs around namespace
updates.

The dependency sorting test cases need updates because the destructor
order is now always the reverse of the constructor order, even with
relocation dependencies or cycles present.

There is a future cleanup opportunity to remove the now-constant
force_first and for_fini arguments from the _dl_sort_maps function.

Fixes commit 1df71d32fe5f5905ffd5d100e5e9ca8ad62 ("elf: Implement
force_first handling in _dl_sort_maps_dfs (bug 28937)").

Reviewed-by: DJ Delorie <dj@redhat.com>
Diffstat (limited to 'elf/dl-fini.c')
-rw-r--r--   elf/dl-fini.c   152
1 file changed, 52 insertions(+), 100 deletions(-)
diff --git a/elf/dl-fini.c b/elf/dl-fini.c
index 9acb64f47c..e201d36651 100644
--- a/elf/dl-fini.c
+++ b/elf/dl-fini.c
@@ -24,116 +24,68 @@
 void
 _dl_fini (void)
 {
-  /* Lots of fun ahead.  We have to call the destructors for all still
-     loaded objects, in all namespaces.  The problem is that the ELF
-     specification now demands that dependencies between the modules
-     are taken into account.  I.e., the destructor for a module is
-     called before the ones for any of its dependencies.
-
-     To make things more complicated, we cannot simply use the reverse
-     order of the constructors.  Since the user might have loaded objects
-     using `dlopen' there are possibly several other modules with its
-     dependencies to be taken into account.  Therefore we have to start
-     determining the order of the modules once again from the beginning.  */
-
-  /* We run the destructors of the main namespaces last.  As for the
-     other namespaces, we pick run the destructors in them in reverse
-     order of the namespace ID.  */
-#ifdef SHARED
-  int do_audit = 0;
- again:
-#endif
-  for (Lmid_t ns = GL(dl_nns) - 1; ns >= 0; --ns)
-    {
-      /* Protect against concurrent loads and unloads.  */
-      __rtld_lock_lock_recursive (GL(dl_load_lock));
-
-      unsigned int nloaded = GL(dl_ns)[ns]._ns_nloaded;
-      /* No need to do anything for empty namespaces or those used for
-         auditing DSOs.  */
-      if (nloaded == 0
-#ifdef SHARED
-          || GL(dl_ns)[ns]._ns_loaded->l_auditing != do_audit
-#endif
-          )
-        __rtld_lock_unlock_recursive (GL(dl_load_lock));
-      else
-        {
+  /* Call destructors strictly in the reverse order of constructors.
+     This causes fewer surprises than some arbitrary reordering based
+     on new (relocation) dependencies.  None of the objects are
+     unmapped, so applications can deal with this if their DSOs remain
+     in a consistent state after destructors have run.  */
+
+  /* Protect against concurrent loads and unloads.  */
+  __rtld_lock_lock_recursive (GL(dl_load_lock));
+
+  /* Ignore objects which are opened during shutdown.  */
+  struct link_map *local_init_called_list = _dl_init_called_list;
+
+  for (struct link_map *l = local_init_called_list; l != NULL;
+       l = l->l_init_called_next)
+    /* Bump l_direct_opencount of all objects so that they
+       are not dlclose()ed from underneath us.  */
+    ++l->l_direct_opencount;
+
+  /* After this point, everything linked from local_init_called_list
+     cannot be unloaded because of the reference counter update.  */
+  __rtld_lock_unlock_recursive (GL(dl_load_lock));
+
+  /* Perform two passes: One for non-audit modules, one for audit
+     modules.  This way, audit modules receive unload notifications
+     for non-audit objects, and the destructors for audit modules
+     still run.  */
 #ifdef SHARED
-          _dl_audit_activity_nsid (ns, LA_ACT_DELETE);
+  int last_pass = GLRO(dl_naudit) > 0;
+  Lmid_t last_ns = -1;
+  for (int do_audit = 0; do_audit <= last_pass; ++do_audit)
 #endif
-
-          /* Now we can allocate an array to hold all the pointers and
-             copy the pointers in.  */
-          struct link_map *maps[nloaded];
-
-          unsigned int i;
-          struct link_map *l;
-          assert (nloaded != 0 || GL(dl_ns)[ns]._ns_loaded == NULL);
-          for (l = GL(dl_ns)[ns]._ns_loaded, i = 0; l != NULL; l = l->l_next)
-            /* Do not handle ld.so in secondary namespaces.  */
-            if (l == l->l_real)
-              {
-                assert (i < nloaded);
-
-                maps[i] = l;
-                l->l_idx = i;
-                ++i;
-
-                /* Bump l_direct_opencount of all objects so that they
-                   are not dlclose()ed from underneath us.  */
-                ++l->l_direct_opencount;
-              }
-          assert (ns != LM_ID_BASE || i == nloaded);
-          assert (ns == LM_ID_BASE || i == nloaded || i == nloaded - 1);
-          unsigned int nmaps = i;
-
-          /* Now we have to do the sorting.  We can skip looking for the
-             binary itself which is at the front of the search list for
-             the main namespace.  */
-          _dl_sort_maps (maps, nmaps, (ns == LM_ID_BASE), true);
-
-          /* We do not rely on the linked list of loaded object anymore
-             from this point on.  We have our own list here (maps).  The
-             various members of this list cannot vanish since the open
-             count is too high and will be decremented in this loop.  So
-             we release the lock so that some code which might be called
-             from a destructor can directly or indirectly access the
-             lock.  */
-          __rtld_lock_unlock_recursive (GL(dl_load_lock));
-
-          /* 'maps' now contains the objects in the right order.  Now
-             call the destructors.  We have to process this array from
-             the front.  */
-          for (i = 0; i < nmaps; ++i)
-            {
-              struct link_map *l = maps[i];
-
-              if (l->l_init_called)
-                {
-                  _dl_call_fini (l);
+    for (struct link_map *l = local_init_called_list; l != NULL;
+         l = l->l_init_called_next)
+      {
 #ifdef SHARED
-                  /* Auditing checkpoint: another object closed.  */
-                  _dl_audit_objclose (l);
+        if (GL(dl_ns)[l->l_ns]._ns_loaded->l_auditing != do_audit)
+          continue;
+
+        /* Avoid back-to-back calls of _dl_audit_activity_nsid for the
+           same namespace.  */
+        if (last_ns != l->l_ns)
+          {
+            if (last_ns >= 0)
+              _dl_audit_activity_nsid (last_ns, LA_ACT_CONSISTENT);
+            _dl_audit_activity_nsid (l->l_ns, LA_ACT_DELETE);
+            last_ns = l->l_ns;
+          }
 #endif
-                }
-              /* Correct the previous increment.  */
-              --l->l_direct_opencount;
-            }
+        /* There is no need to re-enable exceptions because _dl_fini
+           is not called from a context where exceptions are caught.  */
+        _dl_call_fini (l);
 #ifdef SHARED
-          _dl_audit_activity_nsid (ns, LA_ACT_CONSISTENT);
+        /* Auditing checkpoint: another object closed.  */
+        _dl_audit_objclose (l);
 #endif
-        }
-    }
+      }
 #ifdef SHARED
-  if (! do_audit && GLRO(dl_naudit) > 0)
-    {
-      do_audit = 1;
-      goto again;
-    }
+  if (last_ns >= 0)
+    _dl_audit_activity_nsid (last_ns, LA_ACT_CONSISTENT);
 
   if (__glibc_unlikely (GLRO(dl_debug_mask) & DL_DEBUG_STATISTICS))
     _dl_debug_printf ("\nruntime linker statistics:\n"