path: root/src/exit/atexit.c
* split internal lock API out of libc.h, creating lock.h (Rich Felker, 2018-09-12; 1 file, -0/+1)

  this further reduces the number of source files which need to include libc.h and thereby be potentially exposed to libc global state and internals. this will also facilitate further improvements like adding an inline fast-path, if we want to do so later.
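  The split-out header is tiny; a sketch of its likely shape as of this commit (the hidden visibility macro comes from musl's internal headers), assuming it simply re-exports the existing lock entry points:

      #ifndef LOCK_H
      #define LOCK_H

      hidden void __lock(volatile int *);
      hidden void __unlock(volatile int *);
      #define LOCK(x) __lock(x)
      #define UNLOCK(x) __unlock(x)

      #endif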
* revise the definition of multiple basic locks in the code (Jens Gustedt, 2018-01-09; 1 file, -1/+1)

  In all cases this is just a change from two volatile int to one.
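  For atexit.c the change presumably amounts to something like the following sketch; the newer lock implementation packs waiter accounting into the lock word itself, so the separate waiter-count slot is no longer needed:

      /* before: lock word plus separate waiter count */
      static volatile int lock[2];

      /* after: a single word; waiter state lives in the same int */
      static volatile int lock[1];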
* fix atexit when it is called from an atexit handler (Rich Felker, 2015-07-24; 1 file, -12/+9)

  The old code accepted atexit handlers after exit, but did not run them reliably. C11 seems to explicitly allow atexit to fail (and report such failure) in this case, but this situation can easily come up in C++ if a destructor has a local static object with a destructor, so it should be handled.

  Note that the memory usage can grow linearly with the overall number of registered atexit handlers instead of with the worst-case list length. (This only matters if atexit handlers keep registering atexit handlers, which should not happen in practice.)

  Commit message/rationale based on text by Szabolcs Nagy.
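  For illustration, a minimal test program (hypothetical, not part of the commit) that exercises this case by registering a new handler from inside one that is already running during exit:

      #include <stdio.h>
      #include <stdlib.h>

      static void inner(void) { puts("inner handler"); }

      static void outer(void)
      {
          puts("outer handler");
          /* registration happens while exit() is already walking the list */
          if (atexit(inner)) puts("atexit failed");
      }

      int main(void)
      {
          atexit(outer);
          return 0; /* with this fix, both handlers run reliably */
      }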
* make all objects used with atomic operations volatile (Rich Felker, 2015-03-03; 1 file, -1/+1)

  the memory model we use internally for atomics permits plain loads of values which may be subject to concurrent modification without requiring that a special load function be used. since a compiler is free to make transformations that alter the number of loads or the way in which loads are performed, the compiler is theoretically free to break this usage. the most obvious concern is with atomic cas constructs: something of the form tmp=*p;a_cas(p,tmp,f(tmp)); could be transformed to a_cas(p,*p,f(*p)); where the latter is intended to show multiple loads of *p whose resulting values might fail to be equal; this would break the atomicity of the whole operation. but even more fundamental breakage is possible.

  with the changes being made now, objects that may be modified by atomics are modeled as volatile, and the atomic operations performed on them by other threads are modeled as asynchronous stores by hardware which happens to be acting on the request of another thread. such modeling of course does not itself address memory synchronization between cores/cpus, but that aspect was already handled. this all seems less than ideal, but it's the best we can do without mandating a C11 compiler and using the C11 model for atomics.

  in the case of pthread_once_t, the ABI type of the underlying object is not volatile-qualified. so we are assuming that accessing the object through a volatile-qualified lvalue via casts yields volatile access semantics. the language of the C standard is somewhat unclear on this matter, but this is an assumption the linux kernel also makes, and seems to be the correct interpretation of the standard.
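  A sketch of the cas idiom the message is protecting; a_cas here is a stand-in for musl's internal compare-and-swap (which returns the old value), modeled with a GCC builtin so the example is self-contained:

      /* stand-in for musl's internal a_cas */
      static int a_cas(volatile int *p, int t, int s)
      {
          return __sync_val_compare_and_swap(p, t, s);
      }

      static void atomic_inc(volatile int *p)
      {
          int tmp;
          /* because *p is volatile, the compiler must perform exactly one
             load per iteration; with a non-volatile object it could legally
             reload *p for the cas arguments, breaking atomicity */
          do tmp = *p;
          while (a_cas(p, tmp, tmp+1) != tmp);
      }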
* include cleanups: remove unused headers and add feature test macros (Szabolcs Nagy, 2013-12-12; 1 file, -2/+0)
* fix bug whereby most atexit-registered functions got skipped (Rich Felker, 2012-08-19; 1 file, -3/+2)
* ditch the priority inheritance locks; use malloc's version of lock (Rich Felker, 2012-04-24; 1 file, -7/+7)

  i did some testing trying to switch malloc to use the new internal lock with priority inheritance, and my malloc contention test got 20-100 times slower. if priority inheritance futexes are this slow, it's simply too high a price to pay for avoiding priority inversion. maybe we can consider them somewhere down the road once the kernel folks get their act together on this (and preferably don't link it to glibc's inefficient lock API)...

  as such, i've switched __lock to use malloc's implementation of lightweight locks, and updated all the users of the code to use an array with a waiter count for their locks. this should give optimal performance in the vast majority of cases, and it's simple.

  malloc is still using its own internal copy of the lock code because it seems to yield measurably better performance with -O3 when it's inlined (20% or more difference in the contention stress test).
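  Roughly the shape of that lightweight two-word lock, as a sketch using musl's internal a_swap/a_store/__wait/__wake helpers (l[0] is the lock word, l[1] the waiter count; details such as the single-threaded fast path are omitted here):

      void __lock(volatile int *l)
      {
          /* swap 1 into the lock word; if it was already held, sleep on
             the futex, with __wait tracking blocked waiters in l[1] */
          while (a_swap(l, 1)) __wait(l, l+1, 1, 1);
      }

      void __unlock(volatile int *l)
      {
          a_store(l, 0);
          if (l[1]) __wake(l, 1, 1);
      }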
* add dummy __cxa_finalize (Rich Felker, 2011-10-14; 1 file, -0/+4)

  musl's dynamic linker does not support unloading dsos, so there's nothing for this function to do. adding the symbol in case anything depends on its presence.
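  Given the rationale above and the +4 line count, the added function is presumably just an empty stub taking the usual Itanium C++ ABI dso handle:

      void __cxa_finalize(void *dso)
      {
      }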
* support __cxa_atexit, and registering atexit functions from atexit handlers (Rich Felker, 2011-10-14; 1 file, -7/+26)

  mildly tested; may have bugs. the locking should be updated not to use spinlocks but that's outside the scope of this one module.
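  For context, __cxa_atexit is the Itanium C++ ABI registration hook (called by compilers for static destructors). A plausible layering, sketched here rather than taken from the commit, has atexit wrap it via a trampoline that recovers the no-argument function pointer:

      #include <stdint.h>

      int __cxa_atexit(void (*func)(void *), void *arg, void *dso);

      /* trampoline: unpack and call the original void(void) function */
      static void call(void *p)
      {
          ((void (*)(void))(uintptr_t)p)();
      }

      int atexit(void (*func)(void))
      {
          return __cxa_atexit(call, (void *)(uintptr_t)func, 0);
      }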
* simplify atexit and fflush-on-exit handling (Rich Felker, 2011-10-14; 1 file, -4/+1)
* initial check-in, version 0.5.0 (tag: v0.5.0) (Rich Felker, 2011-02-12; 1 file, -0/+57)