path: root/src
Commit log (each entry shows: subject, author, date, files changed, lines -/+)
* always provide __fpclassifyl and __signbitl definitions (Rich Felker, 2014-10-08, 2 files, -1/+9)

  previously the external definitions of these functions were omitted on archs where long double is the same as double, since the code paths in the math.h macros which would call them are unreachable. however, even if they are unreachable, the definitions are still mandatory. omitting them is invalid C, and in the case of a non-optimizing compiler, will result in a link error.

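  a minimal sketch of the kind of definition involved, assuming an arch where long double and double share the binary64 representation and assuming the double variants are declared in math.h as in musl; this is illustrative, not the verbatim source:

    #include <float.h>
    #include <math.h>

    #if LDBL_MANT_DIG == 53 && LDBL_MAX_EXP == 1024
    /* long double is just double here, so the long double variants can
     * forward to the double implementations; the definitions must still
     * exist even though the math.h macros never reach them. */
    int __fpclassifyl(long double x)
    {
        return __fpclassify(x);
    }

    int __signbitl(long double x)
    {
        return __signbit(x);
    }
    #endif
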
* ignore access mode bits of flags in mkostemps and functions that use it (Rich Felker, 2014-10-06, 1 file, -0/+1)

  per the text accepted for inclusion in POSIX, behavior is unspecified when any of the access mode bits are set. since it's impossible to consistently report this usage error (O_RDONLY could not be detected since its value happens to be zero), the most consistent way to handle them is just to ignore them. previously, if a caller erroneously passed O_WRONLY, the resulting access mode would be O_WRONLY|O_RDWR, which has the value 3, and this resulted in a file descriptor which rejects both read and write attempts when it is subsequently used.

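  a hedged sketch of the approach, with a made-up helper name: mask off the access-mode bits from the caller's flags before adding the modes the implementation needs.

    #include <fcntl.h>

    /* illustrative only: the temp file must be opened O_RDWR regardless,
     * and O_RDONLY (value 0) could never be detected as an error anyway,
     * so any access-mode bits in the caller's flags are simply dropped. */
    static int open_temp(const char *name, int flags)
    {
        flags &= ~O_ACCMODE;
        return open(name, flags | O_RDWR | O_CREAT | O_EXCL, 0600);
    }
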
* fix handling of odd lengths in swab function (Rich Felker, 2014-10-04, 1 file, -1/+1)

  this function is specified to leave the last byte with "unspecified disposition" when the length is odd, so for the most part correct programs should not be calling swab with odd lengths. however, doing so is permitted, and should not write past the end of the destination buffer.

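  a small illustrative implementation (under an assumed name, not the actual source) showing the intended behavior: only complete pairs are swapped, so an odd trailing byte is never written past the end of the destination.

    #include <sys/types.h>

    void swab_sketch(const void *restrict src, void *restrict dest, ssize_t n)
    {
        const char *s = src;
        char *d = dest;
        for (; n > 1; n -= 2, s += 2, d += 2) {
            d[0] = s[1];
            d[1] = s[0];
        }
    }
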
* fix incorrect sequence generation in *rand48 prng functions (Rich Felker, 2014-09-22, 1 file, -2/+2)

  patch by Jens Gustedt. this fixes a bug reported by Nadav Har'El. the underlying issue was that a left-shift by 16 bits after promotion of unsigned short to int caused integer overflow. while some compilers define this overflow case as "shifting into the sign bit", doing so doesn't help; the sign bit then gets extended through the upper bits in subsequent arithmetic as unsigned long long. this patch imposes a promotion to unsigned prior to the shift, so that the result is well-defined and matches the specified behavior.

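  an illustrative reduction of the issue (not the actual *rand48 source): x[1] is an unsigned short, so x[1] << 16 is evaluated after promotion to int and can overflow; converting to an unsigned type before shifting makes the result well-defined.

    #include <stdint.h>

    /* assemble a 48-bit value from three 16-bit words without shifting
     * a promoted (signed) int into the sign bit */
    static uint64_t combine48(const unsigned short x[3])
    {
        return (uint64_t)x[0]
             | ((uint32_t)x[1] << 16)
             | ((uint64_t)x[2] << 32);
    }
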
* fix linked list corruption in flockfile lists (Rich Felker, 2014-09-19, 1 file, -0/+1)

  commit 5345c9b884e7c4e73eb2c8bb83b8d0df20f95afb added a linked list to track the FILE streams currently locked (via flockfile) by a thread. due to a failure to fully link newly added members, removal from the list could leave behind references which could later result in writes to already-freed memory and possibly other memory corruption.

  implicit stdio locking was unaffected; the list is only used in conjunction with explicit flockfile locking. this bug was not present in any releases; it was introduced and fixed during the same release cycle. patch by Timo Teräs, who discovered and tracked down the bug.

* math: fix exp10 not to raise invalid exception on NaN (Szabolcs Nagy, 2014-09-18, 3 files, -4/+13)

  This was not caught earlier because gcc incorrectly generates quiet relational operators that never raise exceptions.

* fix overflow corner case in strtoul-family functions (Rich Felker, 2014-09-16, 1 file, -0/+1)

  incorrect behavior occurred only in cases where the input overflows unsigned long long, not just the (possibly lower) range limit for the result type. in this case, processing of the '-' sign character was not suppressed, and the function returned a value of 1 despite setting errno to ERANGE.

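  a small usage example of the corrected behavior (illustrative): when the magnitude of the input overflows, the clamped value ULONG_MAX is returned together with ERANGE, and a leading '-' no longer turns that into 1.

    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        errno = 0;
        unsigned long v = strtoul("-99999999999999999999999999", 0, 10);
        printf("value=%lu (ULONG_MAX=%lu), ERANGE=%d\n",
               v, ULONG_MAX, errno == ERANGE);
        return 0;
    }
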
* rewrite the regex pattern parser in regcomp (Szabolcs Nagy, 2014-09-13, 1 file, -1081/+634)

  The new code is a bit simpler and the generated code is about 1KB smaller (on i386). The basic design was kept, including internal interfaces; TNFA generation was not touched. The old tre parser had various issues:

  - [^aa-z]: negated overlapping ranges in a bracket expression were handled incorrectly (e.g. [^aa-z] was handled as [^a] instead of [^a-z]).

  - a{,2}: a missing lower bound in a counted repetition should be an error, but it was accepted with broken semantics: a{,2} was treated as a{0,3}. The new parser rejects it.

  - a{999,}: a large minimum count was not rejected (a{5000,} failed with REG_ESPACE due to reaching a stack limit). The new parser enforces the RE_DUP_MAX limit.

  - \xff: regcomp used to accept a pattern with illegal sequences in it (treated them as an empty expression, so p\xffq matched pq). The new parser rejects such patterns with REG_BADPAT or REG_ERANGE.

  - [^b-fD-H] with REG_ICASE: the old parser turned this into [^b-fB-F] because of the negated overlapping range issue (see above); the new parser treats it as [^b-hB-H]. POSIX seems to require [^d-fD-F], but practical implementations do case-folding first and negate the character set later instead of the other way around. (Supporting the POSIX way efficiently would require significant changes, so it was left as is; it is unclear if any application actually expects the POSIX behaviour. This issue is raised on the Austin Group tracker: http://austingroupbugs.net/view.php?id=872 .)

  Another case-insensitive matching issue is that Unicode case folding rules can group more than two characters together, while towupper and towlower can only work for a pair of upper and lower case characters; this is a limitation of POSIX so it is not fixed.

  Invalid bracket and brace expressions may return different error codes now (REG_ERANGE instead of REG_EBRACK, or REG_BADBR instead of REG_EBRACE); otherwise the new parser should be compatible with the old one.

  regcomp should be able to handle arbitrary pattern input if the pattern length is limited; the only exception is the use of large repetition counts (e.g. (a{255}){255}), which require exponential amounts of memory, and there is no easy workaround.

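  a small standalone example of one of the behavior changes listed above (the missing lower bound case); other implementations may accept the pattern with varying meaning, so the program just reports what regcomp did:

    #include <regex.h>
    #include <stdio.h>

    /* "a{,2}" has a missing lower bound; the new musl parser rejects it. */
    int main(void)
    {
        regex_t re;
        char msg[256];
        int err = regcomp(&re, "a{,2}", REG_EXTENDED);
        if (err) {
            regerror(err, &re, msg, sizeof msg);
            printf("regcomp failed: %s\n", msg);
        } else {
            printf("regcomp accepted the pattern\n");
            regfree(&re);
        }
        return 0;
    }
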
* fix exp10l.c to include float.h (Szabolcs Nagy, 2014-09-08, 1 file, -0/+1)

  the previous commit was a no-op in exp10l because the LDBL_* macros were implicitly 0 (the preprocessor does not warn about undefined symbols).

* prune math code on archs with binary64 long double (Szabolcs Nagy, 2014-09-08, 2 files, -0/+10)

  __polevll, __p1evll and exp10l were provided on archs where long double is the same as double. The first two were completely unused, and exp10l can be a wrapper around exp10.

* add C11 thread creation and related thread functions (Rich Felker, 2014-09-07, 10 files, -7/+84)

  based on patch by Jens Gustedt. the main difficulty here is handling the difference between start function signatures and thread return types for C11 threads versus POSIX threads. pointers to void are assumed to be able to represent faithfully all values of int. the function pointer for the thread start function is cast to an incorrect type for passing through pthread_create, but is cast back to its correct type before calling so that the behavior of the call is well-defined.

  changes to the existing threads implementation were kept minimal to reduce the risk of regressions, and duplication of code that carries implementation-specific assumptions was avoided for ease and safety of future maintenance.

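  an illustrative sketch of the conversion described above (names are made up and this is not the actual internals): the pointer is converted for transport and converted back to its real type before the call, so the call itself is valid, and the int result is carried in the void * return value.

    #include <stdint.h>

    static void *c11_thread_start(void *(*fn_as_pthread)(void *), void *arg)
    {
        /* recover the real C11 start-function type before calling */
        int (*fn)(void *) = (int (*)(void *))fn_as_pthread;
        int res = fn(arg);                 /* called at its correct type */
        return (void *)(intptr_t)res;      /* int held faithfully in void * */
    }
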
* add C11 condition variable functions (Jens Gustedt, 2014-09-06, 6 files, -0/+57)

  Because of the clear separation of the private pthread_cond_t case, these interfaces are quite simple and direct.

* add C11 mutex functions (Jens Gustedt, 2014-09-06, 6 files, -0/+69)

* add C11 thread functions operating on tss_t and once_flag (Jens Gustedt, 2014-09-06, 5 files, -0/+42)

  These all have POSIX equivalents, but aside from tss_get, they all have minor changes to the signature or return value and thus need to exist as separate functions.

* use weak symbols for the POSIX functions that will be used by C threads (Jens Gustedt, 2014-09-06, 15 files, -29/+76)

  The intent of this is to avoid name space pollution of the C threads implementation. This has two sides to it. First we have to provide symbols that wouldn't pollute the name space for the C threads implementation. Second we have to clean up some internal uses of POSIX functions such that they don't implicitly drag in such symbols.

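  a hedged sketch of the weak-alias pattern commonly used for this; the macro shown is the conventional GCC-style form, included here only so the example is self-contained, and the function names are made up:

    #include <pthread.h>

    #define weak_alias(old, new) \
        extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

    /* the implementation lives under a reserved name... */
    int __pthread_detach_sketch(pthread_t t)
    {
        /* ... real work would go here ... */
        return 0;
    }

    /* ...and the public name is only a weak alias, so internal callers
     * (such as the C11 thread functions) can reference the reserved name
     * without dragging the public symbol into the namespace. */
    weak_alias(__pthread_detach_sketch, pthread_detach_sketch);
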
* add C11 timespec_get function, with associated time.h changes for C11 (Rich Felker, 2014-09-06, 1 file, -0/+12)

  based on patch by Jens Gustedt for inclusion with C11 threads implementation, but committed separately since it's independent of threads.

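  timespec_get is a thin layer over the POSIX clock interface; a minimal sketch of a conforming implementation, assuming CLOCK_REALTIME is the clock backing TIME_UTC and using an assumed name:

    #include <time.h>

    int timespec_get_sketch(struct timespec *ts, int base)
    {
        if (base != TIME_UTC)
            return 0;                  /* only TIME_UTC is defined by C11 */
        if (clock_gettime(CLOCK_REALTIME, ts))
            return 0;
        return base;                   /* success: return the base value */
    }
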
* fix non-static dummy function that slipped in with locale implementation (Rich Felker, 2014-09-06, 1 file, -1/+1)

* add missing legacy LFS *64 symbol aliases (Szabolcs Nagy, 2014-09-05, 8 files, -0/+23)

  versionsort64, aio*64 and lio*64 symbols were missing; they are only needed for glibc ABI compatibility, since at the source level dirent.h and aio.h already redirect them.

* fix memory leak in regexec when input contains illegal sequence (Szabolcs Nagy, 2014-09-05, 1 file, -5/+6)

* fix off-by-one in bounds check in fpathconf (Rich Felker, 2014-09-05, 1 file, -1/+1)

  this error resulted in an out-of-bounds read, as opposed to a reported error, when calling the function with an argument one greater than the max valid index.

* fix potential read past end of buffer in getnameinfo service name lookup (Rich Felker, 2014-09-05, 1 file, -1/+1)

  if the loop stopped due to reaching the end of the string, the subsequent increment could possibly move the position one past the end of the buffer. no further writes happen, and the reads cannot fault unless the stack completely lacks any zero bytes; reading junk should not yield an incorrect result from the function either. nonetheless the code was wrong and needed to be fixed.

* remove incorrect and useless check in network service name lookup code (Rich Felker, 2014-09-05, 1 file, -1/+0)

  the condition was probably intended to be !*p rather than !p, but neither is needed here. the subsequent code naturally handles the case where it's already at end of string.

* fix case mapping for U+00DF (ß) (Rich Felker, 2014-09-05, 2 files, -2/+1)

  U+00DF ('ß') has had an uppercase form (U+1E9E) available since Unicode 5.1, but Unicode lacks the case mappings for it due to stability policy. when I added support for the new character in commit 1a63a9fc30e7a1f1239e3cedcb5041e5ec1c5351, I omitted the mapping in the lowercase-to-uppercase direction. this choice was not based on any actual information, only assumptions.

  this commit adds bidirectional case mappings between U+00DF and U+1E9E, and removes the special-case hack that allowed U+00DF to be identified as lowercase despite lacking a mapping. aside from strong evidence that this is the "right" behavior for real-world usage of these characters, several factors informed this decision:

  - the other "potentially correct" mapping, to "SS", is not representable in the C case-mapping system anyway.

  - leaving one letter in lowercase form when transforming a string to uppercase is obviously wrong.

  - having a character which is nominally lowercase but which is fixed under case mapping violates reasonable invariants.

* make non-waiting paths of sem_[timed]wait and pthread_join cancelable (Rich Felker, 2014-09-05, 2 files, -0/+3)

  per POSIX these functions are both cancellation points, so they must act on any cancellation request which is pending prior to the call. previously, only the code path where actual waiting took place could act on cancellation.

* remove an extra layer of buffer copying in getnameinfo reverse dns (Rich Felker, 2014-09-05, 1 file, -3/+2)

  the outer getnameinfo function already has a properly-sized temporary buffer for storing the reverse dns (ptr) result. there is no reason for the callback to use a secondary buffer and copy it on success, and doing so potentially expanded the impact of the dn_expand bug that was fixed in commit 49d2c8c6bcf8c926e52c7f510033b6adc31355f5. this change reduces the code size by a small amount, and also reduces the run-time stack space requirements by about 256 bytes.

* fix multiple stdio functions' behavior on zero-length operations (Rich Felker, 2014-09-04, 4 files, -9/+7)

  previously, fgets, fputs, fread, and fwrite completely omitted locking and access to the FILE object when their arguments yielded a zero length read or write operation independent of the FILE state. this optimization was invalid; it wrongly skipped marking the stream as byte-oriented (a C conformance bug) and exposed observably missing synchronization (a POSIX conformance bug) where one of these functions could wrongly complete despite another thread provably holding the lock.

* suppress null termination when fgets reads EOF with no data (Rich Felker, 2014-09-04, 1 file, -1/+1)

  the C standard requires that "the contents of the array remain unchanged" in this case. this patch also changes the behavior on read errors, but in that case "the array contents are indeterminate", so the application cannot inspect them anyway.

* fix dn_expand empty name handling and offsets to 0 (Szabolcs Nagy, 2014-09-04, 1 file, -6/+9)

  Empty name was rejected in dn_expand since commit 56b57f37a46dab432247bf29d96fcb11fbd02a6d, which is a regression as reported by Natanael Copa. Furthermore, if an offset pointer in a compressed name pointed to a terminating 0 byte (instead of a label), the returned name was not null terminated.

* add malloc_usable_size function and non-stub malloc.h (Rich Felker, 2014-08-25, 1 file, -0/+17)

  this function is needed for some important practical applications of ABI compatibility, and may be useful for supporting some non-portable software at the source level too. I was hesitant to add a function which imposes any constraints on malloc internals; however, it turns out that any malloc implementation which has realloc must already have an efficient way to determine the size of existing allocations, so no additional constraint is imposed.

  for now, some internal malloc definitions are duplicated in the new source file. if/when malloc is refactored to put them in a shared internal header file, these could be removed.

  since malloc_usable_size is conventionally declared in malloc.h, the empty stub version of this file was no longer suitable. it's updated to provide the standard allocator functions, nonstandard ones (even if stdlib.h would not expose them based on the feature test macros in effect), and any malloc-extension functions provided (currently, only malloc_usable_size).

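  a short usage example of the new interface; the reported usable size may exceed the requested size, depending on the allocator's internal rounding:

    #include <malloc.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        void *p = malloc(100);
        if (!p) return 1;
        printf("requested 100 bytes, usable size is %zu\n",
               malloc_usable_size(p));
        free(p);
        return 0;
    }
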
* refrain from spinning on locks when there is already a waiter (Rich Felker, 2014-08-25, 5 files, -5/+5)

  if there is already a waiter for a lock, spinning on the lock is essentially an attempt to steal it from whichever waiter would obtain it via any priority rules in place, and is therefore undesirable. in the current implementation, there is always an inherent race window at unlock during which a newly-arriving thread may steal the lock from the existing waiters, but we should aim to keep this window minimal rather than enlarging it.

* spin before waiting on futex in mutex and rwlock lock operations (Rich Felker, 2014-08-25, 3 files, -0/+20)

* spin in sem_[timed]wait before performing futex wait (Rich Felker, 2014-08-25, 1 file, -0/+5)

  empirically, this increases the maximum rate of wait/post operations between two threads by 20-150 times on machines I tested, including x86 and arm. conceptually, it makes sense to do some spinning because semaphores are intended to be usable as a notification mechanism between threads, not just as locks, and low-latency notification is a valuable property to have.

* sanitize number of spins in userspace before futex wait (Rich Felker, 2014-08-25, 2 files, -2/+2)

  the previous spin limit of 10000 was utterly unreasonable. empirically, it could consume up to 200000 cycles, whereas a failed futex wait (EAGAIN) typically takes 1000 cycles or less, and even a true wait/wake round seems much less expensive.

  the new counts (100 for general wait, 200 in barrier) were simply chosen to be in the range of what's reasonable without having adverse effects on casual micro-benchmark tests I have been running. they may still be too high, from a standpoint of not wasting cpu cycles, but at least they're a lot better than before. rigorous testing across different archs and cpu models should be performed at some point to determine whether further adjustments should be made.

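  a hedged sketch of the spin-then-wait shape described across the last few commits, written with C11 atomics rather than the libc's internal primitives; the 100-iteration bound is the general-purpose count mentioned above:

    #include <stdatomic.h>

    /* try to grab the lock for a bounded number of spins, but only while
     * no waiter is registered (spinning would otherwise just steal the
     * lock from a thread already committed to sleeping); on failure the
     * caller registers as a waiter and falls back to a futex wait. */
    static int try_lock_spinning(atomic_int *lock, atomic_int *waiters)
    {
        int spins = 100;
        while (spins-- > 0 && atomic_load(waiters) == 0) {
            int expect = 0;
            if (atomic_compare_exchange_weak(lock, &expect, 1))
                return 1;              /* acquired without sleeping */
        }
        return 0;                      /* caller proceeds to futex wait */
    }
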
* fix false ownership of stdio FILEs due to tid reuse (Rich Felker, 2014-08-23, 6 files, -2/+40)

  this is analogous to commit fffc5cda10e0c5c910b40f7be0d4fa4e15bb3f48, which fixed the corresponding issue for mutexes. the robust list can't be used here because the locks do not share a common layout with mutexes. at some point it may make sense to simply incorporate a mutex object into the FILE structure and use it, but that would be a much more invasive change, and it doesn't mesh well with the current design that uses a simpler code path for internal locking and pulls in the recursive-mutex-like code when the flockfile API is used explicitly.

* fix fallback checks for kernels without private futex support (Rich Felker, 2014-08-22, 5 files, -5/+5)

  for unknown syscall commands, the kernel produces ENOSYS, not EINVAL.

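  a hedged sketch of the fallback shape, using the public syscall() wrapper and the kernel header names rather than the libc's internal macros:

    #include <errno.h>
    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* try the private-futex form first; on a kernel that does not know the
     * command, the syscall fails with ENOSYS and the plain form is used. */
    static void futex_wait_sketch(int *addr, int val)
    {
        long r = syscall(SYS_futex, addr, FUTEX_WAIT | FUTEX_PRIVATE_FLAG,
                         val, (void *)0);
        if (r == -1 && errno == ENOSYS)
            syscall(SYS_futex, addr, FUTEX_WAIT, val, (void *)0);
    }
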
* fix use of uninitialized memory with application-provided thread stacks (Rich Felker, 2014-08-22, 1 file, -0/+2)

  the subsequent code in pthread_create and the code which copies TLS initialization images to the new thread's TLS space assume that the memory provided to them is zero-initialized, which is true when it's obtained by pthread_create using mmap. however, when the caller provides a stack using pthread_attr_setstack, pthread_create cannot make any assumptions about the contents. simply zero-filling the relevant memory in this case is the simplest and safest fix.

* further simplify and optimize new cond var (Rich Felker, 2014-08-18, 1 file, -29/+21)

  the main idea of the changes made is to have waiters wait directly on the "barrier" lock that was used to prevent them from making forward progress too early, rather than first waiting on the atomic state value and then attempting to lock the barrier.

  in addition, adjustments to the mutex waiter count are optimized. previously, each waking waiter decremented the count (unless it was the first) then immediately incremented it again for the next waiter (unless it was the last). this was a roundabout way of achieving the equivalent of incrementing it once for the first waiter and decrementing it once for the last.

* simplify and improve new cond var implementation (Rich Felker, 2014-08-18, 1 file, -40/+22)

  previously, wake order could be unpredictable: if a waiter happened to leave its futex wait on the state early, e.g. due to EAGAIN while restarting after a signal handler, it could acquire the mutex out of turn. handling this required ugly O(n) list walking in the unwait function and accounting to remove waiters that already woke from the list.

  with the new changes, the "barrier" locks in each waiter node are only unlocked in turn. in addition to simplifying the code, this seems to improve performance slightly, probably by reducing the number of accesses threads make to each other's stacks.

  as an additional benefit, unrecoverable mutex re-locking errors (mainly ENOTRECOVERABLE for robust mutexes) no longer need to be handled with deadlock; they can be reported to the caller, since the unlocking sequence makes it unnecessary to rely on the mutex to synchronize access to the waiter list.

* redesign cond var implementation to fix multiple issues (Rich Felker, 2014-08-17, 6 files, -93/+213)

  the immediate issue that was reported by Jens Gustedt and needed to be fixed was corruption of the cv/mutex waiter states when switching to using a new mutex with the cv after all waiters were unblocked but before they finished returning from the wait function. self-synchronized destruction was also handled poorly and may have had race conditions. and the use of sequence numbers for waking waiters admitted a theoretical missed-wakeup if the sequence number wrapped through the full 32-bit space.

  the new implementation is largely documented in the comments in the source. the basic principle is to use linked lists initially attached to the cv object, but detachable on signal/broadcast, made up of nodes residing in automatic storage (stack) on the threads that are waiting. this eliminates the need for waiters to access the cv object after they are signaled, and allows us to limit wakeup to one waiter at a time during broadcasts even when futex requeue cannot be used. performance is also greatly improved, roughly doubled in some tests.

  basically nothing is changed in the process-shared cond var case, where this implementation does not work, since processes do not have access to one another's local storage.

* fix possible failure-to-wake deadlock with robust mutexes (Rich Felker, 2014-08-17, 1 file, -1/+4)

  when the kernel is responsible for waking waiters on a robust mutex whose owner died, it does not have a waiters count available and must rely entirely on the waiter bit of the lock value. normally, this bit is only set by newly arriving waiters, so it will be clear if no new waiters arrived after the current owner obtained the lock, even if there are other waiters present. leaving it clear is desirable because it allows timed-lock operations to remove themselves as waiters and avoid causing unnecessary futex wake syscalls.

  however, for process-shared robust mutexes, we need to set the bit whenever there are existing waiters so that the kernel will know to wake them. for non-process-shared robust mutexes, the wake happens in userspace and can look at the waiters count, so the bit does not need to be set in the non-process-shared case.

* make pointers used in robust list volatile (Rich Felker, 2014-08-17, 4 files, -11/+18)

  when manipulating the robust list, the order of stores matters, because the code may be asynchronously interrupted by a fatal signal and the kernel will then access the robust list in what is essentially an async-signal context. previously, aliasing considerations made it seem unlikely that a compiler could reorder the stores, but proving that they could not be reordered incorrectly would have been extremely difficult. instead I've opted to make all the pointers used as part of the robust list, including those in the robust list head and in the individual mutexes, volatile.

  in addition, the format of the robust list has been changed to point back to the head at the end, rather than ending with a null pointer. this is to match the documented kernel robust list ABI. the null pointer, which was previously used, only worked because faults during access terminate the robust list processing.

* fix robust mutex unrecoverable status, and related clean-up (Rich Felker, 2014-08-16, 3 files, -12/+4)

  a robust mutex should not enter the unrecoverable status until it's unlocked without marking it consistent. previously, flag 8 in the type was used as an indication of unrecoverable, but only honored after successful locking; this resulted in a race window where the unrecoverable mutex could appear to a second thread as locked/busy again while the first thread was in the process of observing it as unrecoverable.

  now, flag 8 is used to mean that the mutex is in the process of being recovered, but not yet marked consistent. the flag only takes effect in pthread_mutex_unlock, where it causes the value 0x40000000 (owner dead flag, with old owner tid 0, an otherwise impossible state) to be stored in the lock. subsequent lock attempts will interpret this state as unrecoverable.

* fix false ownership of mutexes due to tid reuse, using robust list (Rich Felker, 2014-08-16, 4 files, -23/+26)

  per the resolution of Austin Group issue 755, the POSIX requirement that ownership be enforced for recursive and error-checking mutexes does not allow a random new thread to acquire ownership of an orphaned mutex just because it happened to be assigned the same tid as the original owner that exited with the mutex locked.

  one possible fix for this issue would be to disallow the kernel thread to terminate when it exited with mutexes held, permanently reserving the tid against reuse. however, this does not solve the problem for process-shared mutexes where lifetime cannot be controlled, so it was not used.

  the alternate approach I've taken is to reuse the robust mutex system for non-robust recursive and error-checking mutexes. when a thread exits, the kernel (or the new userspace robust-list code added in commit b092f1c5fa9c048e12d002c7b972df5ecbe96d1d) will set the owner-died bit for these orphaned mutexes, but since the mutex-type is not robust, pthread_mutex_trylock will not allow a new owner to acquire them. instead, they remain in a state of being permanently locked, as desired.

* optimize locking against vm changes for mmap/munmap (Rich Felker, 2014-08-16, 2 files, -8/+7)

  the whole point of this locking is to prevent munmap, or mmap with MAP_FIXED, from deallocating virtual addresses, or changing the backing a given virtual address refers to, during certain race windows involving self-synchronized unmapping or destruction of pthread synchronization objects. there is no need for exclusion in the other direction, so it suffices to take the lock momentarily and release it before making the syscall, rather than holding it across the syscall.

* enable private futex for process-local robust mutexes (Rich Felker, 2014-08-16, 3 files, -1/+25)

  the kernel always uses non-private wake when walking the robust list when a thread or process exits, so it's not able to wake waiters listening with the private futex flag. this problem is solved by doing the equivalent in userspace as the last step of pthread_exit.

  care is taken to remove mutexes from the robust list before unlocking them so that the kernel will not attempt to access them again, possibly after another thread locks them. this removal code can treat the list as singly-linked, since no further code which would add or remove items is able to run at this point. moreover, the pending pointer is not needed since the mutexes being unlocked are all process-local; in the case of asynchronous process termination, they all cease to exist.

  since a process-local robust mutex cannot come into existence without a call to pthread_mutexattr_setrobust in the same process, the code for userspace robust list processing is put in that source file, and a weak alias to a dummy function is used to avoid pulling in this bloat as part of pthread_exit in static-linked programs.

* make futex operations use private-futex mode when possible (Rich Felker, 2014-08-15, 23 files, -66/+82)

  private-futex uses the virtual address of the futex int directly as the hash key rather than requiring the kernel to resolve the address to an underlying backing for the mapping in which it lies. for certain usage patterns it improves performance significantly.

  in many places, the code using futex __wake and __wait operations was already passing a correct fixed zero or nonzero flag for the priv argument, so no change was needed at the site of the call, only in the __wake and __wait functions themselves. in other places, especially where the process-shared attribute for a synchronization object was not previously tracked, additional new code is needed.

  for mutexes, the only place to store the flag is in the type field, so additional bit masking logic is needed for accessing the type. for non-process-shared condition variable broadcasts, the futex requeue operation is unable to requeue from a private futex to a process-shared one in the mutex structure, so requeue is simply disabled in this case by waking all waiters.

  for robust mutexes, the kernel always performs a non-private wake when the owner dies. in order not to introduce a behavioral regression in non-process-shared robust mutexes (when the owning thread dies), they are simply forced to be treated as process-shared for now, giving correct behavior at the expense of performance. this can be fixed by adding explicit code to pthread_exit to do the right thing for non-shared robust mutexes in userspace rather than relying on the kernel to do it, and will be fixed in this way later.

  since not all supported kernels have private futex support, the new code detects EINVAL from the futex syscall and falls back to making the call without the private flag. no attempt to cache the result is made; caching it and using the cached value efficiently is somewhat difficult, and not worth the complexity when the benefits would be seen only on ancient kernels which have numerous other limitations and bugs anyway.

* fix #ifdef inside a macro argument list in __init_tls.c (Szabolcs Nagy, 2014-08-13, 1 file, -4/+3)

  C99 6.10.3p11 disallows such constructs, so use an #ifdef outside of the argument list of __syscall.

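  a standalone illustration of the rule (all names here are made up; this is not the __init_tls.c code): the conditional has to wrap two complete invocations instead of appearing between a macro's arguments.

    #include <stdio.h>

    #define LOG(fmt, ...) printf(fmt, __VA_ARGS__)

    void report(int x)
    {
    /* C99 6.10.3p11: a preprocessing directive may not appear inside the
     * argument list of a macro invocation, so select between two complete
     * calls here... */
    #ifdef VERBOSE_EXTRA            /* hypothetical feature macro */
        LOG("x=%d extra=%d\n", x, x * 2);
    #else
        LOG("x=%d\n", x);
    #endif
    /* ...rather than putting the #ifdef between LOG's arguments. */
    }
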
* add inline isspace in ctype.h as an optimization (Szabolcs Nagy, 2014-08-13, 2 files, -8/+1)

  isspace can be a bottleneck in a simple parser; inlining it gives slightly smaller and faster code. src/locale/pleval.o already had this optimization. the size change for other libc functions for i386 is:

    src/internal/intscan.o        2134    2118    -16
    src/locale/dcngettext.o       1562    1552    -10
    src/network/res_msend.o       1961    1940    -21
    src/network/lookup_name.o     2627    2608    -19
    src/network/getnameinfo.o     1814    1811     -3
    src/network/lookup_serv.o      643     624    -19
    src/stdio/vfscanf.o           2675    2663    -12
    src/stdlib/atoll.o             117     107    -10
    src/stdlib/atoi.o               95      91     -4
    src/stdlib/atol.o               95      91     -4
    src/time/strptime.o           1515    1503    -12
    (TOTALS)                    432451  432321   -130

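  the inline form is essentially a one-line range check; a hedged sketch of the kind of definition meant, shown under an assumed name rather than as the actual ctype.h macro:

    /* space, or one of '\t' '\n' '\v' '\f' '\r' (the five characters
     * immediately following '\t' in ASCII) */
    static inline int isspace_inline(int c)
    {
        return c == ' ' || (unsigned)c - '\t' < 5;
    }
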
* add dlerror message for static-linked dlsym failure (Rich Felker, 2014-08-08, 1 file, -0/+2)

* fix dlerror when using dlopen with a static libc (Clément Vasseur, 2014-08-08, 1 file, -0/+2)

  when the dynamic loader is disabled, dlopen fails correctly, but dlerror did not return a human-readable error string like it should have.
