path: root/src/internal
Commit log, newest first. Each entry shows: subject (author, date; files changed, lines deleted/added), followed by the commit message.

* unify static and dynamic linked implementations of thread-local storage (Rich Felker, 2015-11-12; 1 file, -1/+8)

    this both allows removal of some of the main remaining uses of the SHARED macro and clears one obstacle to static-linked dlopen support, which may be added at some point in the future.

    specialized single-TLS-module versions of __copy_tls and __reset_tls are removed and replaced with code adapted from their dynamic-linked versions, capable of operating on a whole chain of TLS modules, and use of the dynamic linker's DSO chain (which contains large struct dso objects) by these functions is replaced with a new chain of struct tls_module objects containing only the information needed for implementing TLS.

    this may also yield some performance benefit in initializing TLS for a new thread when a large number of modules without TLS have been loaded, since there is no need to walk structures for modules without TLS.

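    a minimal sketch, not taken from the patch, of the kind of reduced per-module record and chain walk the message describes; the field names here are assumptions:

        #include <stddef.h>
        #include <string.h>

        /* hypothetical reduced TLS record; the real struct tls_module may
           differ in member names and layout */
        struct tls_module {
            struct tls_module *next;  /* next module that has TLS */
            void *image;              /* TLS initialization image */
            size_t len;               /* length of the image */
            size_t size, align;       /* total block size and alignment */
            size_t offset;            /* placement of the block in the TLS area */
        };

        /* initializing a new thread's TLS by walking the chain (sketch only) */
        static void copy_tls_sketch(unsigned char *mem, struct tls_module *head)
        {
            for (struct tls_module *p = head; p; p = p->next)
                memcpy(mem + p->offset, p->image, p->len);
        }
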
* eliminate use of SHARED macro to suppress visibility attributes (Rich Felker, 2015-11-11; 2 files, -14/+4)

    this is the first and simplest stage of removal of the SHARED macro, which will eventually allow libc.a and libc.so to be produced from the same object files.

    the original motivation for these #ifdefs which are now being removed was to allow building a static-only libc using a compiler that does not support visibility. however, SHARED was the wrong condition to test for this anyway; various assembly-language sources refer to hidden symbols and declare them with the .hidden directive, making it wrong to define the referenced symbols as non-hidden.

    if there is a need in the future to build libc using compilers that lack visibility, support could be moved to the build system or perhaps the __PIC__ macro could be checked instead of SHARED.

* fix dynamic loader library mapping for nommu systems (Rich Felker, 2015-11-11; 1 file, -0/+4)

    on linux/nommu, non-writable private mappings of files may actually use memory shared with other processes or the fs cache. the old nommu loader code (used when mmap with MAP_FIXED fails) simply wrote over top of the original file mapping, possibly clobbering this shared memory. no such breakage was observed in practice, but it should have been possible.

    the new code starts by mapping anonymous writable memory on archs that might support nommu, then maps load segments over top of it, falling back to read if MAP_FIXED fails.

    we use an anonymous map rather than a writable file map to avoid reading more data from disk than needed. since pages cannot be loaded lazily on fault, in case of large data/bss, mapping the full file may read a lot of data that will subsequently be thrown away when processing additional LOAD segments. as a result, we cannot skip the first LOAD segment when operating in this mode.

    these changes affect only non-FDPIC nommu support.

* explicitly assemble all arm asm sources as UAL (Rich Felker, 2015-11-10; 1 file, -0/+1)

    these files are all accepted as legacy arm syntax when producing arm code, but legacy syntax cannot be used for producing thumb2 with access to the full ISA. even after switching to UAL, some asm source files contain instructions which are not valid in thumb mode, so these will need to be addressed separately.

* remove non-working pre-armv4t support from arm asm (Rich Felker, 2015-11-09; 1 file, -2/+0)

    the idea of the three-instruction sequence being removed was to be able to return to thumb code when used on armv4t+ from a thumb caller, but also to be able to run on armv4 without the bx instruction available (in which case the low bit of lr would always be 0). however, without compiler support for generating such a sequence from C code, which does not exist and which there is unlikely to be interest in implementing, there is little point in having it in the asm, and it would likely be easier to add pre-armv4t support via enhanced linker handling of R_ARM_V4BX than at the compiler level.

    removing this code simplifies adding support for building libc in thumb2-only form (for cortex-m).

* eliminate protected-visibility data in libc.so with vis.h preinclude (Rich Felker, 2015-09-29; 1 file, -0/+3)

    some newer binutils versions print scary warnings about protected data because most gcc versions fail to produce the right address references/relocations for such data that might be subject to copy relocations. originally vis.h explicitly assigned default visibility to all public data symbols to avoid this issue, but commit b8dda24fe1caa901a99580f7a52defb95aedb67c removed this treatment for stdin/out/err to work around a gcc 3.x bug, and because they don't actually need it (taking their addresses is not valid C).

    instead, a check for the gcc 3.x bug is added to the configure check for vis.h preinclude support; this feature will simply be disabled when using a buggy version of gcc.

* fix signal return for sh/fdpic (Rich Felker, 2015-09-23; 1 file, -0/+2)

    the restorer function pointer provided in the kernel sigaction structure is interpreted by the kernel as a raw code address, not a function descriptor.

    this commit moves the declarations of the __restore and __restore_rt symbols to ksigaction.h so that arch versions of the file can override them, and introduces a version for sh which declares them as objects rather than functions.

    an alternate solution would have been defining SA_RESTORER to 0 so that the functions are not used, but this both requires executable stack (since the sh kernel does not have a vdso page with permanent restorer functions) and crashes on qemu user-level emulation.

* add real fdpic loading of shared libraries (Rich Felker, 2015-09-22; 1 file, -0/+4)

    previously, the normal ELF library loading code was used even for fdpic, so only the kernel-loaded dynamic linker and main app could benefit from separate placement of segments and shared text.

* add general fdpic support in dynamic linker and arch support for sh (Rich Felker, 2015-09-22; 1 file, -3/+14)

    at this point not all functionality is complete. the dynamic linker itself, and main app if it is also loaded by the kernel, take advantage of fdpic and do not need constant displacement between segments, but additional libraries loaded by the dynamic linker follow normal ELF semantics for mapping still. this fully works, but does not admit shared text on nommu.

    in terms of actual functional correctness, dlsym's results are presently incorrect for function symbols, RTLD_NEXT fails to identify the caller correctly, and dladdr fails almost entirely.

    with the dynamic linker entry point working, support for static pie is automatically included, but linking the main application as ET_DYN (pie) probably does not make sense for fdpic anyway. ET_EXEC is equally relocatable but more efficient at representing relocations.

* add fdpic structs and reloc types for dynamic linking (Rich Felker, 2015-09-17; 1 file, -0/+16)

* provide arch-generic fdpic self-relocation code for crt1 to use (Rich Felker, 2015-09-12; 1 file, -0/+28)

    this file is intended to be included by crt_arch.h on fdpic-based targets and needs to be called from the entry point asm.

* fix local-dynamic model TLS on mips and powerpc (Rich Felker, 2015-06-25; 1 file, -0/+4)

    the TLS ABI spec for mips, powerpc, and some other (presently unsupported) RISC archs has the return value of __tls_get_addr offset by +0x8000 and the result of DTPOFF relocations offset by -0x8000. I had previously assumed this part of the ABI was actually just an implementation detail, since the adjustments cancel out. however, when the local dynamic model is used for accessing TLS that's known to be in the same DSO, either of the following may happen:

    1. the -0x8000 offset may already be applied to the argument structure passed to __tls_get_addr at ld time, without any opportunity for runtime relocations.

    2. __tls_get_addr may be used with a zero offset argument to obtain a base address for the module's TLS, to which the caller then applies immediate offsets for individual objects accessed using the local dynamic model. since the immediate offsets have the -0x8000 adjustment applied to them, the base address they use needs to include the +0x8000 offset.

    it would be possible, but more complex, to store the pointers in the dtv[] array with the +0x8000 offset pre-applied, to avoid the runtime cost of adding 0x8000 on each call to __tls_get_addr. this change could be made later if measurements show that it would help.

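    a sketch of the adjustment described above; the DTP_OFFSET name and the dtv accessor are assumptions, not necessarily the identifiers used in the tree:

        #include <stddef.h>

        #define DTP_OFFSET 0x8000              /* assumed name for the per-arch bias */

        extern void **__get_dtv_sketch(void);  /* hypothetical: returns the thread's dtv */

        /* v[0] is the module id, v[1] the requested offset; on mips/powerpc the
           linker has already applied -0x8000 to v[1] (or to the immediates the
           caller adds afterward), so the return value carries the +0x8000 bias */
        void *tls_get_addr_sketch(size_t *v)
        {
            void **dtv = __get_dtv_sketch();
            return (char *)dtv[v[0]] + v[1] + DTP_OFFSET;
        }
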
* switch to using trap number 31 for syscalls on sh (Rich Felker, 2015-06-16; 1 file, -1/+1)

    nominally the low bits of the trap number on sh are the number of syscall arguments, but they have never been used by the kernel, and some code making syscalls does not even know the number of arguments and needs to pass an arbitrary high number anyway.

    sh3/sh4 traditionally used the trap range 16-31 for syscalls, but part of this range overlapped with hardware exceptions/interrupts on sh2 hardware, so an incompatible range 32-47 was chosen for sh2.

    using trap number 31 everywhere, since it's in the existing sh3/sh4 range and does not conflict with sh2 hardware, is a proposed unification of the kernel syscall convention that will allow binaries to be shared between sh2 and sh3/sh4. if this is not accepted into the kernel, we can refit the sh2 target with runtime selection mechanisms for the trap number, but doing so would be invasive and would entail non-trivial overhead.

* refactor stdio open file list handling, move it out of global libc struct (Rich Felker, 2015-06-16; 2 files, -4/+3)

    functions which open in-memory FILE stream variants all shared a tail with __fdopen, adding the FILE structure to stdio's open file list. replacing this common tail with a function call reduces code size and duplication of logic. the list is also partially encapsulated now.

    function signatures were chosen to facilitate tail call optimization and reduce the need for additional accessor functions.

    with these changes, static linked programs that do not use stdio no longer have an open file list at all.

* byte-based C locale, phase 3: make MB_CUR_MAX variable to activate code (Rich Felker, 2015-06-16; 1 file, -0/+3)

    this patch activates the new byte-based C locale (high bytes treated as abstract code unit "characters" rather than decoded as multibyte characters) by making the value of MB_CUR_MAX depend on the active locale. for the C locale, the LC_CTYPE category pointer is null, yielding a value of 1. all other locales yield a value of 4.

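    a sketch of how a locale-dependent MB_CUR_MAX could be wired up internally; the structure and helper names here are illustrative, not necessarily those in locale_impl.h:

        #include <locale.h>
        #include <stddef.h>

        /* illustrative: a null LC_CTYPE pointer means the byte-based C locale */
        struct locale_sketch { const void *cat[6]; };

        static size_t mb_cur_max_sketch(const struct locale_sketch *loc)
        {
            /* UTF-8 locales need up to 4 bytes per character; C locale is single-byte */
            return loc->cat[LC_CTYPE] ? 4 : 1;
        }
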
* byte-based C locale, phase 2: stdio and iconv (multibyte callers) (Rich Felker, 2015-06-16; 1 file, -0/+1)

    this patch adjusts libc components which use the multibyte functions internally, and which depend on them operating in a particular encoding, to make the appropriate locale changes before calling them and restore the calling thread's locale afterwards. activating the byte-based C locale without these changes would cause regressions in stdio and iconv.

    in the case of iconv, the current implementation was simply using the multibyte functions as UTF-8 conversions. setting a multibyte UTF-8 locale for the duration of the iconv operation allows the code to continue working.

    in the case of stdio, POSIX requires that FILE streams have an encoding rule bound at the time of setting wide orientation. as long as all locales, including the C locale, used the same encoding, treating high bytes as UTF-8, there was no need to store an encoding rule as part of the stream's state.

    a new locale field in the FILE structure points to the locale that should be made active during fgetwc/fputwc/ungetwc on the stream. it cannot point to the locale active at the time the stream becomes oriented, because this locale could be mutable (the global locale) or could be destroyed (locale_t objects produced by newlocale) before the stream is closed. instead, a pointer to the static C or C.UTF-8 locale object added in commit aeeac9ca5490d7d90fe061ab72da446c01ddf746 is used. this is valid since categories other than LC_CTYPE will not affect these functions.

* add multiple inclusion guard to locale_impl.h (Rich Felker, 2015-06-07; 1 file, -0/+5)

* remove redefinition of MB_CUR_MAX in locale_impl.h (Rich Felker, 2015-06-07; 1 file, -3/+0)

    unless/until the byte-based C locale is implemented, defining MB_CUR_MAX to 1 in the C locale is wrong. no internal code currently uses the MB_CUR_MAX macro, but having it defined inconsistently is error-prone. applications get the value from stdlib.h and were unaffected.

* make static C and C.UTF-8 locales available outside of newlocale (Rich Felker, 2015-06-06; 1 file, -0/+7)

* overhaul locale internals to treat categories roughly uniformly (Rich Felker, 2015-05-27; 2 files, -7/+5)

    previously, LC_MESSAGES was treated specially as the only category which could be set to a locale name without a definition file, in order to facilitate gettext message translations when no libc locale was available. LC_NUMERIC was completely un-settable, and LC_CTYPE stored a flag intended to be used for a possible future byte-based C locale, instead of storing a __locale_map pointer like the other categories use.

    this patch changes all categories to be represented by pointers to __locale_map structures, and allows locale names without definition files to be treated as valid locales with trivial definition when used in any category.

    outwardly visible functional changes should be minor, limited mainly to the strings read back from setlocale and the way gettext handles translations in categories other than LC_MESSAGES.

    various internal refactoring has also been performed, and improvements in const correctness have been made.

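    a small usage sketch of the outwardly visible behavior described: with this change, a locale name that has no definition file is still accepted as a valid (trivial) locale in any category; the exact strings printed are illustrative:

        #include <locale.h>
        #include <stdio.h>

        int main(void)
        {
            /* previously LC_NUMERIC could not be set at all; now a name without
               a definition file is accepted as a trivially-defined locale */
            char *r = setlocale(LC_NUMERIC, "de_DE.UTF-8");
            printf("LC_NUMERIC: %s\n", r ? r : "(rejected)");
            printf("composite:  %s\n", setlocale(LC_ALL, NULL));
            return 0;
        }
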
* move call to dynamic linker stage-3 into stage-2 function (Rich Felker, 2015-05-25; 1 file, -1/+1)

    this move eliminates a duplicate "by-hand" symbol lookup loop from the stage-1 code and replaces it with a call to find_sym, which can be used once we're in stage 2. it reduces the size of the stage 1 code, which is helpful because stage 1 will become the crt start file for static-PIE executables, and it will allow stage 3 to access stage 2's automatic storage, which will be important in an upcoming commit.

* eliminate costly tricks to avoid TLS access for current locale state (Rich Felker, 2015-05-16; 2 files, -6/+2)

    the code being removed used atomics to track whether any threads might be using a locale other than the current global locale, and whether any threads might have abstract 8-bit (non-UTF-8) LC_CTYPE active, a feature which was never committed (still pending).

    the motivations were to support early execution prior to setup of the thread pointer, to partially support systems (ancient kernels) where thread pointer setup is not possible, and to avoid high performance cost on archs where accessing the thread pointer may be very slow.

    since commit 19a1fe670acb3ab9ead0fe31859ca7d4fe40dd54, the thread pointer is always available, so these hacks are no longer needed. removing them greatly simplifies the affected code.

* fix stack protector crashes on x32 & powerpc due to misplaced TLS canary (Rich Felker, 2015-05-06; 1 file, -1/+6)

    i386, x86_64, x32, and powerpc all use TLS for stack protector canary values in the default stack protector ABI, but the location only matched the ABI on i386 and x86_64. on x32, the expected location for the canary contained the tid, thus producing spurious mismatches (resulting in process termination) upon fork. on powerpc, the expected location contained the stdio_locks list head, so returning from a function after calling flockfile produced spurious mismatches. in both cases, the random canary was not present, and a predictable value was used instead, making the stack protector hardening much less effective than it should be.

    in the current fix, the thread structure has been expanded to have canary fields at all three possible locations, and archs that use a non-default location must define a macro in pthread_arch.h to choose which location is used. for most archs (which lack TLS canary ABI) the choice does not matter.

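    a sketch of the layout idea: the thread structure carries a canary slot at each candidate location and pthread_arch.h selects one; the member and macro names below are assumptions, not necessarily those used in pthread_impl.h:

        #include <stdint.h>

        /* illustrative thread struct: only canary-related members are shown */
        struct thread_canary_sketch {
            struct thread_canary_sketch *self;
            void **dtv;
            uintptr_t canary;          /* default slot (matches the i386/x86_64 ABI) */
            uintptr_t canary2;         /* slot expected by another ABI variant */
            int tid;
            /* ... other members ... */
            uintptr_t canary_at_end;   /* slot expected at the end of the struct */
        };

        /* an arch whose ABI uses a non-default slot would define, in its
           pthread_arch.h, something like: #define CANARY canary_at_end */
        #ifndef CANARY
        #define CANARY canary
        #endif

        /* startup code then stores the random value via the selected member:
           self->CANARY = random_canary; */
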
* in visibility preinclude, remove overrides for stdin/stdout/stderr (Rich Felker, 2015-04-22; 1 file, -3/+0)

    the motivation for this change is that the extra declaration (with or without visibility) using "struct _IO_FILE" instead of "FILE" seems to trigger a bug in gcc 3.x where it considers the types mismatched.

    however, this change also results in slightly better code and it is valid because (1) these three objects are constant, and (2) applying the & operator to any of them is invalid C, since they are not even specified to be objects. thus it does not matter if the application and libc see different addresses for them, as long as the (initial, unchanging) value is seen the same by both.

* fix inconsistent visibility for __hwcap and __sysinfo symbols (Rich Felker, 2015-04-22; 1 file, -2/+3)

    these are used as hidden by asm files (and such use is the whole reason they exist), but their actual definitions were not hidden.

* remove additional libc struct accessor cruft (Rich Felker, 2015-04-22; 1 file, -12/+0)

    commit f9cccfc16e58b39ee381fbdfb8688db3bb8e3555 left behind the part in libc.c; remove it too.

* remove cruft for libc struct accessor function and broken visibility (Rich Felker, 2015-04-22; 1 file, -14/+0)

    these were hacks to work around toolchains that could not properly optimize PIC accesses based on visibility and would generate GOT lookups even for hidden data, which broke the old dynamic linker.

    since commit f3ddd173806fd5c60b3f034528ca24542aecc5b9 it no longer matters; the dynamic linker does not assume accessibility of this data until stage 3.

* add optional global visibility override (Rich Felker, 2015-04-19; 1 file, -0/+40)

    this is implemented via the build system and does not affect source files.

    the idea is to use protected or hidden visibility to prevent the compiler from pessimizing function calls within a shared (or position-independent static) libc in the form of overhead setting up for a call through the PLT. the ld-time symbol binding via the -Bsymbolic-functions option already optimized out the PLT itself, but not the code in the caller needed to support a call through the PLT. on some archs this overhead can be substantial; on others it's trivial.

* make dlerror state and message thread-local and dynamically-allocated (Rich Felker, 2015-04-18; 1 file, -0/+2)

    this fixes truncation of error messages containing long pathnames or symbol names.

    the dlerror state was previously required by POSIX to be global. the resolution of bug 97 relaxed the requirements to allow thread-safe implementations of dlerror with thread-local state and message buffer.

* add missing 'void' in prototypes of internal pthread functions (Alexander Monakov, 2015-04-18; 1 file, -6/+6)

* fix inconsistent visibility for internal syscall symbols (Rich Felker, 2015-04-14; 11 files, -1/+11)

* use hidden visibility for i386 asm-internal __vsyscall symbol (Rich Felker, 2015-04-14; 1 file, -0/+2)

    otherwise the call instruction in the inline syscall asm results in textrels without ld-time binding.

* remove remnants of support for running in no-thread-pointer mode (Rich Felker, 2015-04-13; 1 file, -2/+1)

    since 1.1.0, musl has nominally required a thread pointer to be setup. most of the remaining code that was checking for its availability was doing so for the sake of being usable by the dynamic linker. as of commit 71f099cb7db821c51d8f39dfac622c61e54d794c, this is no longer necessary; the thread pointer is now valid before any libc code (outside of dynamic linker bootstrap functions) runs.

    this commit essentially concludes "phase 3" of the "transition path for removing lazy init of thread pointer" project that began during the 1.1.0 release cycle.

* dynamic linker bootstrap overhaul (Rich Felker, 2015-04-13; 1 file, -0/+57)

    this overhaul further reduces the amount of arch-specific code needed by the dynamic linker and removes a number of assumptions, including:

    - that symbolic function references inside libc are bound at link time via the linker option -Bsymbolic-functions.
    - that libc functions used by the dynamic linker do not require access to data symbols.
    - that static/internal function calls and data accesses can be made without performing any relocations, or that arch-specific startup code handled any such relocations needed.

    removing these assumptions paves the way for allowing libc.so itself to be built with stack protector (among other things), and is achieved by a three-stage bootstrap process:

    1. relative relocations are processed with a flat function.
    2. symbolic relocations are processed with no external calls/data.
    3. main program and dependency libs are processed with a fully-functional libc/ldso.

    reduction in arch-specific code is achieved through the following:

    - crt_arch.h, used for generating crt1.o, now provides the entry point for the dynamic linker too.
    - asm is no longer responsible for skipping the beginning of argv[] when ldso is invoked as a command.
    - the functionality previously provided by __reloc_self for heavily GOT-dependent RISC archs is now the arch-agnostic stage-1.
    - arch-specific relocation type codes are mapped directly as macros rather than via an inline translation function/switch statement.

* redesign and simplify vmlock system (Rich Felker, 2015-04-10; 1 file, -0/+4)

    this global lock allows certain unlock-type primitives to exclude mmap/munmap operations which could change the identity of virtual addresses while references to them still exist.

    the original design mistakenly assumed mmap/munmap would conversely need to exclude the same operations which exclude mmap/munmap, so the vmlock was implemented as a sort of 'symmetric recursive rwlock'. this turned out to be unnecessary. commit 25d12fc0fc51f1fae0f85b4649a6463eb805aa8f already shortened the interval during which mmap/munmap held their side of the lock, but left the inappropriate lock design and some inefficiency.

    the new design uses a separate function, __vm_wait, which does not hold any lock itself and only waits for lock users which were already present when it was called to release the lock. this is sufficient because of the way operations that need to be excluded are sequenced: the "unlock-type" operations using the vmlock need only block mmap/munmap operations that are precipitated by (and thus sequenced after) the atomic-unlock they perform while holding the vmlock. this allows for a spectacular lack of synchronization in the __vm_wait function itself.

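    a sketch of the asymmetric scheme described, written against musl-style internal helpers (a_inc, a_fetch_add, __wait, __wake); treat the exact details as assumptions rather than the code of the patch:

        #include "pthread_impl.h"   /* assumed: provides a_inc, a_fetch_add, __wait, __wake */

        /* vmlock[0] counts active "unlock-type" users; vmlock[1] flags waiters */
        static volatile int vmlock[2];

        static void vm_lock_sketch(void)   { a_inc(vmlock); }

        static void vm_unlock_sketch(void)
        {
            if (a_fetch_add(vmlock, -1) == 1 && vmlock[1])
                __wake(vmlock, -1, 1);
        }

        /* called by mmap/munmap paths: takes no lock, only waits for users that
           were already holding the vmlock when it was called */
        static void vm_wait_sketch(void)
        {
            int tmp;
            while ((tmp = vmlock[0]))
                __wait(vmlock, vmlock+1, tmp, 1);
        }
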
* add aarch64 port (Szabolcs Nagy, 2015-03-11; 1 file, -0/+13)

    This adds complete aarch64 target support including bigendian subarch.

    Some of the long double math functions are known to be broken; otherwise interfaces should be fully functional, but at this point consider this port experimental.

    Initial work on this port was done by Sireesh Tripurari and Kevin Bortis.

* math: add dummy implementations of 128 bit long double functions (Szabolcs Nagy, 2015-03-11; 1 file, -0/+14)

    This is in preparation for the aarch64 port only to have the long double math symbols available on ld128 platforms. The implementations should be fixed up later once we have proper tests for these functions.

    Added bigendian handling for ld128 bit manipulations too.

* copy the dtv pointer to the end of the pthread struct for TLS_ABOVE_TP archs (Szabolcs Nagy, 2015-03-11; 1 file, -0/+1)

    There are two main abi variants for thread local storage layout:

    (1) TLS is above the thread pointer at a fixed offset and the pthread struct is below that. So the end of the struct is at known offset.

    (2) the thread pointer points to the pthread struct and TLS starts below it. So the start of the struct is at known (zero) offset.

    Assembly code for the dynamic TLSDESC callback needs to access the dynamic thread vector (dtv) pointer which is currently at the front of the pthread struct. So in case of (1) the asm code needs to hard code the offset from the end of the struct which can easily break if the struct changes.

    This commit adds a copy of the dtv at the end of the struct. New members must not be added after dtv_copy, only before it. The size of the struct is increased a bit, but there is opportunity for size optimizations.

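    a sketch of the layout constraint described; the struct shown is illustrative, not the real pthread structure:

        #include <stddef.h>

        struct thread_end_sketch {
            void **dtv;          /* primary dtv pointer near the start; its offset
                                    may change as members are added */
            /* ... other members; new ones must be added above dtv_copy ... */
            void **dtv_copy;     /* duplicate kept in sync; must remain last so its
                                    distance from the end of the struct is fixed */
        };

        /* the TLSDESC asm for variant (1) can then locate the dtv at a constant
           offset back from the end of the struct, independent of other members */
        enum { DTV_BACK_OFFSET = sizeof(struct thread_end_sketch)
                                 - offsetof(struct thread_end_sketch, dtv_copy) };
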
* make all objects used with atomic operations volatile (Rich Felker, 2015-03-03; 3 files, -20/+20)

    the memory model we use internally for atomics permits plain loads of values which may be subject to concurrent modification without requiring that a special load function be used. since a compiler is free to make transformations that alter the number of loads or the way in which loads are performed, the compiler is theoretically free to break this usage.

    the most obvious concern is with atomic cas constructs: something of the form tmp=*p;a_cas(p,tmp,f(tmp)); could be transformed to a_cas(p,*p,f(*p)); where the latter is intended to show multiple loads of *p whose resulting values might fail to be equal; this would break the atomicity of the whole operation. but even more fundamental breakage is possible.

    with the changes being made now, objects that may be modified by atomics are modeled as volatile, and the atomic operations performed on them by other threads are modeled as asynchronous stores by hardware which happens to be acting on the request of another thread. such modeling of course does not itself address memory synchronization between cores/cpus, but that aspect was already handled. this all seems less than ideal, but it's the best we can do without mandating a C11 compiler and using the C11 model for atomics.

    in the case of pthread_once_t, the ABI type of the underlying object is not volatile-qualified. so we are assuming that accessing the object through a volatile-qualified lvalue via casts yields volatile access semantics. the language of the C standard is somewhat unclear on this matter, but this is an assumption the linux kernel also makes, and seems to be the correct interpretation of the standard.

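    a sketch of the cas pattern the volatile qualification is meant to protect; with p volatile, the single load into tmp cannot be duplicated or re-read by the compiler:

        #include "atomic.h"   /* assumed: musl's internal a_cas(volatile int *, int, int),
                                 which returns the old value */

        static void atomic_update_sketch(volatile int *p, int (*f)(int))
        {
            int tmp, old;
            do {
                tmp = *p;                     /* exactly one load of the shared object */
                old = a_cas(p, tmp, f(tmp));  /* succeeds only if *p still equals tmp */
            } while (old != tmp);
        }
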
* factor cancellation cleanup push/pop out of futex __timedwait function (Rich Felker, 2015-03-02; 1 file, -1/+2)

    previously, the __timedwait function was optionally a cancellation point depending on whether it was passed a pointer to a cleanup function and context to register. as of now, only one caller actually used such a cleanup function (and it may face removal soon); most callers either passed a null pointer to disable cancellation or a dummy cleanup function.

    now, __timedwait is never a cancellation point, and __timedwait_cp is the cancellable version. this makes the intent of the calling code more obvious and avoids ugly dummy functions and long argument lists.

* add IEEE binary128 long double support to floatscan (Szabolcs Nagy, 2015-02-09; 1 file, -1/+9)

    just defining the necessary constants:

    LD_B1B_MAX is 2^113 - 1 in base 10^9

    KMAX is 2048 so the x array can hold up to 18432 decimal digits (the worst case is converting 2^-16495 = 5^16495 * 10^-16495 to binary, it requires the processing of int(log10(5)*16495)+1 = 11530 decimal digits after discarding the leading zeros, the conversion requires some headroom in x, but KMAX is more than enough for that)

    However this code is not optimal on archs with IEEE binary128 long double because the arithmetics is software emulated (on all such platforms as far as i know) which means big and slow strtod.

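    a sketch of what such definitions plausibly look like, derived from the constants stated above (2^113 - 1 split into base-10^9 limbs, KMAX = 2048); the preprocessor condition and macro arrangement are assumptions:

        #include <float.h>

        #if LDBL_MANT_DIG == 113 && LDBL_MAX_EXP == 16384
        /* 2^113 - 1 = 10384593717069655257060992658440191, as base-10^9 limbs */
        #define LD_B1B_DIG 4
        #define LD_B1B_MAX 10384593, 717069655, 257060992, 658440191
        #define KMAX 2048          /* x[] holds up to 2048*9 = 18432 decimal digits */
        #endif
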
* remove cruft from x86_64 syscall.h (Szabolcs Nagy, 2015-02-07; 1 file, -0/+3)

    x86_64 syscall.h defined some musl internal syscall names and made them public. These defines were already moved to src/internal/syscall.h (except for SYS_fadvise which is added now) so the cruft in x86_64 syscall.h is not needed.

* add FUTEX_PRIVATE macro to internal futex.h (Rich Felker, 2015-01-15; 1 file, -0/+2)

* provide CMPLX macros in implementation-internal libm.h (Rich Felker, 2014-12-17; 1 file, -0/+12)

    this avoids assuming the presence of C11 macro definitions in the public complex.h, which need changes potentially incompatible with the way these macros are being used internally.

* unify non-inline version of syscall code across archs (Rich Felker, 2014-11-22; 1 file, -0/+10)

    except powerpc, which still lacks inline syscalls simply because nobody has written the code, these are all fallbacks used to work around a clang bug that probably does not exist in versions of clang that can compile musl. however, it's useful to have the generic non-inline code anyway, as it eases the task of porting to new archs: writing inline syscall code is now optional. this approach could also help support compilers which don't understand inline asm or lack support for the needed register constraints.

    mips could not be unified because it has special fixup code for broken layout of the kernel's struct stat.

* fix overflow corner case in strtoul-family functions (Rich Felker, 2014-09-16; 1 file, -0/+1)

    incorrect behavior occurred only in cases where the input overflows unsigned long long, not just the (possibly lower) range limit for the result type. in this case, processing of the '-' sign character was not suppressed, and the function returned a value of 1 despite setting errno to ERANGE.

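    a small demonstration of the corner case: when the magnitude of the input overflows even unsigned long long, the expected result is the range-limit value with ERANGE, not the negated wrapped value 1:

        #include <errno.h>
        #include <limits.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            errno = 0;
            /* magnitude far beyond ULLONG_MAX; before the fix this returned 1 */
            unsigned long v = strtoul("-99999999999999999999999999", 0, 10);
            printf("value=%lu erange=%d (expected value=%lu erange=1)\n",
                   v, errno == ERANGE, ULONG_MAX);
            return 0;
        }
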
* add C11 thread creation and related thread functions (Rich Felker, 2014-09-07; 1 file, -0/+2)

    based on patch by Jens Gustedt.

    the main difficulty here is handling the difference between start function signatures and thread return types for C11 threads versus POSIX threads. pointers to void are assumed to be able to represent faithfully all values of int. the function pointer for the thread start function is cast to an incorrect type for passing through pthread_create, but is cast back to its correct type before calling so that the behavior of the call is well-defined.

    changes to the existing threads implementation were kept minimal to reduce the risk of regressions, and duplication of code that carries implementation-specific assumptions was avoided for ease and safety of future maintenance.

* fix false ownership of stdio FILEs due to tid reuse (Rich Felker, 2014-08-23; 2 files, -0/+2)

    this is analogous to commit fffc5cda10e0c5c910b40f7be0d4fa4e15bb3f48 which fixed the corresponding issue for mutexes. the robust list can't be used here because the locks do not share a common layout with mutexes.

    at some point it may make sense to simply incorporate a mutex object into the FILE structure and use it, but that would be a much more invasive change, and it doesn't mesh well with the current design that uses a simpler code path for internal locking and pulls in the recursive-mutex-like code when the flockfile API is used explicitly.

* fix fallback checks for kernels without private futex support (Rich Felker, 2014-08-22; 1 file, -1/+1)

    for unknown syscall commands, the kernel produces ENOSYS, not EINVAL.

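    a sketch of the corrected fallback pattern: try the private futex operation first and retry without the private flag only when the kernel reports ENOSYS for the unknown command; the helper below and its internal-header dependency are assumptions, not the patched code itself:

        #include <errno.h>
        #include <linux/futex.h>
        #include "syscall.h"        /* assumed: musl's internal __syscall and SYS_futex */

        static void futex_wait_sketch(volatile int *addr, int val, int priv)
        {
            if (priv) priv = FUTEX_PRIVATE_FLAG;
            /* the old check tested for -EINVAL here; unknown futex commands
               actually yield -ENOSYS, so that is the fallback condition */
            if (__syscall(SYS_futex, addr, FUTEX_WAIT|priv, val, 0) == -ENOSYS)
                __syscall(SYS_futex, addr, FUTEX_WAIT, val, 0);
        }
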
* redesign cond var implementation to fix multiple issues (Rich Felker, 2014-08-17; 1 file, -5/+4)

    the immediate issue that was reported by Jens Gustedt and needed to be fixed was corruption of the cv/mutex waiter states when switching to using a new mutex with the cv after all waiters were unblocked but before they finished returning from the wait function.

    self-synchronized destruction was also handled poorly and may have had race conditions. and the use of sequence numbers for waking waiters admitted a theoretical missed-wakeup if the sequence number wrapped through the full 32-bit space.

    the new implementation is largely documented in the comments in the source. the basic principle is to use linked lists initially attached to the cv object, but detachable on signal/broadcast, made up of nodes residing in automatic storage (stack) on the threads that are waiting. this eliminates the need for waiters to access the cv object after they are signaled, and allows us to limit wakeup to one waiter at a time during broadcasts even when futex requeue cannot be used. performance is also greatly improved, roughly double on some tests.

    basically nothing is changed in the process-shared cond var case, where this implementation does not work, since processes do not have access to one another's local storage.