...
* remove the last of possible-textrels from i386 asm (Rich Felker, 2015-04-18; 6 files, -4/+16)
  none of these are actual textrels because of ld-time binding performed by -Bsymbolic-functions, but I'm changing them with the goal of making ld-time binding purely an optimization rather than relying on it for semantic purposes. in the case of memmove's call to memcpy, making it explicit that the memmove asm is assuming the forward-copying behavior of the memcpy asm is desirable anyway; in case memcpy is ever changed, the semantic mismatch would be apparent while editing memcpy.s.
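As background for the ld-time-binding theme running through these commits, here is a minimal C sketch of the hidden-alias pattern (identifiers illustrative; the actual change above is in i386 asm): internal callers reference a hidden alias, which must resolve inside the library at link time, so no PLT entry or text relocation can result.

    #include <string.h>

    /* public, interposable definition */
    void *mylib_memcpy(void *restrict d, const void *restrict s, size_t n)
    {
        return memcpy(d, s, n);
    }

    /* hidden alias: calls to it bind at link time, never through the PLT */
    void *__mylib_memcpy_fwd(void *restrict, const void *restrict, size_t)
        __attribute__((__visibility__("hidden"), __alias__("mylib_memcpy")));

    /* an internal forward-copying caller (e.g. a memmove fast path) uses the
       hidden alias, making its dependency on forward copying explicit */
    void *mylib_forward_copy(void *d, const void *s, size_t n)
    {
        return __mylib_memcpy_fwd(d, s, n);
    }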
* make dlerror state and message thread-local and dynamically-allocated (Rich Felker, 2015-04-18; 3 files, -32/+65)
  this fixes truncation of error messages containing long pathnames or symbol names. the dlerror state was previously required by POSIX to be global. the resolution of bug 97 relaxed the requirements to allow thread-safe implementations of dlerror with thread-local state and message buffer.
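A hedged sketch of the general approach described (thread-local, dynamically sized error state; not musl's exact code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* one buffer per thread, grown on demand so long messages never truncate */
    static _Thread_local char *dl_err_buf;
    static _Thread_local int dl_err_flag;

    static void set_dl_error(const char *fmt, const char *detail)
    {
        size_t need = strlen(fmt) + strlen(detail) + 1;
        char *p = realloc(dl_err_buf, need);
        if (!p) return;                 /* real code falls back to a static message */
        snprintf(p, need, fmt, detail);
        dl_err_buf = p;
        dl_err_flag = 1;
    }

    char *my_dlerror(void)
    {
        if (!dl_err_flag) return NULL;  /* dlerror reports each error only once */
        dl_err_flag = 0;
        return dl_err_buf;
    }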
* add missing 'void' in prototypes of internal pthread functions (Alexander Monakov, 2015-04-18; 1 file, -6/+6)
* math: fix pow(+-0,-inf) not to raise divbyzero flag (Szabolcs Nagy, 2015-04-18; 3 files, -3/+3)
  this reverts the commit f29fea00b5bc72d4b8abccba2bb1e312684d1fce which was based on a bug in C99 and POSIX and did not match IEEE-754; see http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1515.pdf
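A small test program illustrating the required behavior (per IEEE 754 the result is +Inf and the divide-by-zero flag stays clear); link with -lm:

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        feclearexcept(FE_ALL_EXCEPT);
        double r = pow(-0.0, -INFINITY);    /* pow(+0.0, -INFINITY) behaves the same */
        printf("result: %g\n", r);                          /* expected: inf */
        printf("divbyzero raised: %d\n",
               !!fetestexcept(FE_DIVBYZERO));                /* expected: 0 */
        return 0;
    }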
* apply hidden visibility to tlsdesc accessor functions (Rich Felker, 2015-04-17; 5 files, -0/+10)
  these functions are never called directly; only their addresses are used, so PLT indirections should never happen unless a broken application tries to redefine them, but it's still best to make them hidden.
* comment fixes in aarch64 tlsdesc asm (Szabolcs Nagy, 2015-04-17; 1 file, -4/+4)
* ensure debugger hook for dynamic linker does not point to a PLT slot (Rich Felker, 2015-04-17; 1 file, -2/+4)
  this change is made in preparation to support linking without -Bsymbolic-functions.
* add PR_*_FP_MODE prctl options (Szabolcs Nagy, 2015-04-17; 1 file, -0/+5)
  new in linux v4.0, commit 9791554b45a2acc28247f66a5fd5bbc212a6b8c8; used to work around a floating-point abi issue on mips
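A hedged usage sketch (MIPS-specific; assumes the header exposes the PR_*_FP_MODE and PR_FP_MODE_* names added by the kernel commit above):

    #include <stdio.h>
    #include <sys/prctl.h>

    int main(void)
    {
        /* query the current FP mode; on non-MIPS kernels this fails with errno set */
        int mode = prctl(PR_GET_FP_MODE, 0, 0, 0, 0);
        if (mode < 0) { perror("PR_GET_FP_MODE"); return 1; }

        /* request the FR=1 register model, for example */
        if (prctl(PR_SET_FP_MODE, PR_FP_MODE_FR, 0, 0, 0) < 0)
            perror("PR_SET_FP_MODE");
        return 0;
    }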
* add PR_MPX_*_MANAGEMENT prctl options (Szabolcs Nagy, 2015-04-17; 1 file, -0/+3)
  new in linux v3.19, commit fe3d197f84319d3bce379a9c0dc17b1f48ad358c; used for on-demand kernel allocation of bounds tables for mpx on x86
* add IP_CHECKSUM socket option to netinet/in.h (Szabolcs Nagy, 2015-04-17; 1 file, -0/+1)
  new in linux v4.0, commit ad6f939ab193750cc94a265f58e007fb598c97b7
* add execveat syscall number to microblaze (Szabolcs Nagy, 2015-04-17; 1 file, -0/+2)
  syscall number was reserved in linux v4.0, kernel commit add4b1b02da7e7ec35c34dd04d351ac53f3f0dd8
* improve ctype.h macros to diagnose errors (Rich Felker, 2015-04-17; 1 file, -6/+6)
  the casts of the argument to unsigned int suppressed diagnosis of errors like passing a pointer instead of a character. putting the actual function call in an unreachable branch restores any diagnostics that would be present if the macros didn't exist and functions were used.
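A sketch of the technique, reduced to one macro (close in spirit to what the header does): the real function call sits in a branch that is never taken, so the compiler still type-checks the argument, while the inline arithmetic test is what actually runs.

    #include <ctype.h>
    #include <stdio.h>

    #undef isdigit
    #define isdigit(a) (0 ? isdigit(a) : ((unsigned)(a)-'0') < 10)

    int main(void)
    {
        const char *s = "7x";
        printf("%d\n", isdigit(s[0]));   /* 1 */
        printf("%d\n", isdigit(s[1]));   /* 0 */
        /* isdigit(s);  <- would now draw a diagnostic: the dead branch still
           passes the argument to the real function taking int */
        return 0;
    }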
* fix missing quotation mark in mips crt_arch.h that broke build (Rich Felker, 2015-04-17; 1 file, -1/+1)
* fix mips fesetenv(FE_DFL_ENV) again (Rich Felker, 2015-04-17; 1 file, -0/+1)
  commit 5fc1487832e16aa2119e735a388d5f36c8c139e2 attempted to fix it, but neglected the fact that mips has branch delay slots.
* fix PLT call offset in sh dlsym asm (Rich Felker, 2015-04-17; 1 file, -3/+3)
  the braf instruction's destination register holds an offset from the address of the braf instruction plus 4 (or equivalently, from the address of the next instruction after the delay slot). the code for dlsym was incorrectly computing the offset to pass using the address of the delay slot itself. in other places, a label was placed after the delay slot, but I find this confusing. putting the label on the branch instruction itself, and manually adding 4, makes it more clear which branch the offset in the constant pool goes with.
* fix sh build regressions in asm (Rich Felker, 2015-04-17; 2 files, -2/+2)
  even hidden functions need @PLT symbol references; otherwise an absolute address is produced instead of a PC-relative one.
* fix sh __set_thread_area uninitialized return value (Rich Felker, 2015-04-17; 1 file, -1/+2)
  this caused the dynamic linker/startup code to abort when r0 happened to contain a negative value.
* redesign sigsetjmp so that signal mask is restored after longjmp (Rich Felker, 2015-04-17; 12 files, -133/+177)
  the conventional way to implement sigsetjmp is to save the signal mask then tail-call to setjmp; siglongjmp then restores the signal mask and calls longjmp. the problem with this approach is that a signal already pending, or arriving between unmasking of signals and restoration of the saved stack pointer, will have its signal handler run on the stack that was active before siglongjmp was called. this can lead to unbounded stack usage when siglongjmp is used to leave a signal handler.

  in the new design, sigsetjmp saves its own return address inside the extended part of the sigjmp_buf (outside the __jmp_buf part used by setjmp) then calls setjmp to save a jmp_buf inside its own execution. it then tail-calls to __sigsetjmp_tail, which uses the return value of setjmp to determine whether to save the current signal mask or restore a previously-saved mask.

  as an added bonus, this design makes it so that siglongjmp and longjmp are identical. this is useful because the __longjmp_chk function we need to add for ABI-compatibility assumes siglongjmp and longjmp are the same, but for different reasons -- it was designed assuming either can access a flag just past the __jmp_buf indicating whether the signal mask was saved, and act on that flag. however, early versions of musl did not have space past the __jmp_buf for the non-sigjmp_buf version of jmp_buf, so our setjmp cannot store such a flag without risking clobbering memory on (very) old binaries.
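For context, a small usage example of the scenario the redesign protects: leaving a signal handler via siglongjmp, with the saved mask restored only after control is back on the target stack.

    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static sigjmp_buf env;

    static void on_alarm(int sig)
    {
        (void)sig;
        siglongjmp(env, 1);     /* leave the handler; SIGALRM is still blocked here */
    }

    int main(void)
    {
        signal(SIGALRM, on_alarm);
        if (sigsetjmp(env, 1) == 0) {   /* nonzero savemask: save the signal mask */
            alarm(1);
            for (;;) pause();
        }
        /* with the new design, the mask saved by sigsetjmp is restored only after
           the stack pointer is back here, so a pending signal cannot run its
           handler on the abandoned stack */
        puts("back in main via siglongjmp");
        return 0;
    }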
* use hidden __tls_get_new for tls/tlsdesc lookup fallback cases (Rich Felker, 2015-04-14; 4 files, -5/+13)
  previously, the dynamic tlsdesc lookup functions and the i386 special-ABI ___tls_get_addr (3 underscores) function called __tls_get_addr when the slot they wanted was not already setup; __tls_get_addr would then in turn also see that it's not setup and call __tls_get_new. calling __tls_get_new directly is both more efficient and avoids the issue of calling a non-hidden (public API/ABI) function from asm.

  for the special i386 function, a weak reference to __tls_get_new is used since this function is not defined when static linking (the code path that needs it is unreachable in static-linked programs).
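A minimal sketch of a weak reference of the kind described (illustrative name and signature, not musl's exact prototype): if no definition is linked in, the symbol's address is simply null instead of causing a link error.

    #include <stdio.h>

    /* weak *reference*: declared here, possibly never defined anywhere */
    extern void *__tls_get_new_sketch(void *) __attribute__((__weak__));

    int main(void)
    {
        if (__tls_get_new_sketch)
            puts("dynamic TLS fallback available");
        else
            puts("static link: fallback path unreachable, symbol is null");
        return 0;
    }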
* cleanup use of visibility attributes in pthread_cancel.c (Rich Felker, 2015-04-14; 1 file, -8/+9)
  applying the attribute to a weak_alias macro was a hack. instead use a separate declaration to apply the visibility, and consolidate declarations together to avoid having visibility mess all over the file.
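A sketch of the consolidated-declaration pattern described above (names illustrative): hidden visibility is applied by one standalone declaration covering the internal symbols, and the weak_alias macro stays free of attributes.

    #define weak_alias(old, new) \
        extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

    /* visibility for the asm-visible internals, applied in one place */
    __attribute__((__visibility__("hidden")))
    long __cancel_sketch(void), __syscall_cp_c_sketch(void);

    long __cancel_sketch(void) { return -1; }
    long __syscall_cp_c_sketch(void) { return __cancel_sketch(); }

    weak_alias(__syscall_cp_c_sketch, __syscall_cp_sketch);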
* fix inconsistent visibility for internal syscall symbols (Rich Felker, 2015-04-14; 12 files, -1/+16)
* use hidden visibility for call from dlsym to internal __dlsym (Rich Felker, 2015-04-14; 11 files, -3/+14)
* consistently use hidden visibility for cancellable syscall internals (Rich Felker, 2015-04-14; 12 files, -30/+103)
  in a few places, non-hidden symbols were referenced from asm in ways that assumed ld-time binding. while there is no semantic reason these symbols need to be hidden, fixing the references without making them hidden was going to be ugly, and hidden reduces some bloat anyway. in the asm files, .global/.hidden directives have been moved to the top to unclutter the actual code.
* fix inconsistent visibility for internal __tls_get_new function (Rich Felker, 2015-04-14; 2 files, -3/+3)
  at the point of call it was declared hidden, but the definition was not hidden. for some toolchains this inconsistency produced textrels without ld-time binding.
* use hidden visibility for i386 asm-internal __vsyscall symbol (Rich Felker, 2015-04-14; 2 files, -7/+9)
  otherwise the call instruction in the inline syscall asm results in textrels without ld-time binding.
* make _dlstart_c function use hidden visibility (Rich Felker, 2015-04-14; 1 file, -0/+1)
  otherwise the call/jump from the crt_arch.h asm may not resolve correctly without -Bsymbolic-functions.
* remove initializers for decoded aux/dyn arrays in dynamic linker (Rich Felker, 2015-04-13; 1 file, -5/+5)
  the zero initialization is redundant since decode_vec does its own clearing, and it increases the risk that buggy compilers will generate calls to memset. as long as symbols are bound at ld time, such a call will not break anything, but it may be desirable to turn off ld-time binding in the future.
* allow libc itself to be built with stack protector enabled (Rich Felker, 2015-04-13; 3 files, -1/+26)
  this was already essentially possible as a result of the previous commits changing the dynamic linker/thread pointer bootstrap process. this commit mainly adds build system infrastructure: configure no longer attempts to disable stack protector. instead it simply determines how to do so, so that the makefile can disable stack protector for a few translation units used during early startup.

  stack protector is also disabled for memcpy and memset since compilers (incorrectly) generate calls to them on some archs to implement struct initialization and assignment, and such calls may creep into early initialization.

  no explicit attempt to enable stack protector is made by configure at this time; any stack protector option supported by the compiler can be passed to configure in CFLAGS, and if the compiler uses stack protector by default, this default is respected.
* remove remnants of support for running in no-thread-pointer mode (Rich Felker, 2015-04-13; 10 files, -32/+13)
  since 1.1.0, musl has nominally required a thread pointer to be setup. most of the remaining code that was checking for its availability was doing so for the sake of being usable by the dynamic linker. as of commit 71f099cb7db821c51d8f39dfac622c61e54d794c, this is no longer necessary; the thread pointer is now valid before any libc code (outside of dynamic linker bootstrap functions) runs.

  this commit essentially concludes "phase 3" of the "transition path for removing lazy init of thread pointer" project that began during the 1.1.0 release cycle.
* move thread pointer setup to beginning of dynamic linker stage 3 (Rich Felker, 2015-04-13; 1 file, -8/+23)
  this allows the dynamic linker itself to run with a valid thread pointer, which is a prerequisite for stack protector on archs where the ssp canary is stored in TLS. it will also allow us to remove some remaining runtime checks for whether the thread pointer is valid.

  as long as the application and its libraries do not require additional size or alignment, this early thread pointer will be kept and reused at runtime. otherwise, a new static TLS block is allocated after library loading has finished and the thread pointer is switched over.
* stabilize dynamic linker's layout of static TLS (Rich Felker, 2015-04-13; 1 file, -9/+6)
  previously, the layout of the static TLS block was perturbed by the size of the dtv; dtv size increasing from 0 to 1 perturbed both TLS arch types, and the TLS-above-TP type's layout was perturbed by the specific number of dtv slots (libraries with TLS). this behavior made it virtually impossible to setup a tentative thread pointer address before loading libraries and keep it unchanged as long as the libraries' TLS size/alignment requirements fit.

  the new code fixes the location of the dtv and pthread structure at opposite ends of the static TLS block so that they will not move unless size or alignment changes.
* allow i386 __set_thread_area to be called more than once (Rich Felker, 2015-04-13; 1 file, -1/+5)
  previously a new GDT slot was requested, even if one had already been obtained by a previous call. instead extract the old slot number from GS and reuse it if it was already set. the formula (GS-3)/8 for the slot number automatically yields -1 (request for new slot) if GS is zero (unset).
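A small sketch of the selector arithmetic described above: a ring-3 i386 GDT selector is (index<<3)|3, so the index is recovered as (GS-3)/8, and a zero (unset) GS naturally maps to -1, the "allocate a new entry" request.

    #include <stdio.h>

    static int gs_to_slot(unsigned gs)
    {
        /* arithmetic shift, as the asm effectively does: (0-3)>>3 == -1,
           whereas C's truncating division would give (0-3)/8 == 0 */
        return ((int)gs - 3) >> 3;
    }

    int main(void)
    {
        printf("GS=0        -> slot %d (request new)\n", gs_to_slot(0));
        printf("GS=(6<<3)|3 -> slot %d (reuse)\n", gs_to_slot((6 << 3) | 3));
        return 0;
    }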
* dynamic linker bootstrap overhaul (Rich Felker, 2015-04-13; 37 files, -909/+627)
  this overhaul further reduces the amount of arch-specific code needed by the dynamic linker and removes a number of assumptions, including:
  - that symbolic function references inside libc are bound at link time via the linker option -Bsymbolic-functions.
  - that libc functions used by the dynamic linker do not require access to data symbols.
  - that static/internal function calls and data accesses can be made without performing any relocations, or that arch-specific startup code handled any such relocations needed.

  removing these assumptions paves the way for allowing libc.so itself to be built with stack protector (among other things), and is achieved by a three-stage bootstrap process:
  1. relative relocations are processed with a flat function.
  2. symbolic relocations are processed with no external calls/data.
  3. main program and dependency libs are processed with a fully-functional libc/ldso.

  reduction in arch-specific code is achieved through the following:
  - crt_arch.h, used for generating crt1.o, now provides the entry point for the dynamic linker too.
  - asm is no longer responsible for skipping the beginning of argv[] when ldso is invoked as a command.
  - the functionality previously provided by __reloc_self for heavily GOT-dependent RISC archs is now the arch-agnostic stage-1.
  - arch-specific relocation type codes are mapped directly as macros rather than via an inline translation function/switch statement.
* remove mismatched arguments from vmlock function definitions (Rich Felker, 2015-04-11; 1 file, -2/+2)
  commit f08ab9e61a147630497198fe3239149275c0a3f4 introduced these accidentally as remnants of some work I tried that did not work out.
* apply vmlock wait to __unmapself in pthread_exit (Rich Felker, 2015-04-10; 1 file, -0/+4)
* redesign and simplify vmlock system (Rich Felker, 2015-04-10; 8 files, -45/+29)
  this global lock allows certain unlock-type primitives to exclude mmap/munmap operations which could change the identity of virtual addresses while references to them still exist. the original design mistakenly assumed mmap/munmap would conversely need to exclude the same operations which exclude mmap/munmap, so the vmlock was implemented as a sort of 'symmetric recursive rwlock'. this turned out to be unnecessary.

  commit 25d12fc0fc51f1fae0f85b4649a6463eb805aa8f already shortened the interval during which mmap/munmap held their side of the lock, but left the inappropriate lock design and some inefficiency.

  the new design uses a separate function, __vm_wait, which does not hold any lock itself and only waits for lock users which were already present when it was called to release the lock. this is sufficient because of the way operations that need to be excluded are sequenced: the "unlock-type" operations using the vmlock need only block mmap/munmap operations that are precipitated by (and thus sequenced after) the atomic-unlock they perform while holding the vmlock. this allows for a spectacular lack of synchronization in the __vm_wait function itself.
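A hedged sketch of the scheme's shape (simplified; the real code uses musl's internal atomics and futex wait/wake rather than C11 atomics and spinning):

    #include <stdatomic.h>

    static atomic_int vm_users;

    /* unlock-type primitives: announce that they still rely on the identity
       of a shared mapping's virtual addresses */
    static void vm_lock(void)   { atomic_fetch_add(&vm_users, 1); }
    static void vm_unlock(void) { atomic_fetch_sub(&vm_users, 1); }

    /* mmap/munmap side: takes no lock, just waits for the users present at
       the time of the call to drain; later arrivals are irrelevant because
       the operations to be excluded are sequenced after the atomic unlock
       performed while the count is held */
    static void vm_wait(void)
    {
        while (atomic_load(&vm_users))
            ;   /* real code futex-waits instead of spinning */
    }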
* optimize out setting up robust list with kernel when not needed (Rich Felker, 2015-04-10; 4 files, -7/+8)
  as a result of commit 12e1e324683a1d381b7f15dd36c99b37dd44d940, kernel processing of the robust list is only needed for process-shared mutexes. previously the first attempt to lock any owner-tracked mutex resulted in robust list initialization and a set_robust_list syscall. this is no longer necessary, and since the kernel's record of the robust list must now be cleared at thread exit time for detached threads, optimizing it out is more worthwhile than before too.
* process robust list in pthread_exit to fix detached thread use-after-unmap (Rich Felker, 2015-04-10; 2 files, -26/+27)
  the robust list head lies in the thread structure, which is unmapped before exit for detached threads. this leaves the kernel unable to process the exiting thread's robust list, and with a dangling pointer which may happen to point to new unrelated data at the time the kernel processes it.

  userspace processing of the robust list was already needed for non-pshared robust mutexes in order to perform private futex wakes rather than the shared ones the kernel would do, but it was conditional on linking pthread_mutexattr_setrobust and did not bother processing the pshared mutexes in the list, which requires additional logic for the robust list pending slot in case pthread_exit is interrupted by asynchronous process termination.

  the new robust list processing code is linked unconditionally (inlined in pthread_exit), handles both private and shared mutexes, and also removes the kernel's reference to the robust list before unmapping and exit if the exiting thread is detached.
* fix possible clobbering of syscall return values on mips (Rich Felker, 2015-04-07; 1 file, -3/+6)
  depending on the compiler's interpretation of __asm__ register names for register class objects, it may be possible for the return value in r2 to be clobbered by the function call to __stat_fix. I have not observed any such breakage in normal builds and suspect it only happens with -O0 or other unusual build options, but since there's an ambiguity as to the semantics of this feature, it's best to use an explicit temporary to avoid the issue. based on reporting and patch by Eugene.
* fix getdelim to set the error indicator on all failures (Szabolcs Nagy, 2015-04-04; 1 file, -2/+5)
* fix rpath string memory leak on failed dlopen (Rich Felker, 2015-04-04; 1 file, -0/+2)
  when dlopen fails, all partially-loaded libraries need to be unmapped and freed. any of these libraries using an rpath with $ORIGIN expansion may have an allocated string for the expanded rpath; previously, this string was not freed when freeing the library data structures.
* halt dynamic linker library search on errors resolving $ORIGIN in rpath (Rich Felker, 2015-04-03; 1 file, -8/+18)
  this change hardens the dynamic linker against the possibility of loading the wrong library due to inability to expand $ORIGIN in rpath. hard failures such as excessively long paths or absence of /proc (when resolving /proc/self/exe for the main executable's origin) do not stop the path search, but memory allocation failures and any other potentially transient failures do.

  to implement this change, the meaning of the return value of the fixup_rpath function is changed. returning zero no longer indicates that the dso's rpath string pointer is non-null; instead, the caller needs to check. a return value of -1 indicates a failure that should stop further path search.
* remove macro definition of longjmp from setjmp.h (Rich Felker, 2015-04-01; 1 file, -1/+0)
  the C standard specifies that setjmp is a macro, but longjmp is a normal function. a macro version of it would be permitted (albeit useless) for C (not C++), but would have to be a function-like macro, not an object-like one.
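The distinction in a couple of lines: setjmp must be usable as a macro, but longjmp is an ordinary function, so for example its address can be taken.

    #include <setjmp.h>

    /* must compile: longjmp is a real function with external linkage */
    void (*jump_fn)(jmp_buf, int) = longjmp;

    int main(void) { return jump_fn ? 0 : 1; }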
* harden dynamic linker library path search (Rich Felker, 2015-04-01; 1 file, -5/+16)
  transient errors during the path search should not allow the search to continue and possibly open the wrong file. this patch eliminates most conditions where that could happen, but there is still a possibility that $ORIGIN-based rpath processing will have an allocation failure, causing the search to skip such a path. fixing this is left as a separate task.

  a small bug where overly-long path components caused an infinite loop rather than being skipped/ignored is also fixed.
* move O_PATH definition back to arch bits (Rich Felker, 2015-04-01; 10 files, -3/+11)
  while it's the same for all presently supported archs, it differs at least on sparc, and conceptually it's no less arch-specific than the other O_* macros. O_SEARCH and O_EXEC are still defined in terms of O_PATH in the main fcntl.h.
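The relationship mentioned in the last sentence, roughly as it would appear in the generic header (a sketch; exact header layout may differ, and the guards here are only so the snippet compiles standalone):

    #define _GNU_SOURCE
    #include <fcntl.h>

    /* arch bits provide O_PATH; the generic fcntl.h then defines the POSIX
       names in terms of it */
    #ifndef O_SEARCH
    #define O_SEARCH O_PATH
    #endif
    #ifndef O_EXEC
    #define O_EXEC   O_PATH
    #endif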
* aarch64: remove duplicate macro definitions in bits/fcntl.h (Rich Felker, 2015-04-01; 1 file, -3/+0)
* aarch64: fix definition of sem_nsems in semid_ds structure (Rich Felker, 2015-04-01; 1 file, -1/+7)
  POSIX requires the sem_nsems member to have type unsigned short. we have to work around the incorrect kernel type using matching endian-specific padding.
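A hedged sketch of the workaround's shape (member set abbreviated): the kernel field is wider than POSIX's unsigned short, so explicit padding goes on whichever side the unused bytes fall for the arch's endianness.

    #include <endian.h>

    struct semid_ds_sketch {
        /* ... preceding members (sem_perm, timestamps, ...) ... */
    #if __BYTE_ORDER == __LITTLE_ENDIAN
        unsigned short sem_nsems;
        char __sem_nsems_pad[sizeof(long) - sizeof(short)];
    #else
        char __sem_nsems_pad[sizeof(long) - sizeof(short)];
        unsigned short sem_nsems;
    #endif
        /* ... trailing members ... */
    };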
* aarch64: fix namespace pollution in bits/shm.h (Szabolcs Nagy, 2015-04-01; 1 file, -2/+2)
  The shm_info struct is a gnu extension and some of its members do not have shm* prefix. This is worked around in sys/shm.h by macros, but aarch64 didn't use those.
* release 1.1.8 (tag v1.1.8) (Rich Felker, 2015-03-29; 2 files, -1/+16)
* regex: fix character class repetitions (Szabolcs Nagy, 2015-03-27; 1 file, -0/+5)
  Internally regcomp needs to copy some iteration nodes before translating the AST into TNFA representation. Literal nodes were not copied correctly: the class type and list of negated class types were not copied, so classes were ignored (in the non-negated case an ignored char class caused the literal to match everything). This affects iterations when the upper bound is finite and larger than one, or the lower bound is larger than one. So e.g. the EREs

      [[:digit:]]{2}
      [^[:space:]ab]{1,4}

  were treated as

      .{2}
      [^ab]{1,4}

  The fix is done with minimal source modification to copy the necessary fields, but the AST preparation and node handling code of tre will need to be cleaned up for clarity.
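A small test program exercising the first of the EREs listed above; with the bug, the "ab" case also matched, because the ignored class made the literal match any character.

    #include <regex.h>
    #include <stdio.h>

    int main(void)
    {
        regex_t re;
        if (regcomp(&re, "^[[:digit:]]{2}$", REG_EXTENDED | REG_NOSUB))
            return 1;
        printf("\"42\" -> %s\n", regexec(&re, "42", 0, 0, 0) ? "no match" : "match");
        printf("\"ab\" -> %s\n", regexec(&re, "ab", 0, 0, 0) ? "no match" : "match");
        regfree(&re);
        return 0;
    }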