path: root/src
...
* rewrite __synccall in terms of global thread list (Rich Felker, 2019-02-16, 3 files, -124/+59)

  the __synccall mechanism provides stop-the-world synchronous execution of a callback in all threads of the process. it is used to implement multi-threaded setuid/setgid operations, since Linux lacks them at the kernel level, and for some other less-critical purposes.

  this change eliminates dependency on /proc/self/task to determine the set of live threads, which in addition to being an unwanted dependency and a potential point of resource-exhaustion failure, turned out to be inaccurate. test cases provided by Alexey Izbyshev showed that it could fail to reflect newly created threads. due to how the presignaling phase worked, this usually yielded a deadlock if hit, but in the worst case it could also result in threads being silently missed (allowed to continue running without executing the callback).
* track all live threads in an AS-safe, fully-consistent linked list (Rich Felker, 2019-02-15, 7 files, -43/+94)

  the hard problem here is unlinking threads from a list when they exit without creating a window of inconsistency where the kernel task for a thread still exists and is still executing instructions in userspace, but is not reflected in the list.

  the magic solution here is getting rid of per-thread exit futex addresses (set_tid_address), and instead using the exit futex to unlock the global thread list. since pthread_join can no longer see the thread enter a detach_state of EXITED (which depended on the exit futex address pointing to the detach_state), it must now observe the unlocking of the thread list lock before it can unmap the joined thread and return. it doesn't actually have to take the lock. for this, a __tl_sync primitive is offered, with a signature that will allow it to be enhanced for quick return even under contention on the lock, if needed. for now, the exiting thread always performs a futex wake on its detach_state. a future change could optimize this out except when there is already a joiner waiting.

  initial/dynamic variants of detached state no longer need to be tracked separately, since the futex address is always set to the global list lock, not a thread-local address that could become invalid on detached thread exit. all detached threads, however, must perform a second sigprocmask syscall to block implementation-internal signals, since locking the thread list with them already blocked is not permissible.

  the arch-independent C version of __unmapself no longer needs to take a lock or set up its own futex address to release the lock, since it must necessarily be called with the thread list lock already held, guaranteeing exclusive access to the temporary stack.

  changes to libc.threads_minus_1 no longer need to be atomic, since they are guarded by the thread list lock. it is largely vestigial at this point, and can be replaced with a cheaper boolean indicating whether the process is multithreaded at some point in the future.
* always block signals for starting new threads, refactor start args (Rich Felker, 2019-02-15, 4 files, -68/+56)

  whether signals need to be blocked at thread start, and whether unblocking is necessary in the entry point function, has historically depended on intricacies of the cancellation design and on whether there are scheduling operations to perform on the new thread before its successful creation can be committed. future changes to track an AS-safe list of live threads will require signals to be blocked whenever changes are made to the list, so ...

  prior to commits b8742f32602add243ee2ce74d804015463726899 and 40bae2d32fd6f3ffea437fa745ad38a1fe77b27e, a signal mask for the entry function to restore was part of the pthread structure. it was removed to trim down the size of the structure, which both saved a small amount of stack space and improved code generation on archs where small immediate displacements are less costly than arbitrary ones, by limiting the range of offsets between the base of the thread structure, its members, and the thread pointer. these commits moved the saved mask to a special structure used only when special scheduling was needed, in which case the pthread_create caller and new thread had to synchronize with each other and could use this memory to pass a mask.

  this commit partially reverts the above two commits, but instead of putting the mask back in the pthread structure, it moves all "start argument" members out of the pthread structure, trimming it down further, and puts them in a separate structure passed on the new thread's stack. the code path for explicit scheduling of the new thread is also changed to synchronize with the calling thread in such a way to avoid spurious futex wakes.
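
  As a rough sketch of the "start argument" structure idea described above (field names invented here; musl's actual layout differs in detail), everything the new thread needs only during startup can live in a small struct placed on its own stack rather than occupying the pthread structure permanently:

      #include <signal.h>

      struct start_args_sketch {
          void *(*start_func)(void *);  /* user start routine */
          void *start_arg;              /* its argument */
          volatile int control;         /* sync word for the explicit-scheduling path */
          sigset_t mask;                /* signal mask for the entry point to restore */
      };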
* for SIGEV_THREAD timer threads, replace signal handler with sigwaitinfo (Rich Felker, 2019-02-15, 2 files, -21/+16)

  this eliminates some ugly hacks that were repurposing the start function and start argument fields in the pthread structure for timer use, and the need to longjmp out of a signal handler.
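
  A minimal sketch of the sigwaitinfo-based wait loop, assuming a dedicated realtime signal (SIGRTMIN stands in here for the implementation-internal timer signal; this is illustrative, not musl's timer code):

      #include <pthread.h>
      #include <signal.h>

      static void *timer_thread_sketch(void (*notify)(union sigval), union sigval val)
      {
          sigset_t set;
          siginfo_t si;
          sigemptyset(&set);
          sigaddset(&set, SIGRTMIN);            /* assumed internal timer signal */
          pthread_sigmask(SIG_BLOCK, &set, 0);  /* receive it synchronously instead */
          for (;;) {
              if (sigwaitinfo(&set, &si) < 0) continue;  /* EINTR: retry */
              if (si.si_code == SI_TIMER) notify(val);   /* run the notification */
          }
          return 0;
      }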
* defer free of thread-local dlerror buffers from inconsistent context (Rich Felker, 2019-02-15, 1 file, -2/+20)

  __dl_thread_cleanup is called from the context of an exiting thread that is not in a consistent state valid for calling application code. since commit c9f415d7ea2dace5bf77f6518b6afc36bb7a5732, it's possible (and supported usage) for the allocator to have been replaced by the application, so __dl_thread_cleanup can no longer call free.

  instead, reuse the message buffer as a linked-list pointer, and queue it to be freed the next time any dynamic linker error message is generated.
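
  A self-contained sketch of the deferral technique (names invented; musl's real queue lives inside the dynamic linker): the buffer's own first bytes hold the link, so queuing needs no allocation, and the drain runs later from a context where calling a possibly application-provided free() is safe.

      #include <stdatomic.h>
      #include <stdlib.h>

      static _Atomic(void *) freebuf_queue;

      static void defer_free(void *buf)
      {
          void *next = atomic_load(&freebuf_queue);
          do *(void **)buf = next;                 /* reuse the buffer itself as the link */
          while (!atomic_compare_exchange_weak(&freebuf_queue, &next, buf));
      }

      static void drain_deferred(void)             /* called later, from a safe context */
      {
          void *p = atomic_exchange(&freebuf_queue, NULL);
          while (p) {
              void *next = *(void **)p;
              free(p);
              p = next;
          }
      }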
* fix behavior of gets when input line contains a null byte (Rich Felker, 2019-02-13, 1 file, -3/+8)

  the way gets was implemented in terms of fgets, it used the location of the null termination to determine where to find and remove the newline, if any. an embedded null byte prevented this from working.

  this also fixes a one-byte buffer overflow, whereby when gets read an N-byte line (not counting newline), it would store two null terminators for a total of N+2 bytes. it's unlikely that anyone would care that a function whose use is pretty much inherently a buffer overflow writes too much, but it could break the only possible correct uses of this function, in conjunction with input of known format from a trusted/same-privilege-domain source, where the buffer length may have been selected to exactly match a line length contract.

  there seems to be no correct way to implement gets in terms of a single call to fgets or scanf, and using multiple calls would require explicit locking, so we might as well just write the logic out explicitly character-at-a-time. this isn't fast, but nobody cares if a catastrophically unsafe function that's so bad it was removed from the C language is fast.
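
  A stand-alone illustration of the character-at-a-time logic (using getchar for brevity; musl's real version works on its FILE internals with the stream locked):

      #include <stdio.h>

      char *gets_sketch(char *s)
      {
          size_t i = 0;
          int c;
          for (;;) {
              c = getchar();
              if (c == '\n' || c == EOF) break;
              s[i++] = c;                /* embedded nul bytes are stored, not special */
          }
          s[i] = 0;                      /* exactly one terminator; newline discarded */
          if (c == EOF && !i) return 0;  /* nothing read before EOF: error */
          return s;
      }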
* redesign robust mutex states to eliminate data races on type field (Rich Felker, 2019-02-12, 4 files, -12/+23)

  in order to implement ENOTRECOVERABLE, the implementation has traditionally used a bit of the mutex type field to indicate that it's recovered after EOWNERDEAD and will go into ENOTRECOVERABLE state if pthread_mutex_consistent is not called before unlocking. while it's only the thread that holds the lock that needs access to this information (except possibly for the sake of pthread_mutex_consistent choosing between EINVAL and EPERM for erroneous calls), the change to the type field is formally a data race with all other threads that perform any operation on the mutex. no individual bits race, and no write races are possible, so things are "okay" in some sense, but it's still not good.

  this patch moves the recovery/consistency state to the mutex owner/lock field which is rightfully mutable. bit 30, the same bit the kernel uses with a zero owner to indicate that the previous owner died holding the lock, is now used with a nonzero owner to indicate that the mutex is held but has not yet been marked consistent. note that the kernel ABI also reserves bit 29 not to appear in any tid, so the sentinel value we use for ENOTRECOVERABLE, 0x7fffffff, does not clash with any tid plus bit 30.
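
  A compact summary of the resulting owner/lock-field encoding (values taken from the text above; the macro names are illustrative, not musl's):

      #define OWNER_DEAD_BIT  0x40000000  /* bit 30, also the kernel's EOWNERDEAD marking */
      #define NOT_RECOVERABLE 0x7fffffff  /* sentinel; bit 29 never appears in a tid */

      /* lock == tid                   held, consistent
       * lock == tid | OWNER_DEAD_BIT  held after EOWNERDEAD, not yet marked consistent
       * lock == OWNER_DEAD_BIT        previous owner died; lock not currently held
       * lock == NOT_RECOVERABLE       mutex is permanently unrecoverable */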
* fail fdopendir for O_PATH file descriptors (Rich Felker, 2019-02-07, 1 file, -0/+4)

  fdopendir is specified to fail with EBADF if the file descriptor passed is not open for reading. while O_PATH is an extension and arguably exempt from this requirement, it's used, albeit incompletely, to implement O_SEARCH, and fdopendir should fail when passed an O_SEARCH file descriptor. the new check is performed after fstat so that we don't have to consider the possibility that the fd is invalid.

  an alternate solution would be attempting to pre-fill the buffer using getdents, which would fail with EBADF for us, but that seems more complex and error-prone and involves either code duplication or refactoring, so the simple fix with an additional inexpensive syscall is what I've made for now.
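
  A hedged sketch of the added check, assuming the descriptor's flags can be read back with F_GETFL (per the text above, this runs after fstat, so the fd is already known to be valid):

      #define _GNU_SOURCE
      #include <fcntl.h>
      #include <errno.h>

      static int reject_o_path(int fd)
      {
          if (fcntl(fd, F_GETFL) & O_PATH) {
              errno = EBADF;   /* O_SEARCH is implemented via O_PATH; not open for reading */
              return -1;
          }
          return 0;
      }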
* locale: ensure dcngettext() preserves errno (A. Wilcox, 2019-02-07, 1 file, -0/+3)

  Some packages call gettext to format a message to be sent to perror. If the currently set user locale points to a non-existent .mo file, the open via __map_file in dcngettext will set errno to ENOENT.

  Maintainer's notes: Non-modification of errno is a documented part of the interface contract for the GNU version of this function and likely other versions. The issue being fixed here seems to be a regression from commit 1b52863e244ecee5b5935b6d36bb9e6efe84c035, which enabled setting of errno from __map_file.
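
  The fix amounts to the usual save/restore pattern around the part that may clobber errno; a minimal sketch (the catalog-lookup stub is hypothetical):

      #include <errno.h>
      #include <stddef.h>

      /* stand-in for the catalog lookup; the real one may set ENOENT */
      static const char *lookup_in_catalog(const char *msgid) { return NULL; }

      static const char *translate(const char *msgid)
      {
          int saved_errno = errno;
          const char *s = lookup_in_catalog(msgid);
          if (!s) s = msgid;           /* fall back to the untranslated string */
          errno = saved_errno;         /* a later perror() still sees the original error */
          return s;
      }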
* fix call to __pthread_tsd_run_dtors with too many arguments (Rich Felker, 2019-01-21, 1 file, -1/+1)

  commit a6054e3c94aa0491d7366e4b05ae0d73f661bfe2 removed the argument, making it a constraint violation to pass one. caught by cparser/firm; other compilers seem to ignore it.
* fix unintended linking dependency of pthread_key_create on __synccall (Rich Felker, 2019-01-16, 1 file, -0/+6)

  commit 84d061d5a31c9c773e29e1e2b1ffe8cb9557bc58 attempted to do this already, but omitted from pthread_key_create.c the weak definition of __pthread_key_delete_synccall, so that the definition provided by pthread_key_delete.c was always pulled in.

  based on patch by Markus Wichmann, but with a weak alias rather than weak reference for consistency/policy about dependence on tooling features.
* halt getspnam[_r] search on error accessing TCB shadow (Rich Felker, 2018-12-28, 1 file, -0/+2)

  fallback to /etc/shadow should happen only when the entry is not found in the TCB shadow. otherwise transient errors or permission errors can cause inconsistent results.
* don't set errno or return an error when getspnam[_r] finds no entry (Rich Felker, 2018-12-28, 2 files, -3/+9)

  this case is specified as success with a null result, rather than an error, and errno is not to be set on success.
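
  For callers, the three outcomes are now distinguished like this (a small usage sketch):

      #include <shadow.h>
      #include <stdio.h>

      int report(const char *name)
      {
          struct spwd sp, *res;
          char buf[1024];
          int err = getspnam_r(name, &sp, buf, sizeof buf, &res);
          if (err) {                       /* real error: nonzero return */
              fprintf(stderr, "getspnam_r failed for %s\n", name);
              return -1;
          }
          if (!res) {                      /* success, but no entry for this name */
              printf("no entry for %s\n", name);
              return 0;
          }
          printf("%s has a shadow entry\n", name);
          return 1;
      }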
* make sem_wait and sem_timedwait interruptible by signals (Rich Felker, 2018-12-19, 1 file, -1/+1)

  this reverts commit c0ed5a201b2bdb6d1896064bec0020c9973db0a1, which was based on a mistaken reading of POSIX due to inconsistency between the description (which requires return upon interruption by a signal) and the errors list (which wrongly lists EINTR as "may fail").

  since the previously-introduced behavior was a workaround for an old kernel bug to ensure safety of correct programs that were not hardened against the bug, an effort has been made to preserve it for programs which do not use interrupting signal handlers. the stage for this was set in commit a63c0104e496f7ba78b64be3cd299b41e8cd427f, which makes the futex __timedwait backend suppress EINTR if it's seen when no interrupting signal handlers have been installed.

  based loosely on a patch submitted by Orivej Desh, but with unnecessary additional changes removed.
* don't fail pthread_sigmask/sigprocmask on invalid how when set is null (Rich Felker, 2018-12-18, 1 file, -1/+1)

  the resolution of Austin Group issue #1132 changes the requirement to fail so that it only applies when the set argument (new mask) is non-null. this change was made for consistency with the description, which specified "if set is a null pointer, the value of the argument how is not significant".
* add __timedwait backend workaround for old kernels where futex EINTRs (Rich Felker, 2018-12-18, 3 files, -0/+15)

  prior to linux 2.6.22, futex wait could fail with EINTR even for non-interrupting (SA_RESTART) signals. this was no problem provided the caller simply restarted the wait, but sem_[timed]wait is required by POSIX to return when interrupted by a signal. commit a113434cd68ce30642c4995b1caadcd084be6f09 introduced this behavior, and commit c0ed5a201b2bdb6d1896064bec0020c9973db0a1 reverted it based on a mistaken belief that it was not required. this belief stems from a bug in the specification: the description requires the function to return when interrupted, but the errors section marks EINTR as a "may fail" condition rather than a "shall fail" one.

  since there does seem to be significant value in the change made in commit c0ed5a201b2bdb6d1896064bec0020c9973db0a1, making it so that programs that call sem_wait without checking for EINTR don't silently make forward progress without obtaining the semaphore or treat it as a fatal error and abort, add a behind-the-scenes mechanism in the __timedwait backend to suppress EINTR in programs that have never installed interrupting signal handlers, and have sigaction track and report this state. this way the semaphore code is not cluttered by workarounds and can be updated (to be done in next commit) to reflect the high-level logic for conforming behavior.

  these changes are based loosely on a patch by Markus Wichmann, with the main changes being atomic update to flag object and moving the workaround from sem_timedwait to the __timedwait futex backend.
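
  Schematically, the suppression rule in the futex wait backend looks like this (the flag name is invented; per the text above, it is sigaction that records whether any interrupting handler was ever installed):

      #include <errno.h>

      static volatile int has_interrupting_handlers;  /* would be set by sigaction() */

      /* a return of 0 means "treat as a spurious wake"; the caller loops and retries */
      static int map_futex_eintr(int err)
      {
          /* pre-2.6.22 kernels could return EINTR even for SA_RESTART signals;
           * hide it unless the program actually uses interrupting handlers */
          if (err == EINTR && !has_interrupting_handlers) return 0;
          return err;
      }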
* on failed aio submission, set aiocb error and return value (Rich Felker, 2018-12-11, 1 file, -2/+4)

  it's not clear whether this is required, but it seems arguable that it should happen. for example aio_suspend is supposed to return immediately if any of the operations has "completed", which includes ending with an error status asynchronously and might also be interpreted to include doing so synchronously.
* don't create aio queue/map structures for invalid file descriptors (Rich Felker, 2018-12-11, 1 file, -4/+8)

  the map structures in particular are permanent once created, and thus a large number of aio function calls with invalid file descriptors could exhaust memory, whereas, assuming normal resource limits, only a very small number of entries ever need to be allocated. check validity of the fd before allocating anything new, so that allocation of large amounts of memory is only possible when resource limits have been increased and a large number of files are actually open.

  this change also improves error reporting for bad file descriptors to happen at the time the aio submission call is made, as opposed to asynchronously.
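
  One cheap way to perform such an up-front validity check is a side-effect-free fcntl query, sketched here (the exact query used in the real change may differ):

      #include <fcntl.h>

      static int fd_is_valid(int fd)
      {
          return fcntl(fd, F_GETFD) != -1;   /* fails with EBADF for bad descriptors */
      }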
* move aio queue allocation from io thread to submitting thread (Rich Felker, 2018-12-11, 1 file, -16/+21)

  since commit c9f415d7ea2dace5bf77f6518b6afc36bb7a5732, it has been possible that the allocator is application-provided code, which cannot necessarily run safely on io thread stacks, and which should not be able to see the existence of io threads, since they are an implementation detail.

  instead of having the io thread request and possibly allocate its queue (and the map structures leading to it), make the submitting thread responsible for this, and pass the queue pointer into the io thread via its args structure. this eliminates the only early error case in io threads, making it no longer necessary to pass an error status back to the submitting thread via the args structure.
* fix and future-proof against stack overflow in aio io threads (Rich Felker, 2018-12-09, 1 file, -1/+12)

  aio threads not using SIGEV_THREAD notification are created with small stacks and no guard page, which is possible since they only run the code for the requested io operation, not any application code. the motivation is not creating a lot of VMAs.

  however, the io thread needs to be able to receive a cancellation signal in case aio_cancel (implemented via pthread_cancel) is called. this requires sufficient stack space for a signal frame, which PTHREAD_STACK_MIN does not necessarily include. in principle MINSIGSTKSZ from signal.h should give us sufficient space for a signal frame, but the value is incorrect on some existing archs due to kernel addition of new vector register support without consideration for impact on ABI. some powerpc models exceed MINSIGSTKSZ by about 0.5k, and x86[_64] with AVX-512 can exceed it by up to about 1.5k. so use MINSIGSTKSZ+2048 to allow for the discrepancy plus some working space.

  unfortunately, it's possible that signal frame sizes could continue to grow, and some archs (aarch64) explicitly specify that they may. passing of a runtime value for MINSIGSTKSZ via AT_MINSIGSTKSZ in the aux vector was added to aarch64 linux, and presumably other archs will use this mechanism to report if they further increase the signal frame size. when AT_MINSIGSTKSZ is present, assume it's correct, so that we only need a small amount of working space in addition to it; in this case just add 512.
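
  The sizing rule described above, sketched as a helper (the function name is invented; the AT_MINSIGSTKSZ fallback define is only there because older headers may lack it):

      #include <stddef.h>
      #include <signal.h>
      #include <sys/auxv.h>

      #ifndef AT_MINSIGSTKSZ
      #define AT_MINSIGSTKSZ 51
      #endif

      static size_t io_stack_size(void)
      {
          size_t kern = getauxval(AT_MINSIGSTKSZ);
          if (kern) return kern + 512;       /* trust the kernel's figure, small margin */
          return MINSIGSTKSZ + 2048;         /* header constant may be stale; pad it */
      }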
* add namespace-safe version of getauxval for internal use (Rich Felker, 2018-12-09, 2 files, -1/+13)
* fix wordexp not to read past end of string ending with lone backslash (Rich Felker, 2018-12-09, 1 file, -1/+1)
* fix memccpy to not access buffer past given size (Quentin Rameau, 2018-12-02, 1 file, -1/+1)

  memccpy would return a pointer over the given size when c is not found in the source buffer and n reaches 0.
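
  A plain reference version showing the fixed behavior (stop after copying c or after exactly n bytes, never touching byte n when c is absent):

      #include <stddef.h>

      void *memccpy_ref(void *restrict dest, const void *restrict src, int c, size_t n)
      {
          unsigned char *d = dest;
          const unsigned char *s = src;
          while (n--) {
              *d = *s++;
              if (*d++ == (unsigned char)c) return d;  /* pointer just past the copy of c */
          }
          return NULL;                                 /* c not found within n bytes */
      }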
* optimize two-way strstr and memmem bad character shift (Rich Felker, 2018-11-08, 2 files, -2/+2)

  first, the condition (mem && k < p) is redundant, because mem being nonzero implies the needle is periodic with period exactly p, in which case any byte that appears in the needle must appear in the last p bytes of the needle, bounding the shift (k) by p.

  second, the whole point of replacing the shift k by mem (=l-p) is to prevent shifting by less than mem when discarding the memory on shift, in which case linear time could not be guaranteed. but as written, the check also replaced shifts greater than mem by mem, reducing the benefit of the shift. there is no possible benefit to this reduction of the shift; since mem is being cleared, the full shift is valid and more optimal. so only replace the shift by mem when it would be less than mem.
* fix regression in setlocale for LC_ALL with per-category setting (Rich Felker, 2018-11-02, 1 file, -1/+1)

  commit d88e5dfa8b989dafff4b748bfb3cba3512c8482e inadvertently changed the argument passed to __get_locale from part (the current ;-delimited component) to name (the full string).
* fix failure to flush stderr when fflush(0) is called (Rich Felker, 2018-11-02, 1 file, -1/+4)

  commit ddc947eda311331959c73dbc4491afcfe2326346 fixed the corresponding bug for exit which was introduced when commit 0b80a7b0404b6e49b0b724e3e3fe0ed5af3b08ef added support for caller-provided buffers, making it possible for stderr to be a buffered stream.
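
  Schematically, the fflush(NULL) path after the fix flushes the standard output streams explicitly and then walks the open file list (illustrative shape only; the real code locks each FILE and checks for pending output before flushing it):

      #include <stdio.h>

      int fflush_all_sketch(void)
      {
          int r = fflush(stdout);
          r |= fflush(stderr);     /* the fix: stderr may be a buffered stream too */
          /* ... then walk the open file list and flush each stream that
           *     has buffered output ... */
          return r;
      }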
* fix deadlock and buffered data loss race in fclose (Rich Felker, 2018-11-02, 1 file, -13/+19)

  fflush(NULL) and __stdio_exit lock individual FILEs while holding the open file list lock to walk the list. since fclose first locked the FILE to be closed, then the ofl lock, it could deadlock with these functions.

  also, because fclose removed the FILE to be closed from the open file list before flushing and closing it, a concurrent fclose or exit could complete successfully before fclose flushed the FILE it was closing, resulting in data loss.

  reorder the body of fclose to first flush and close the file, then remove it from the open file list only after unlocking it. this creates a window where consumers of the open file list can see dead FILE objects, but in the absence of undefined behavior on the part of the application, such objects will be in an inactive-buffer state and processing them will have no side effects.

  __unlist_locked_file is also moved so that it's performed only for non-permanent files. this change is not necessary, but preserves consistency (and thereby provides safety/hardening) in the case where an application uses one of the standard streams after closing it while holding an explicit lock on it. such usage is of course undefined behavior.
* __libc_start_main: slightly simplify stage2 pointer setup (Alexander Monakov, 2018-11-02, 1 file, -3/+4)

  Use "+r" in the asm instead of implementing a non-transparent copy by applying "0" constraint to the source value. Introduce a typedef for the function type to avoid spelling it out twice.
* remove commented-out debug printf from strstr (Rich Felker, 2018-11-02, 1 file, -1/+0)

  this was leftover from before the initial commit.
* fix spuriously slow check in twoway strstr/memmem cores (Rich Felker, 2018-11-02, 2 files, -2/+2)

  mem0 && mem && ... is redundant since mem can only be nonzero when mem0 is nonzero.
* don't omit setting errno in internal __map_file function (Rich Felker, 2018-10-22, 1 file, -2/+2)

  a caller needs the reason for open (or fstat, albeit unlikely) failure if it's going to make decisions about continuing a path search or similar.
* make the default locale (& a variant) failure-free cases for newlocale (Rich Felker, 2018-10-22, 1 file, -1/+20)

  commit aeeac9ca5490d7d90fe061ab72da446c01ddf746 introduced fail-safe invariants that creating a locale_t object for the C locale or C.UTF-8 locale will always succeed. extend the guarantee to also cover the following:

  - newlocale(LC_ALL_MASK, "", 0)
  - newlocale(LC_ALL_MASK-LC_CTYPE_MASK, "C", 0)

  provided that the LANG/LC_* environment variables have not been changed by the program. these usages are idiomatic for getting the default locale, and for getting a locale that behaves as the C locale except for honoring the default locale's character encoding.
* simplify newlocale and allow failure for explicit locale names (Rich Felker, 2018-10-22, 1 file, -23/+14)

  unify the code paths for allocated and non-allocated locale objects, always using a tmp object. this is necessary to avoid clobbering the base locale object too soon if we allow for the possibility that looking up an explicitly requested locale name may fail, and makes the code simpler and cleaner anyway.

  eliminate the complex and fragile logic for checking whether one of the non-allocated locale objects can be used for the result, and instead just memcmp against each of them.
* remove volatile qualification from category pointers in __locale_struct (Rich Felker, 2018-10-20, 1 file, -1/+1)

  commit 63c188ec42e76ff768e81f6b65b11c68fc43351e missed making this change when switching from atomics to locking for modification of the global locale, leaving access to locale structures unnecessarily burdened with the restrictions of volatile. the volatile qualification was originally added in commit 56fbaa3bbe73f12af2bfbbcf2adb196e6f9fe264.
* adapt setlocale to support possibility of failure (Rich Felker, 2018-10-20, 2 files, -12/+22)

  introduce a new LOC_MAP_FAILED sentinel for errors, since null pointers for a category's locale map indicate the C locale. at this time, __get_locale does not fail, so there should be no functional change by this commit.
* adjust types in FILE struct to make line buffering check less expensive (Rich Felker, 2018-10-18, 1 file, -4/+2)

  the choice of signed char for lbf was a theoretically space-saving hack that was not helping, and was unwantedly expensive. while comparing bytes against a byte-sized member sounds easy, the trick here was that the byte to be compared was unsigned while the lbf member was signed, making it possible to set lbf negative to disable line buffering. however, this imposed a requirement to promote both operands, zero-extending one and sign-extending the other, in order to compare them. to fix this, repurpose the waiters count slot (unused since commit c21f750727515602a9e84f2a190ee8a0a2aeb2a1).

  while we're at it, switch mode (orientation) from signed char to int as well. this makes no semantic difference (its only possible values are -1, 0, and 1) but it might help on archs where byte access is awkward.
* optimize internal putc_unlocked macro used in putc (Rich Felker, 2018-10-18, 1 file, -1/+2)

  to check whether flush due to line buffering is needed, the int-type character argument must be truncated to unsigned char for comparison. if the original value is subsequently passed to __overflow, it must be preserved, adding to register pressure. since it doesn't matter, truncate all uses so the original value is no longer live.
* fix wrong result for putc variants due to operator precedence (Rich Felker, 2018-10-18, 1 file, -1/+1)

  the internal putc_unlocked macro was wrongly returning a meaningless boolean result rather than the written character or EOF. bug was found by reading (very surprising) asm.
* further optimize getc/putc when locking is needed (Rich Felker, 2018-10-18, 2 files, -10/+10)

  check whether the lock is free before loading the calling thread's tid. if so, just use a dummy tid value that cannot compare equal to any actual thread id (because it's one bit wider). this also avoids the need to save the tid and pass it to locking_getc or locking_putc, reducing register pressure.

  this change might slightly hurt the case where the caller already holds the lock, but it does not affect the single-threaded case, and may significantly improve the multi-threaded case, especially on archs where loading the thread pointer is disproportionately expensive like early mips and arm ISA levels. but even on i386 it helps, at least on some machines; I measured roughly a 10-15% improvement.
* use prototype for function pointer in static link libc init barrier (Rich Felker, 2018-10-18, 1 file, -1/+1)

  this is not needed for correctness, but doesn't hurt, and in some cases the compiler may pessimize the call assuming the callee might be variadic when it lacks a prototype.
* fix error in constraints for static link libc init barrier (Rich Felker, 2018-10-18, 1 file, -1/+1)

  commit 4390383b32250a941ec616e8bff6f568a801b1c0 inadvertently used "r" instead of "0" for the input constraint, which only happened to work for the configuration I tested it on because it usually makes sense for the compiler to choose the same input and output register.
* fix build regression due to missing file for putc changes (Rich Felker, 2018-10-18, 1 file, -0/+22)

  commit d664061adb4d7f6647ab2059bc351daa394bf5da inadvertently omitted the new file putc.h.
* bypass indirection through pointer objects to access stdin/out/err (Rich Felker, 2018-10-18, 4 files, -9/+33)

  by ABI, the public stdin/out/err macros use extern pointer objects, and this is necessary to avoid copy relocations that would be expensive and make the size of the FILE structure part of the ABI. however, internally it makes sense to access the underlying FILE objects directly. this avoids both an indirection through the GOT to find the address of the stdin/out/err pointer objects (which can't be computed PC-relative because they may have been moved to the main program by copy relocations) and an indirection through the resulting pointer object.

  in most places this is just a minor optimization, but in the case of getchar and putchar (and the unlocked versions thereof), ipa constant propagation makes all accesses to members of stdin/out PC-relative or GOT-relative, possibly reducing register pressure as well.
* optimize hot paths of putc with manual shrink-wrapping (Rich Felker, 2018-10-17, 3 files, -13/+8)

  this is the analog of commit dd8f02b7dce53d6b1c4282439f1636a2d63bee01, but for putc.
* optimize hot paths of getc with manual shrink-wrapping (Rich Felker, 2018-10-17, 4 files, -15/+30)

  with these changes, in a program that has not created any threads besides the main thread and that has not called f[try]lockfile, getc performs indistinguishably from getc_unlocked. this was measured on several i386 and x86_64 models, and should hold on other archs too simply by the properties of the code generation.

  the case where the caller already holds the lock (via flockfile) is improved significantly as well (40-60% reduction in time on machines tested) and the case where locking is needed is improved somewhat (roughly 10%).

  the key technique used here is forcing the non-hot path out-of-line and enabling it to be a tail call. a static noinline function (conditional on __GNUC__) is used rather than the extern hiddens used elsewhere for this purpose, so that the compiler can choose non-default calling conventions, making it possible to tail-call to a callee that takes more arguments than the caller on archs where arguments are passed on the stack or must have space reserved on the stack for spilling them. the tid could just be reloaded via the thread pointer in locking_getc, but that would be ridiculously expensive on some archs where thread pointer load requires a trap or syscall.
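
  The structural trick, shown on a portable stand-in (not musl's FILE internals): keep the hot path tiny, and push the slow path into a static noinline function that the hot path can tail-call.

      #include <stdio.h>

      #ifdef __GNUC__
      __attribute__((noinline))
      #endif
      static int locking_getc_sketch(FILE *f)
      {
          flockfile(f);                    /* slow path: full (possibly blocking) lock */
          int c = getc_unlocked(f);
          funlockfile(f);
          return c;
      }

      static inline int getc_sketch(FILE *f)
      {
          if (!ftrylockfile(f)) {          /* hot path approximation: lock uncontended */
              int c = getc_unlocked(f);
              funlockfile(f);
              return c;
          }
          return locking_getc_sketch(f);   /* cold path, can become a tail call */
      }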
* document and make explicit desired noinline property for __init_libc (Rich Felker, 2018-10-17, 1 file, -0/+6)

  on multiple occasions I've started to flatten/inline the code in __init_libc, only to rediscover the reason it was not inlined: GCC fails to deallocate its stack (and now, with the changes in commit 4390383b32250a941ec616e8bff6f568a801b1c0, fails to produce a tail call to the stage 2 function; see PR #87639) before calling main if it was inlined.

  document this with a comment and use an explicit noinline attribute if __GNUC__ is defined so that even with CFLAGS that heavily favor inlining it won't get inlined.
* impose barrier between thread pointer setup and use for static linking (Rich Felker, 2018-10-17, 1 file, -0/+13)

  this is the analog of commit 1c84c99913bf1cd47b866ed31e665848a0da84a2 for static linking. unlike with dynamic linking, we don't have symbolic lookup to use as a barrier. use a dummy (target-agnostic) degenerate inline asm fragment instead. this technique has precedent in commit 05ac345f895098657cf44d419b5d572161ebaf43 where it's used for explicit_bzero.

  if it proves problematic in any way, loading the address of the stage 2 function from a pointer object whose address leaks to kernelspace during thread pointer init could be used as an even stronger barrier.
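
  A minimal, target-agnostic sketch of the degenerate-asm barrier idea (types and names invented): passing the function pointer through an empty asm keeps the compiler from proving what it points to, so the call cannot be inlined or reordered across the thread-pointer setup.

      typedef void stage2_fn(long *);

      static void call_via_barrier(stage2_fn *f, long *sp)
      {
          __asm__ ("" : "+r"(f) : : "memory");  /* no code emitted; optimization barrier */
          f(sp);
      }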
* move stdio locking MAYBE_WAITERS definition to stdio_impl.h (Rich Felker, 2018-10-16, 3 files, -4/+2)

  don't repeat definition in two places.
* x86_64: add single instruction fma (Szabolcs Nagy, 2018-10-15, 4 files, -0/+92)

  fma is only available on recent x86_64 cpus and it is much faster than a software fma, so this should be done with a runtime check; however, that requires more changes, so this patch just adds the code so it can be tested when musl is compiled with -mfma or -mfma4.
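
  A minimal sketch of the single-instruction approach when building with -mfma (FMA3); the whole computation, including the single rounding, is done by vfmadd132sd:

      #include <math.h>

      #ifdef __FMA__
      double fma_sketch(double x, double y, double z)
      {
          /* x = x*y + z, computed with one rounding */
          __asm__ ("vfmadd132sd %1, %2, %0" : "+x"(x) : "x"(y), "x"(z));
          return x;
      }
      #endif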
* arm: add single instruction fma (Szabolcs Nagy, 2018-10-15, 2 files, -0/+30)

  vfma is available in the vfpv4 fpu and above, the ACLE standard feature test for double precision hardware fma support is

      __ARM_FEATURE_FMA && __ARM_FP&8

  we need further checks to work around clang bugs (fixed in clang >=7.0)

      && !__SOFTFP__
      because __ARM_FP is defined even with -mfloat-abi=soft

      && !BROKEN_VFP_ASM
      to disable the single precision code when inline asm handling is broken.

  For runtime selection the HWCAP_ARM_VFPv4 hwcap flag can be used, but that requires further work.
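
  A sketch of the double-precision guard described above (the BROKEN_VFP_ASM escape hatch is omitted); under these conditions the compiler's builtin should already lower to a single vfma.f64:

      #include <math.h>

      #if __ARM_FEATURE_FMA && (__ARM_FP & 8) && !__SOFTFP__
      double fma_sketch(double x, double y, double z)
      {
          return __builtin_fma(x, y, z);   /* emits vfma.f64 on vfpv4 and above */
      }
      #endif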