path: root/src
Commit message (Author, Age; Files, Lines changed)
* add alternate backend support for getgrouplist (Josiah Worcester, 2015-03-15; 3 files, -24/+86)
  This completes the alternate backend support that was previously added to the getpw* and getgr* functions. Unlike those, though, it unconditionally queries nscd. Any groups from nscd that aren't in the /etc/group file are added to the returned list, and any that are present in the file are ignored.
  The purpose of this behavior is to provide a view of the group database consistent with what is observed by the getgr* functions. If group memberships reported by nscd were honored when the corresponding group already has a definition in the /etc/group file, the user's getgrouplist-based membership in the group would conflict with their non-membership in the reported gr_mem[] for the group.
  The changes made also make getgrouplist thread-safe and eliminate its clobbering of the global getgrent state.
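  A minimal sketch, in C, of the merge policy described above. The function, its parameters, and the lookup callback are hypothetical; they only illustrate the rule that a gid reported by nscd is kept only when no group with that gid is defined in the group file.

      #include <stddef.h>
      #include <sys/types.h>

      /* Hypothetical illustration: append gids reported by nscd unless the group
       * is already defined in the group file, so the file's view always wins. */
      static size_t merge_nscd_gids(gid_t *list, size_t n, size_t cap,
                                    const gid_t *nscd, size_t nnscd,
                                    int (*defined_in_group_file)(gid_t))
      {
          for (size_t i = 0; i < nnscd; i++) {
              if (defined_in_group_file(nscd[i]))
                  continue;              /* membership comes from the file's gr_mem[] instead */
              if (n < cap)
                  list[n++] = nscd[i];   /* group only known to nscd: add it */
          }
          return n;                      /* new length of the gid list */
      }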
* add aarch64 port (Szabolcs Nagy, 2015-03-11; 17 files, -0/+363)
  This adds complete aarch64 target support including the bigendian subarch. Some of the long double math functions are known to be broken; otherwise interfaces should be fully functional, but at this point consider this port experimental. Initial work on this port was done by Sireesh Tripurari and Kevin Bortis.
* math: add dummy implementations of 128 bit long double functions (Szabolcs Nagy, 2015-03-11; 17 files, -4/+111)
  This is in preparation for the aarch64 port; for now the goal is only to have the long double math symbols available on ld128 platforms. The implementations should be fixed up later once we have proper tests for these functions. Added bigendian handling for ld128 bit manipulations too.
* math: add ld128 exp2l based on the freebsd implementation (Szabolcs Nagy, 2015-03-11; 1 file, -1/+366)
  Changed the special case handling and bit manipulation to better match the double version.
* copy the dtv pointer to the end of the pthread struct for TLS_ABOVE_TP archs (Szabolcs Nagy, 2015-03-11; 3 files, -4/+5)
  There are two main abi variants for thread local storage layout:
  (1) TLS is above the thread pointer at a fixed offset and the pthread struct is below that. So the end of the struct is at known offset.
  (2) the thread pointer points to the pthread struct and TLS starts below it. So the start of the struct is at known (zero) offset.
  Assembly code for the dynamic TLSDESC callback needs to access the dynamic thread vector (dtv) pointer which is currently at the front of the pthread struct. So in case of (1) the asm code needs to hard code the offset from the end of the struct which can easily break if the struct changes.
  This commit adds a copy of the dtv at the end of the struct. New members must not be added after dtv_copy, only before it. The size of the struct is increased a bit, but there is opportunity for size optimizations.
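  A sketch, in C, of the layout idea behind the dtv copy. The struct below is illustrative, not musl's actual pthread struct; the point is only that keeping a duplicate as the last member gives TLS-above-TP ports a dtv slot at a fixed offset from the end of the struct.

      #include <stddef.h>

      struct pthread_sketch {
          void **dtv;        /* dynamic thread vector, kept near the front */
          /* other members; new members must be added here, before dtv_copy */
          long tid_sketch;
          void **dtv_copy;   /* duplicate of dtv, always the last member */
      };

      /* For variant (1) ABIs only the end of the struct is at a known offset from
       * the thread pointer, so TLSDESC asm can load dtv_copy at a fixed offset
       * that does not move when members are added before it. */
      _Static_assert(offsetof(struct pthread_sketch, dtv_copy)
                     == sizeof(struct pthread_sketch) - sizeof(void *),
                     "dtv_copy must stay last");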
* fix regression in pthread_cond_wait with cancellation disabled (Rich Felker, 2015-03-07; 1 file, -0/+1)
  due to a logic error in the use of masked cancellation mode, pthread_cond_wait did not honor PTHREAD_CANCEL_DISABLE but instead failed with ECANCELED when cancellation was pending.
* fix FLT_ROUNDS to reflect the current rounding mode (Szabolcs Nagy, 2015-03-07; 1 file, -0/+19)
  Implemented as a wrapper around fegetround introducing a new function to the ABI: __flt_rounds. (fegetround cannot be used directly from float.h)
* fix over-alignment of TLS, insufficient builtin TLS on 64-bit archs (Rich Felker, 2015-03-06; 2 files, -4/+16)
  a conservative estimate of 4*sizeof(size_t) was used as the minimum alignment for thread-local storage, despite the only requirements being alignment suitable for struct pthread and void* (which struct pthread already contains). additional alignment required by the application or libraries is encoded in their headers and is already applied. over-alignment prevented the builtin_tls array from ever being used in dynamic-linked programs on 64-bit archs, thereby requiring allocation at startup even in programs with no TLS of their own.
* add legacy functions from sysinfo.h duplicating sysconf functionality (Rich Felker, 2015-03-04; 1 file, -0/+22)
* fix signed left-shift overflow in pthread_condattr_setpshared (Rich Felker, 2015-03-04; 1 file, -1/+1)
* remove useless check of bin match in malloc (Rich Felker, 2015-03-04; 1 file, -1/+1)
  this re-check idiom seems to have been copied from the alloc_fwd and alloc_rev functions, which guess a bin based on non-synchronized memory access to adjacent chunk headers then need to confirm, after locking the bin, that the chunk is actually in the bin they locked. the check being removed, however, was being performed on a chunk obtained from the already-locked bin. there is no race to account for here; the check could only fail in the event of corrupt free lists, and even then it would not catch them but simply continue running.
  since the bin_index function is mildly expensive, it seems preferable to remove the check rather than trying to convert it into a useful consistency check. casual testing shows a 1-5% reduction in run time.
* eliminate atomics in syslog setlogmask function (Rich Felker, 2015-03-04; 1 file, -4/+6)
* fix init race that could lead to deadlock in malloc init code (Rich Felker, 2015-03-04; 1 file, -39/+14)
  the malloc init code provided its own version of pthread_once type logic, including the exact same bug that was fixed in pthread_once in commit 0d0c2f40344640a2a6942dda156509593f51db5d.
  since this code is called adjacent to expand_heap, which takes a lock, there is no reason to have pthread_once-type initialization. simply moving the init code into the interval where expand_heap already holds its lock on the brk achieves the same result with much less synchronization logic, and allows the buggy code to be eliminated rather than just fixed.
* make all objects used with atomic operations volatile (Rich Felker, 2015-03-03; 25 files, -57/+60)
  the memory model we use internally for atomics permits plain loads of values which may be subject to concurrent modification without requiring that a special load function be used. since a compiler is free to make transformations that alter the number of loads or the way in which loads are performed, the compiler is theoretically free to break this usage. the most obvious concern is with atomic cas constructs: something of the form tmp=*p;a_cas(p,tmp,f(tmp)); could be transformed to a_cas(p,*p,f(*p)); where the latter is intended to show multiple loads of *p whose resulting values might fail to be equal; this would break the atomicity of the whole operation. but even more fundamental breakage is possible.
  with the changes being made now, objects that may be modified by atomics are modeled as volatile, and the atomic operations performed on them by other threads are modeled as asynchronous stores by hardware which happens to be acting on the request of another thread. such modeling of course does not itself address memory synchronization between cores/cpus, but that aspect was already handled. this all seems less than ideal, but it's the best we can do without mandating a C11 compiler and using the C11 model for atomics.
  in the case of pthread_once_t, the ABI type of the underlying object is not volatile-qualified. so we are assuming that accessing the object through a volatile-qualified lvalue via casts yields volatile access semantics. the language of the C standard is somewhat unclear on this matter, but this is an assumption the linux kernel also makes, and seems to be the correct interpretation of the standard.
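  A self-contained illustration of the cas pattern the commit describes. The a_cas below is a stand-in built on a GCC builtin, not musl's arch-specific atomic; it only shows why the single plain load into tmp matters and why the shared object is declared volatile.

      /* stand-in for musl's a_cas, using a GCC builtin for illustration */
      static int a_cas(volatile int *p, int t, int s)
      {
          return __sync_val_compare_and_swap(p, t, s);
      }

      static volatile int counter;   /* may be modified concurrently by other threads */

      void atomic_inc_sketch(void)
      {
          int tmp;
          do tmp = counter;          /* exactly one plain load of the shared object */
          while (a_cas(&counter, tmp, tmp + 1) != tmp);
          /* if the compiler reloaded the object, this could degrade to
           * a_cas(&counter, counter, counter + 1), where the two loads may see
           * different values; volatile forbids adding or removing such loads */
      }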
* suppress masked cancellation in pthread_join (Rich Felker, 2015-03-02; 1 file, -1/+5)
  like close, pthread_join is a resource-deallocation function which is also a cancellation point. the intent of masked cancellation mode is to exempt such functions from failure with ECANCELED.
* fix namespace issue in pthread_join affecting thrd_join (Rich Felker, 2015-03-02; 1 file, -1/+2)
  pthread_testcancel is not in the ISO C reserved namespace and thus cannot be used here. use the namespace-protected version of the function instead.
* make aio_suspend a cancellation point and properly handle cancellation (Rich Felker, 2015-03-02; 1 file, -3/+9)
* factor cancellation cleanup push/pop out of futex __timedwait function (Rich Felker, 2015-03-02; 9 files, -26/+24)
  previously, the __timedwait function was optionally a cancellation point depending on whether it was passed a pointer to a cleanup function and context to register. as of now, only one caller actually used such a cleanup function (and it may face removal soon); most callers either passed a null pointer to disable cancellation or a dummy cleanup function.
  now, __timedwait is never a cancellation point, and __timedwait_cp is the cancellable version. this makes the intent of the calling code more obvious and avoids ugly dummy functions and long argument lists.
* fix failure of internal futex __timedwait to report ECANCELED (Rich Felker, 2015-02-27; 1 file, -1/+1)
  as part of abstracting the futex wait, this function suppresses all futex error values which callers should not see using a whitelist approach. when the masked cancellation mode was added, the new ECANCELED error was not whitelisted. this omission caused the new pthread_cond_wait code using masked cancellation to exhibit a spurious wake (rather than acting on cancellation) when the request arrived after blocking on the cond var.
* overhaul optimized x86_64 memset asm (Rich Felker, 2015-02-26; 1 file, -26/+55)
  on most cpu models, "rep stosq" has high overhead that makes it undesirable for small memset sizes. the new code extends the minimal-branch fast path for short memsets from size 15 up to size 126, and shrink-wraps this code path.
  in addition, "rep stosq" is sensitive to misalignment. the cost varies with size and with cpu model, but it has been observed performing 1.5 times slower when the destination address is not aligned mod 16. the new code thus ensures alignment mod 16, but also preserves any existing additional alignment, in case there are cpu models where it is beneficial.
  this version is based in part on changes proposed by Denys Vlasenko.
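  A rough C rendering of the branch-minimizing idea behind the short-memset fast path: cover the buffer with a few possibly overlapping fixed-size stores instead of a loop or "rep stosq". This is an illustration only, not the new asm, and the size thresholds are simplified.

      #include <stdint.h>
      #include <string.h>

      static void *memset_small_sketch(void *dest, int c, size_t n)   /* n <= 31 assumed */
      {
          unsigned char *d = dest;
          uint64_t pat = 0x0101010101010101ULL * (unsigned char)c;    /* fill byte replicated 8 times */
          if (n >= 16) {   /* head and tail 16-byte regions overlap in the middle */
              memcpy(d, &pat, 8); memcpy(d + 8, &pat, 8);
              memcpy(d + n - 16, &pat, 8); memcpy(d + n - 8, &pat, 8);
          } else if (n >= 8) {
              memcpy(d, &pat, 8); memcpy(d + n - 8, &pat, 8);
          } else if (n >= 4) {
              memcpy(d, &pat, 4); memcpy(d + n - 4, &pat, 4);
          } else if (n) {
              d[0] = c; d[n/2] = c; d[n-1] = c;   /* covers lengths 1 to 3 */
          }
          return dest;
      }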
* overhaul optimized i386 memset asm (Rich Felker, 2015-02-26; 1 file, -32/+61)
  on most cpu models, "rep stosl" has high overhead that makes it undesirable for small memset sizes. the new code extends the minimal-branch fast path for short memsets from size 15 up to size 62, and shrink-wraps this code path.
  in addition, "rep stosl" is very sensitive to misalignment. the cost varies with size and with cpu model, but it has been observed performing 1.5 to 4 times slower when the destination address is not aligned mod 16. the new code thus ensures alignment mod 16, but also preserves any existing additional alignment, in case there are cpu models where it is beneficial.
  this version is based in part on changes to the x86_64 memset asm proposed by Denys Vlasenko.
* getloadavg: use sysinfo() instead of /proc/loadavg (Alexander Monakov, 2015-02-25; 1 file, -11/+7)
  Based on a patch by Szabolcs Nagy.
* fix possible isatty false positives and unwanted device state changes (Rich Felker, 2015-02-23; 3 files, -9/+8)
  the equivalent checks for newly opened stdio output streams, used to determine buffering mode, are also fixed.
  on most archs, the TCGETS ioctl command shares a value with SNDCTL_TMR_TIMEBASE, part of the OSS sound API which was apparently used with certain MIDI and timer devices. for file descriptors referring to such a device, TCGETS will not fail with ENOTTY as expected; it may produce a different error, or may succeed, and if it succeeds it changes the mode of the device. while it's unlikely that such devices are in use, this is in principle very harmful behavior for an operation which is supposed to do nothing but query whether the fd refers to a tty.
  TIOCGWINSZ, used to query logical window size for a terminal, was chosen as an alternate ioctl to perform the isatty check. it does not share a value with any other ioctl commands, and it succeeds on any tty device.
  this change also cleans up strace output to be less ugly and misleading.
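  A minimal sketch of the check described above, assuming only that TIOCGWINSZ behaves as the commit states (succeeds on ttys, fails without side effects elsewhere). This is not musl's exact isatty, which also has errno semantics to preserve.

      #include <sys/ioctl.h>

      /* illustrative only */
      int isatty_sketch(int fd)
      {
          struct winsize wsz;
          return ioctl(fd, TIOCGWINSZ, &wsz) == 0;   /* succeeds only for terminals */
      }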
* fix breakage in pthread_cond_wait due to typo (Rich Felker, 2015-02-23; 1 file, -1/+1)
  due to accidental use of = instead of ==, the error code was always set to zero in the signaled wake case for non-shared cv waits. suppressing ETIMEDOUT (the only possible wait error) is harmless and actually permitted in this case, but suppressing mutex errors could give the caller false information about the state of the mutex.
  commit 8741ffe625363a553e8f509dc3ca7b071bdbab47 introduced this regression and commit d9da1fb8c592469431c764732d09f7756340190e preserved it when reorganizing the code.
* support alternate backends for the passwd and group dbs (Josiah Worcester, 2015-02-23; 4 files, -2/+390)
  when we fail to find the entry in the commonly accepted files, we query a server over a Unix domain socket on /var/run/nscd/socket. the protocol used here is compatible with glibc's nscd protocol on most systems (all that use 32-bit numbers for all the protocol fields, which appears to be everything but Alpha).
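  A rough sketch of the request side of such a query. The header layout and constants (protocol version 2, a GETPWBYNAME request type of 0) are assumptions based on glibc's nscd protocol, not values taken from musl's sources; response parsing is omitted.

      #include <stdint.h>
      #include <string.h>
      #include <sys/socket.h>
      #include <sys/un.h>
      #include <unistd.h>

      static int nscd_query_sketch(const char *name)
      {
          int fd = socket(AF_UNIX, SOCK_STREAM, 0);
          if (fd < 0) return -1;

          struct sockaddr_un addr = { .sun_family = AF_UNIX };
          strcpy(addr.sun_path, "/var/run/nscd/socket");
          if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
              close(fd);
              return -1;
          }

          int32_t keylen = strlen(name) + 1;           /* key includes the terminating NUL */
          int32_t req[3] = { 2 /* version, assumed */, 0 /* GETPWBYNAME, assumed */, keylen };
          write(fd, req, sizeof req);                  /* all fields are 32-bit, as noted above */
          write(fd, name, keylen);

          return fd;   /* caller would read a fixed-size response header next */
      }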
* fix spurious errors in refactored passwd/group code (Rich Felker, 2015-02-23; 2 files, -2/+2)
  errno was treated as the error status when the return value of getline was negative, but this condition can simply indicate EOF and is not necessarily an error. the spurious errors caused by this bug masked the bug which was fixed in commit fc5a96c9c8aa186effad7520d5df6b616bbfd29d.
* fix crashes in refactored passwd/group code (Rich Felker, 2015-02-23; 2 files, -4/+4)
  the wrong condition was used in determining the presence of a result that needs space/copying for the _r functions. a zero return value does not necessarily mean success; it can also be a non-error negative result: no such user/group.
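  The caller-visible convention involved here is the standard one for the _r lookups: a nonzero return is an error code, while a zero return with a null result pointer means the entry simply does not exist. A short usage sketch:

      #include <pwd.h>
      #include <stdio.h>

      void lookup_sketch(const char *name)
      {
          struct passwd pw, *res;
          char buf[1024];
          int err = getpwnam_r(name, &pw, buf, sizeof buf, &res);
          if (err)
              printf("lookup error: %d\n", err);   /* real failure: err is an errno value */
          else if (!res)
              printf("no such user\n");            /* success, but nothing was found */
          else
              printf("uid %d\n", (int)res->pw_uid);
      }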
* simplify cond var code now that cleanup handler is not needed (Rich Felker, 2015-02-22; 1 file, -86/+63)
* fix pthread_cond_wait cancellation race (Rich Felker, 2015-02-22; 1 file, -5/+38)
  it's possible that signaling a waiter races with cancellation of that same waiter. previously, cancellation was acted upon, causing the signal to be consumed with no waiter returning. by using the new masked cancellation state, it's possible to refuse to act on the cancellation request and instead leave it pending.
  to ease review and understanding of the changes made, this commit leaves the unwait function, which was previously the cancellation cleanup handler, in place. additional simplifications could be made by removing it.
* add new masked cancellation mode (Rich Felker, 2015-02-21; 2 files, -10/+16)
  this is a new extension which is presently intended only for experimental and internal libc use. interface and behavior details may change subject to feedback and experience from using it internally.
  the basic concept for the new PTHREAD_CANCEL_MASKED state is that the first cancellation point to observe the cancellation request fails with an errno value of ECANCELED rather than acting on cancellation, allowing the caller to process the status and choose whether/how to act upon it.
* prepare cancellation syscall asm for possibility of __cancel returning (Rich Felker, 2015-02-20; 5 files, -11/+32)
* map interruption of close by signal to success rather than EINPROGRESS (Rich Felker, 2015-02-20; 1 file, -1/+1)
  commit 82dc1e2e783815e00a90cd3f681436a80d54a314 addressed the resolution of Austin Group issue 529, which requires close to leave the fd open when failing with EINTR, by returning the newly defined error code EINPROGRESS. this turns out to be a bad idea, though, since legacy applications not aware of the new specification are likely to interpret any error from close except EINTR as a hard failure.
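  A short sketch of the legacy caller pattern the commit is protecting; the error handling shown here is generic, not taken from any particular application.

      #include <errno.h>
      #include <stdio.h>
      #include <unistd.h>

      void legacy_close_sketch(int fd)
      {
          if (close(fd) < 0 && errno != EINTR) {
              /* typical legacy code treats anything but EINTR as a hard failure,
               * so an EINPROGRESS return would be misreported here; mapping the
               * interrupted case to success keeps such callers working */
              perror("close");
          }
      }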
* make pthread_exit responsible for disabling cancellation (Rich Felker, 2015-02-16; 2 files, -3/+2)
  this requirement is tucked away in XSH 2.9.5 Thread Cancellation under the heading Thread Cancellation Cleanup Handlers.
* fix type error (arch-dependent) in new aio code (Rich Felker, 2015-02-14; 1 file, -1/+1)
  a_store is only valid for int, but ssize_t may be defined as long or another type. since there is no valid way for another thread to access the return value without first checking the error/completion status of the aiocb anyway, an atomic store is not necessary.
* refactor group file access code (Josiah Worcester, 2015-02-13; 6 files, -51/+71)
  this allows getgrnam and getgrgid to share code with the _r versions in preparation for alternate backend support.
* overhaul aio implementation for correctness (Rich Felker, 2015-02-13; 9 files, -193/+440)
  previously, aio operations were not tracked by file descriptor; each operation was completely independent. this resulted in non-conforming behavior for non-seekable/append-mode writes (which are required to be ordered) and made it impossible to implement aio_cancel, which in turn made closing file descriptors with outstanding aio operations unsafe.
  the new implementation is significantly heavier (roughly twice the size, and seems to be slightly slower) and presently aims mainly at correctness, not performance.
  most of the public interfaces have been moved into a single file, aio.c, because there is little benefit to be had from splitting them. whenever any aio functions are used, aio_cancel and the internal queue lifetime management and fd-to-queue mapping code must be linked, and these functions make up the bulk of the code size.
  the close function's interaction with aio is implemented with weak alias magic, to avoid pulling in heavy aio cancellation code in programs that don't use aio, and the expensive cancellation path (which includes signal blocking) is optimized out when there are no active aio queues.
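  A sketch of the weak-alias interposition idiom referenced above ("weak alias magic"). The macro matches the common GCC/ELF pattern; the symbol names here are illustrative, not musl's internal ones.

      /* common GCC/ELF weak-alias idiom */
      #define weak_alias(old, new) \
          extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

      /* trivial fallback linked into every program */
      static int dummy_aio_close(int fd)
      {
          return fd;   /* no-op when no aio code is present */
      }
      weak_alias(dummy_aio_close, __aio_close_sketch);

      /* close() can call __aio_close_sketch(fd) unconditionally: programs that
       * never touch aio get only the dummy above, while an aio module providing
       * a strong definition of the same symbol overrides it with the real
       * cancellation path. */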
* fix bad character checking in wordexp (Rich Felker, 2015-02-11; 1 file, -0/+1)
  the character sequence '$((' was incorrectly interpreted as the opening of arithmetic even within single-quoted contexts, thereby suppressing the checks for bad characters after the closing quote. presently bad character checking is only performed when the WRDE_NOCMD flag is used; this patch only corrects checking in that case.
* refactor passwd file access code (Josiah Worcester, 2015-02-10; 6 files, -49/+65)
  this allows getpwnam and getpwuid to share code with the _r versions in preparation for alternate backend support.
* x86_64/memset: avoid performing final store twice (Denys Vlasenko, 2015-02-10; 1 file, -1/+1)
  The code does a potentially misaligned 8-byte store to fill the tail of the buffer. Then it fills the initial part of the buffer, which is a multiple of 8 bytes. Therefore, if the size is divisible by 8, we were storing the last word twice.
  This patch decrements the byte count before dividing it by 8, making one less store in the "size is divisible by 8" case and not changing anything in all other cases. All at the cost of replacing one MOV insn with an LEA insn.
  Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
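  The same counting trick expressed in C for clarity; this is an illustration, not the actual asm.

      #include <stdint.h>
      #include <string.h>

      static void fill_sketch(unsigned char *d, uint64_t pat, size_t n)   /* n >= 8 assumed */
      {
          memcpy(d + n - 8, &pat, 8);        /* possibly misaligned tail store */
          size_t words = (n - 1) / 8;        /* decrement before dividing by 8 */
          for (size_t i = 0; i < words; i++) /* fill the initial part */
              memcpy(d + 8 * i, &pat, 8);
          /* with n/8 instead of (n-1)/8, an n divisible by 8 would store the
           * last word twice: once here and once in the tail store above */
      }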
* x86_64/memset: simple optimizations (Denys Vlasenko, 2015-02-10; 1 file, -14/+16)
  "and $0xff,%esi" is a six-byte insn (81 e6 ff 00 00 00); the 4-byte "movzbl %sil,%esi" (40 0f b6 f6) can be used instead.
  64-bit imul is slow, so move it as far up as possible so that the result (rax) has more time to be ready by the time we start using it in mem stores.
  There is no need to shuffle registers in preparation for "rep movs" if we are not going to take that code path. Thus, the patch moves the "jump if len < 16" instructions up, and changes the alternate code path to use rdx and rdi instead of rcx and r8.
  Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
* make protocol table zero byte separated and add ipv6 protocols (Timo Teräs, 2015-02-10; 1 file, -22/+26)
* use the internal macro name FUTEX_PRIVATE in __wait (Szabolcs Nagy, 2015-02-09; 1 file, -1/+1)
  the name was recently added for the setxid/synccall rework, so use the name now that we have it.
* add IEEE binary128 long double support to floatscan (Szabolcs Nagy, 2015-02-09; 1 file, -1/+9)
  just defining the necessary constants:
  LD_B1B_MAX is 2^113 - 1 in base 10^9
  KMAX is 2048 so the x array can hold up to 18432 decimal digits
  (the worst case is converting 2^-16495 = 5^16495 * 10^-16495 to binary; it requires the processing of int(log10(5)*16495)+1 = 11530 decimal digits after discarding the leading zeros, and the conversion requires some headroom in x, but KMAX is more than enough for that)
  However this code is not optimal on archs with IEEE binary128 long double because the arithmetic is software emulated (on all such platforms as far as i know), which means a big and slow strtod.
* math: fix fmodl for IEEE binary128 (Szabolcs Nagy, 2015-02-09; 1 file, -1/+1)
  This trivial copy-paste bug went unnoticed due to lack of testing. No currently supported target archs are affected.
* simplify armhf fesetenv (Szabolcs Nagy, 2015-02-08; 1 file, -1/+0)
  The armhf fesetenv implementation did a useless read of the fpscr.
* fix fesetenv(FE_DFL_ENV) on mips (Szabolcs Nagy, 2015-02-08; 1 file, -1/+3)
  mips fesetenv did not handle FE_DFL_ENV; now fcsr is cleared in that case.
* math: fix __fpclassifyl(-0.0) for IEEE binary128 (Szabolcs Nagy, 2015-02-08; 1 file, -3/+2)
  The sign bit was not cleared before checking for 0, so -0.0 was misclassified as FP_SUBNORMAL instead of FP_ZERO.
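  A hedged sketch of the classification logic with the fix applied, assuming an IEEE binary128 long double with a little-endian two-word layout; this is not musl's union ldshape code.

      #include <math.h>
      #include <stdint.h>

      int fpclassify_bin128_sketch(long double x)
      {
          union { long double f; struct { uint64_t lo, hi; } i; } u = { x };
          int e = (u.i.hi >> 48) & 0x7fff;                 /* 15-bit exponent field */
          uint64_t frac_hi = u.i.hi & 0xffffffffffffULL;   /* sign and exponent cleared */
          if (e == 0)                                      /* -0.0 now lands on FP_ZERO */
              return (frac_hi | u.i.lo) ? FP_SUBNORMAL : FP_ZERO;
          if (e == 0x7fff)
              return (frac_hi | u.i.lo) ? FP_NAN : FP_INFINITE;
          return FP_NORMAL;
      }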
* add parenthesis in fma.c to clarify intent and silence warnings (Szabolcs Nagy, 2015-02-08; 1 file, -1/+1)
* make getaddrinfo support SOCK_RAW and other socket types (Rich Felker, 2015-02-07; 4 files, -34/+42)
  all socket types are accepted at this point, but that may be changed at a later time if the behavior is not meaningful for other types. as before, omitting type (a value of 0) gives both UDP and TCP results, and SOCK_DGRAM or SOCK_STREAM restricts to UDP or TCP, respectively.
  for other socket types, the service name argument is required to be a null pointer, and the protocol number provided by the caller is used.
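  A usage sketch of the behavior described above: for a socket type other than SOCK_DGRAM or SOCK_STREAM, the service argument must be a null pointer and the caller supplies the protocol number. The host name is a placeholder.

      #include <netdb.h>
      #include <netinet/in.h>
      #include <stdio.h>
      #include <sys/socket.h>

      int raw_lookup_sketch(void)
      {
          struct addrinfo hints = { .ai_socktype = SOCK_RAW, .ai_protocol = IPPROTO_ICMP };
          struct addrinfo *res;
          int err = getaddrinfo("example.com", NULL, &hints, &res);   /* service must be NULL here */
          if (err) {
              fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
              return -1;
          }
          freeaddrinfo(res);
          return 0;
      }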
* remove cruft from x86_64 syscall.h (Szabolcs Nagy, 2015-02-07; 1 file, -0/+3)
  x86_64 syscall.h defined some musl internal syscall names and made them public. These defines were already moved to src/internal/syscall.h (except for SYS_fadvise which is added now) so the cruft in x86_64 syscall.h is not needed.