| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
the memory model we use internally for atomics permits plain loads of
values which may be subject to concurrent modification without
requiring that a special load function be used. since a compiler is
free to make transformations that alter the number of loads or the way
in which loads are performed, the compiler is theoretically free to
break this usage. the most obvious concern is with atomic cas
constructs: something of the form tmp=*p;a_cas(p,tmp,f(tmp)); could be
transformed to a_cas(p,*p,f(*p)); where the latter is intended to show
multiple loads of *p whose resulting values might fail to be equal;
this would break the atomicity of the whole operation. but even more
fundamental breakage is possible.
with the changes being made now, objects that may be modified by
atomics are modeled as volatile, and the atomic operations performed
on them by other threads are modeled as asynchronous stores by
hardware which happens to be acting on the request of another thread.
such modeling of course does not itself address memory synchronization
between cores/cpus, but that aspect was already handled. this all
seems less than ideal, but it's the best we can do without mandating a
C11 compiler and using the C11 model for atomics.
in the case of pthread_once_t, the ABI type of the underlying object
is not volatile-qualified. so we are assuming that accessing the
object through a volatile-qualified lvalue via casts yields volatile
access semantics. the language of the C standard is somewhat unclear
on this matter, but this is an assumption the linux kernel also makes,
and seems to be the correct interpretation of the standard.
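for illustration, a minimal sketch of the access pattern the new model
assumes; a_cas and f are placeholders as in the example above, not the
actual musl code:

    int a_cas(volatile int *p, int t, int s);   /* assumed cas primitive */
    static int f(int x) { return x + 1; }       /* placeholder transform */

    /* the single load per iteration is forced through a volatile-
     * qualified lvalue so the compiler cannot duplicate or re-read it */
    static void atomic_apply(int *p)
    {
        int tmp;
        do tmp = *(volatile int *)p;
        while (a_cas((volatile int *)p, tmp, f(tmp)) != tmp);
    }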
|
|
|
|
|
|
| |
like close, pthread_join is a resource-deallocation function which is
also a cancellation point. the intent of masked cancellation mode is
to exempt such functions from failure with ECANCELED.
|
|
|
|
|
|
| |
pthread_testcancel is not in the ISO C reserved namespace and thus
cannot be used here. use the namespace-protected version of the
function instead.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
previously, the __timedwait function was optionally a cancellation
point depending on whether it was passed a pointer to a cleanup
function and context to register. as of now, only one caller actually
used such a cleanup function (and it may face removal soon); most
callers either passed a null pointer to disable cancellation or a
dummy cleanup function.
now, __timedwait is never a cancellation point, and __timedwait_cp is
the cancellable version. this makes the intent of the calling code
more obvious and avoids ugly dummy functions and long argument lists.
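illustrative prototypes for the resulting split; the exact argument
list is an assumption for this sketch, not a documented interface:

    #include <time.h>

    /* never a cancellation point */
    int __timedwait(volatile int *addr, int val, clockid_t clk,
                    const struct timespec *at, int priv);

    /* cancellable variant; callers prepared to see ECANCELED or to be
     * acted upon by cancellation use this one */
    int __timedwait_cp(volatile int *addr, int val, clockid_t clk,
                       const struct timespec *at, int priv);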
|
|
|
|
|
|
|
|
|
|
| |
as part of abstracting the futex wait, this function uses a whitelist
approach to suppress all futex error values which callers should not
see. when the masked cancellation mode was added, the new
ECANCELED error was not whitelisted. this omission caused the new
pthread_cond_wait code using masked cancellation to exhibit a spurious
wake (rather than acting on cancellation) when the request arrived
after blocking on the cond var.
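a sketch of the whitelist idea with the missing value included;
map_wait_error is a hypothetical stand-in for the internal logic:

    #include <errno.h>

    /* map the raw futex-wait result onto the errors callers are
     * prepared to see; anything else is reported as 0, i.e. treated
     * as a spurious wake to be retried */
    static int map_wait_error(int r)
    {
        switch (r) {
        case EINTR: case ETIMEDOUT: case ECANCELED:
            return r;   /* ECANCELED was the entry missing before */
        default:
            return 0;
        }
    }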
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
on most cpu models, "rep stosq" has high overhead that makes it
undesirable for small memset sizes. the new code extends the
minimal-branch fast path for short memsets from size 15 up to size
126, and shrink-wraps this code path. in addition, "rep stosq" is
sensitive to misalignment. the cost varies with size and with cpu
model, but it has been observed performing 1.5 times slower when the
destination address is not aligned mod 16. the new code thus ensures
alignment mod 16, but also preserves any existing additional
alignment, in case there are cpu models where it is beneficial.
this version is based in part on changes proposed by Denys Vlasenko.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
on most cpu models, "rep stosl" has high overhead that makes it
undesirable for small memset sizes. the new code extends the
minimal-branch fast path for short memsets from size 15 up to size 62,
and shrink-wraps this code path. in addition, "rep stosl" is very
sensitive to misalignment. the cost varies with size and with cpu
model, but it has been observed performing 1.5 to 4 times slower when
the destination address is not aligned mod 16. the new code thus
ensures alignment mod 16, but also preserves any existing additional
alignment, in case there are cpu models where it is beneficial.
this version is based in part on changes to the x86_64 memset asm
proposed by Denys Vlasenko.
|
|
|
|
| |
Based on a patch by Szabolcs Nagy.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
the equivalent checks for newly opened stdio output streams, used to
determine buffering mode, are also fixed.
on most archs, the TCGETS ioctl command shares a value with
SNDCTL_TMR_TIMEBASE, part of the OSS sound API which was apparently
used with certain MIDI and timer devices. for file descriptors
referring to such a device, TCGETS will not fail with ENOTTY as
expected; it may produce a different error, or may succeed, and if it
succeeds it changes the mode of the device. while it's unlikely that
such devices are in use, this is in principle very harmful behavior
for an operation which is supposed to do nothing but query whether the
fd refers to a tty.
TIOCGWINSZ, used to query logical window size for a terminal, was
chosen as an alternate ioctl to perform the isatty check. it does not
share a value with any other ioctl commands, and it succeeds on any
tty device.
this change also cleans up strace output to be less ugly and
misleading.
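the resulting check is essentially the following sketch (is_a_tty is a
stand-in for the real isatty):

    #include <sys/ioctl.h>

    /* TIOCGWINSZ succeeds on any tty and shares its value with no
     * other ioctl command, so its result is a safe proxy for "does
     * this fd refer to a terminal" */
    static int is_a_tty(int fd)
    {
        struct winsize wsz;
        return ioctl(fd, TIOCGWINSZ, &wsz) == 0;
    }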
|
|
|
|
|
|
|
|
|
|
|
|
| |
due to accidental use of = instead of ==, the error code was always
set to zero in the signaled wake case for non-shared cv waits.
suppressing ETIMEDOUT (the only possible wait error) is harmless and
actually permitted in this case, but suppressing mutex errors could
give the caller false information about the state of the mutex.
commit 8741ffe625363a553e8f509dc3ca7b071bdbab47 introduced this
regression and commit d9da1fb8c592469431c764732d09f7756340190e
preserved it when reorganizing the code.
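the shape of the bug, with hypothetical names:

    enum { SIGNALED = 1 };

    static int filter_wait_error(int state, int e)
    {
        /* intended: drop the (harmless) ETIMEDOUT only when the wait
         * was actually signaled */
        if (state == SIGNALED) e = 0;
        /* the regression used "state = SIGNALED" (assignment), so the
         * condition was always true and e was always cleared, hiding
         * mutex errors from the caller as well */
        return e;
    }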
|
|
|
|
|
|
|
|
| |
when we fail to find the entry in the commonly accepted files, we
query a server over a Unix domain socket on /var/run/nscd/socket.
the protocol used here is compatible with glibc's nscd protocol on
most systems (all that use 32-bit numbers for all the protocol fields,
which appears to be everything but Alpha).
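a rough sketch of the query path; the header layout, the version
number and the request handling here are assumptions about the common
protocol, not a specification:

    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    /* assumed wire header: three 32-bit fields */
    struct nscd_req { int32_t version, type, key_len; };

    static int nscd_query(int32_t type, const char *key, void *buf, size_t len)
    {
        struct sockaddr_un sun = { .sun_family = AF_UNIX,
                                   .sun_path = "/var/run/nscd/socket" };
        struct nscd_req req = { 2 /* assumed version */, type,
                                (int32_t)strlen(key) + 1 };
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) return -1;
        if (connect(fd, (void *)&sun, sizeof sun) < 0) { close(fd); return -1; }
        write(fd, &req, sizeof req);
        write(fd, key, req.key_len);
        ssize_t r = read(fd, buf, len);   /* reply parsing omitted */
        close(fd);
        return r < 0 ? -1 : 0;
    }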
|
|
|
|
|
|
|
|
|
| |
errno was treated as the error status when the return value of getline
was negative, but this condition can simply indicate EOF and is not
necessarily an error.
the spurious errors caused by this bug masked the bug which was fixed
in commit fc5a96c9c8aa186effad7520d5df6b616bbfd29d.
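the distinction, sketched (scan_stream is a hypothetical caller):

    #define _POSIX_C_SOURCE 200809L
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* read lines until getline returns a negative value, and report
     * errno only when the stream error indicator is set; plain EOF is
     * not an error */
    static int scan_stream(FILE *f)
    {
        char *line = 0;
        size_t cap = 0;
        while (getline(&line, &cap, f) >= 0) {
            /* ... process line ... */
        }
        int err = ferror(f) ? errno : 0;
        free(line);
        return err;
    }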
|
|
|
|
|
|
|
| |
the wrong condition was used in determining the presence of a result
that needs space/copying for the _r functions. a zero return value
does not necessarily mean success; it can also indicate a non-error
negative result, i.e. no such user/group exists.
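the semantics being fixed, sketched from the caller's point of view
(lookup is a hypothetical example, not libc code):

    #include <grp.h>
    #include <stdio.h>

    /* the _r return value is an error code only; "not found" is a
     * zero return with a null result pointer and must not be treated
     * the same as a successful lookup */
    static void lookup(const char *name)
    {
        struct group gr, *res;
        char buf[1024];
        int err = getgrnam_r(name, &gr, buf, sizeof buf, &res);
        if (err) printf("error: %d\n", err);
        else if (!res) printf("no such group\n");
        else printf("gid: %d\n", (int)res->gr_gid);
    }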
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
it's possible that signaling a waiter races with cancellation of that
same waiter. previously, cancellation was acted upon, causing the
signal to be consumed with no waiter returning. by using the new
masked cancellation state, it's possible to refuse to act on the
cancellation request and instead leave it pending.
to ease review and understanding of the changes made, this commit
leaves the unwait function, which was previously the cancellation
cleanup handler, in place. additional simplifications could be made by
removing it.
|
|
|
|
|
|
|
|
|
|
|
|
| |
this is a new extension which is presently intended only for
experimental and internal libc use. interface and behavior details may
change subject to feedback and experience from using it internally.
the basic concept for the new PTHREAD_CANCEL_MASKED state is that the
first cancellation point to observe the cancellation request fails
with an errno value of ECANCELED rather than acting on cancellation,
allowing the caller to process the status and choose whether/how to
act upon it.
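a usage sketch of the intended calling pattern; exposure of the
PTHREAD_CANCEL_MASKED constant and the exact semantics are subject to
the caveats above, and wait_masked is a hypothetical example:

    #include <errno.h>
    #include <pthread.h>

    /* observe cancellation as an ECANCELED failure instead of having
     * the calling thread unwound at the wait */
    static int wait_masked(pthread_cond_t *c, pthread_mutex_t *m)
    {
        int old, r;
        pthread_setcancelstate(PTHREAD_CANCEL_MASKED, &old);
        r = pthread_cond_wait(c, m);   /* may now fail with ECANCELED */
        pthread_setcancelstate(old, 0);
        return r;   /* the request stays pending; the caller decides */
    }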
|
| |
|
|
|
|
|
|
|
|
|
| |
commit 82dc1e2e783815e00a90cd3f681436a80d54a314 addressed the
resolution of Austin Group issue 529, which requires close to leave
the fd open when failing with EINTR, by returning the newly defined
error code EINPROGRESS. this turns out to be a bad idea, though, since
legacy applications not aware of the new specification are likely to
interpret any error from close except EINTR as a hard failure.
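the legacy pattern the change protects, sketched (legacy_close is a
hypothetical application-side example):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* a caller unaware of the new specification treats any failure
     * other than EINTR as fatal; returning the new EINPROGRESS code
     * from close would turn a harmless case into a hard error here */
    static void legacy_close(int fd)
    {
        if (close(fd) < 0 && errno != EINTR) {
            perror("close");
            exit(1);
        }
    }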
|
|
|
|
|
| |
this requirement is tucked away in XSH 2.9.5 Thread Cancellation under
the heading Thread Cancellation Cleanup Handlers.
|
|
|
|
|
|
|
| |
a_store is only valid for int, but ssize_t may be defined as long or
another type. since there is no valid way for another thread to access
the return value without first checking the error/completion status of
the aiocb anyway, an atomic store is not necessary.
|
|
|
|
|
| |
this allows getgrnam and getgrgid to share code with the _r versions
in preparation for alternate backend support.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
previously, aio operations were not tracked by file descriptor; each
operation was completely independent. this resulted in non-conforming
behavior for non-seekable/append-mode writes (which are required to be
ordered) and made it impossible to implement aio_cancel, which in turn
made closing file descriptors with outstanding aio operations unsafe.
the new implementation is significantly heavier (roughly twice the
size, and seems to be slightly slower) and presently aims mainly at
correctness, not performance.
most of the public interfaces have been moved into a single file,
aio.c, because there is little benefit to be had from splitting them.
whenever any aio functions are used, aio_cancel and the internal
queue lifetime management and fd-to-queue mapping code must be linked,
and these functions make up the bulk of the code size.
the close function's interaction with aio is implemented with weak
alias magic, to avoid pulling in heavy aio cancellation code in
programs that don't use aio, and the expensive cancellation path
(which includes signal blocking) is optimized out when there are no
active aio queues.
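the weak alias pattern referred to, sketched from the close.c side;
close_sketch stands in for the real close, and the strong definition
of __aio_close that lives in aio.c is only described in the comment:

    #define weak_alias(old, new) \
        extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

    /* default: a dummy that does nothing, used when aio.c is not linked */
    static int dummy(int fd) { return fd; }
    weak_alias(dummy, __aio_close);

    int close_sketch(int fd)
    {
        fd = __aio_close(fd);   /* cancels outstanding aio only if aio.c
                                   provides a strong __aio_close */
        /* ... followed by the actual close syscall ... */
        return 0;
    }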
|
|
|
|
|
|
|
|
|
| |
the character sequence '$((' was incorrectly interpreted as the
opening of arithmetic even within single-quoted contexts, thereby
suppressing the checks for bad characters after the closing quote.
presently bad character checking is only performed when the WRDE_NOCMD
flag is used; this patch only corrects checking in that case.
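an input of the shape that triggered the bug, as a sketch:

    #include <wordexp.h>

    int main(void)
    {
        wordexp_t we;
        /* '$((' is single-quoted, so it must not open arithmetic;
         * before the fix it did, and the unquoted ';' after the
         * closing quote escaped the WRDE_NOCMD bad-character check */
        int r = wordexp("'$((' ; echo x", &we, WRDE_NOCMD);
        return r;   /* the ';' should now be rejected as a bad character */
    }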
|
|
|
|
|
| |
this allows getpwnam and getpwuid to share code with the _r versions
in preparation for alternate backend support.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The code does a potentially misaligned 8-byte store to fill the tail
of the buffer. Then it fills the initial part of the buffer
which is a multiple of 8 bytes.
Therefore, if size is divisible by 8, we were storing the last word twice.
This patch decrements the byte count before dividing it by 8,
making one less store in the "size is divisible by 8" case
and changing nothing in all other cases,
all at the cost of replacing one MOV insn with an LEA insn.
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
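The arithmetic in C form, as a sketch of what the asm computes
(head_words is an illustrative name, not code from the patch):

    #include <stddef.h>

    /* the tail is one 8-byte store at offset size-8; the head fill
     * covers words*8 bytes from offset 0.
     *   old: words = size/8      for size%8 == 0 the head reaches
     *                            offset size, re-storing the tail word
     *   new: words = (size-1)/8  the head stops at size-8, so the tail
     *                            word is written exactly once; for
     *                            size%8 != 0 nothing changes */
    static size_t head_words(size_t size) { return (size - 1) / 8; }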
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
"and $0xff,%esi" is a six-byte insn (81 e6 ff 00 00 00), can use
4-byte "movzbl %sil,%esi" (40 0f b6 f6) instead.
64-bit imul is slow; move it as far up as possible so that the result
(rax) has more time to be ready by the time we start using it
in mem stores.
There is no need to shuffle registers in preparation for "rep movs"
if we are not going to take that code path. Thus, the patch moves the
"jump if len < 16" instructions up, and changes the alternate code path
to use rdx and rdi instead of rcx and r8.
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
|
| |
|
|
|
|
|
|
|
|
|
| |
this syscall allows fexecve to be implemented without /proc. it is new
in linux v3.19, added in commit 51f39a1f0cea1cacf8c787f652f26dfee9611874
(sh and microblaze do not have allocated syscall numbers yet).
an x32 fix is included as well: the io_setup and io_submit syscalls are
no longer common with x86_64, so use the x32-specific numbers.
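a sketch of fexecve on top of the new syscall; it assumes the arch
headers define SYS_execveat and expose AT_EMPTY_PATH, and
fexecve_sketch is an illustrative name:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* execveat with an empty path and AT_EMPTY_PATH executes the file
     * referred to by fd directly, with no /proc dependency */
    static int fexecve_sketch(int fd, char *const argv[], char *const envp[])
    {
        return syscall(SYS_execveat, fd, "", argv, envp, AT_EMPTY_PATH);
    }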
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
these socket options are new in linux v3.19, introduced in commit
2c8c56e15df3d4c2af3d656e44feb18789f75837 and commit
89aa075832b0da4402acebd698d0411dcc82d03e
SO_INCOMING_CPU allows querying the cpu on which a socket is managed
inside the kernel, so that polling of a large number of sockets can be
optimized accordingly.
SO_ATTACH_BPF allows eBPF programs (created by the bpf syscall) to be
attached to sockets.
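a usage sketch for SO_INCOMING_CPU; the constant requires sufficiently
new kernel headers, and socket_cpu is an illustrative name:

    #include <sys/socket.h>

    /* ask the kernel which cpu has been handling this socket, so an
     * event loop can steer it to a poller running on the same cpu */
    static int socket_cpu(int fd)
    {
        int cpu = -1;
        socklen_t len = sizeof cpu;
        if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_CPU, &cpu, &len) < 0)
            return -1;
        return cpu;
    }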
|
|
|
|
|
| |
the name was recently added for the setxid/synccall rework,
so use the name now that we have it.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
just defining the necessary constants:
LD_B1B_MAX is 2^113 - 1 in base 10^9
KMAX is 2048 so the x array can hold up to 18432 decimal digits
(the worst case is converting 2^-16495 = 5^16495 * 10^-16495 to
binary, which requires processing int(log10(5)*16495)+1 = 11530
decimal digits after discarding the leading zeros; the conversion
requires some headroom in x, but KMAX is more than enough for that)
However this code is not optimal on archs with IEEE binary128
long double because the arithmetic is software emulated (on
all such platforms as far as i know), which means a big and slow
strtod.
|
|
|
|
|
| |
This trivial copy-paste bug went unnoticed due to lack of testing.
No currently supported target archs are affected.
|
|
|
|
| |
armhf fesetenv implementation did a useless read of the fpscr.
|
|
|
|
|
| |
mips fesetenv did not handle FE_DFL_ENV, now fcsr is cleared in that
case.
|
|
|
|
|
| |
The sign bit was not cleared before checking for 0 so -0.0
was misclassified as FP_SUBNORMAL instead of FP_ZERO.
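the fix in sketch form for a 64-bit representation (the affected code
operates on long double; is_zero_bits is an illustrative helper):

    #include <stdint.h>
    #include <string.h>

    /* clear the sign bit before testing for zero, so -0.0 is
     * classified as FP_ZERO rather than FP_SUBNORMAL */
    static int is_zero_bits(double x)
    {
        uint64_t u;
        memcpy(&u, &x, sizeof u);
        u &= (uint64_t)-1 >> 1;
        return u == 0;
    }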
|
| |
|
|
|
|
|
|
|
|
|
| |
all socket types are accepted at this point, but that may be changed
at a later time if the behavior is not meaningful for other types. as
before, omitting type (a value of 0) gives both UDP and TCP results,
and SOCK_DGRAM or SOCK_STREAM restricts to UDP or TCP, respectively.
for other socket types, the service name argument is required to be a
null pointer, and the protocol number provided by the caller is used.
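a usage sketch of the relaxed behavior (lookup_raw and the protocol
value are illustrative, not libc code):

    #include <netdb.h>
    #include <sys/socket.h>

    /* a non-UDP/TCP socket type is now accepted; the service name must
     * be a null pointer and the caller supplies the protocol number */
    static int lookup_raw(const char *host, struct addrinfo **res)
    {
        struct addrinfo hints = { .ai_socktype = SOCK_RAW,
                                  .ai_protocol = 1 /* caller-chosen */ };
        return getaddrinfo(host, 0, &hints, res);
    }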
|
|
|
|
|
|
|
| |
x86_64 syscall.h defined some musl internal syscall names and made
them public. These defines were already moved to src/internal/syscall.h
(except for SYS_fadvise which is added now) so the cruft in x86_64
syscall.h is not needed.
|
|
|
|
|
|
|
| |
in the case where a non-symlink file was replaced by a symlink during
the fchmodat operation with AT_SYMLINK_NOFOLLOW, mode change on the
new symlink target was successfully suppressed, but the error was not
reported. instead, fchmodat simply returned 0.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
the specification for execvp itself is unclear as to whether
encountering a file that cannot be executed due to EACCES during the
PATH search is a mandatory error condition; however, XBD 8.3's
specification of the PATH environment variable clarifies that the
search continues until a file with "appropriate execution permissions"
is found.
since it seems undesirable/erroneous to report ENOENT rather than
EACCES when an early path element has a non-executable file and all
later path elements lack any file by the requested name, the new code
stores a flag indicating that EACCES was seen and sets errno back to
EACCES in this case.
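the resulting errno policy for the PATH search, sketched; the
candidate list is assumed to have been built elsewhere, and
search_path_sketch is an illustrative name:

    #include <errno.h>
    #include <unistd.h>

    static int search_path_sketch(char *const argv[], char *const envp[],
                                  char *const *candidates, int n)
    {
        int seen_eacces = 0;
        for (int i = 0; i < n; i++) {
            execve(candidates[i], argv, envp);   /* returns only on error */
            if (errno == EACCES) seen_eacces = 1;
            else if (errno != ENOENT && errno != ENOTDIR) return -1;
        }
        /* nothing executable was found; report EACCES if any candidate
         * was rejected for lack of permission, otherwise ENOENT */
        errno = seen_eacces ? EACCES : ENOENT;
        return -1;
    }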
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
in practice this was probably a non-issue, because the necessary
barrier almost certainly exists in kernel space -- implementing signal
delivery without such a barrier seems impossible -- but for the sake
of correctness, it should be done here too.
in principle, without a barrier, it is possible that the thread to be
cancelled does not see the store of its cancellation flag performed by
another thread. this affects both the case where the signal arrives
before entering the critical program counter range from __cp_begin to
__cp_end (in which case both the signal handler and the inline check
fail to see the value which was already stored) and the case where the
signal arrives during the critical range (in which case the signal
handler should be responsible for cancellation, but when it does not
see the cancellation flag, it assumes the signal is spurious and
refuses to act on it).
in the fix, the barrier is placed only in the signal handler, not in
the inline check at the beginning of the critical program counter
range. if the signal handler runs before the critical range is
entered, it will of course take no action, but its barrier will ensure
that the inline check subsequently sees the store. if on the other
hand the inline check runs first, it may miss seeing the store, but
the subsequent signal handler in the critical range will act upon the
cancellation request. this strategy avoids adding a memory barrier in
the common, non-cancellation code path.
|
|
|
|
| |
mxcs_mask should be mxcr_mask
|
|
|
|
|
| |
these are mandatory cancellation points per POSIX, so their omission
was a conformance bug.
|
|
|
|
|
|
| |
the definitions are generic for all kernel archs. exposure of these
macros now only occurs on the same feature test as for the function
accepting them, which is believed to be more correct.
|
|
|
|
|
|
| |
this typo did not result in erroneous setjmp code with at least
binutils 2.22, but fix it for clarity and for compatibility with
potentially stricter sh assemblers.
|
|
|
|
|
| |
the errno values are unused by the kernel and the macro definitions were
never exposed by glibc.
|
|
|
|
|
|
|
| |
based on a patch by Vadim Ushakov. in general overriding LC_ALL rather
than specific categories (here, LC_MESSAGES) is undesirable, but
LC_ALL is easier and in this case there is nothing else that depends
on the locale in this invocation of the compiler.
|
|
|
|
|
|
|
|
|
| |
when using /etc/shadow (rather than tcb) as its backend, getspnam_r
matched any username starting with the caller-provided string rather
than requiring an exact match. in practice this seems to have affected
only systems where one valid username is a prefix for another valid
username, and where the longer username appears first in the shadow
file.
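the matching fix, sketched (line_matches_user is an illustrative helper):

    #include <string.h>

    /* require the username field to end exactly at the ':' delimiter,
     * so "joe" no longer matches a line for "joesmith" */
    static int line_matches_user(const char *line, const char *name)
    {
        size_t l = strlen(name);
        return !strncmp(line, name, l) && line[l] == ':';
    }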
|