| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
| |
patch by Jens Gustedt. this fixes a bug reported by Nadav Har'El. the
underlying issue was that a left-shift by 16 bits after promotion of
unsigned short to int caused integer overflow. while some compilers
define this overflow case as "shifting into the sign bit", doing so
doesn't help; the sign bit then gets extended through the upper bits
in subsequent arithmetic as unsigned long long. this patch imposes a
promotion to unsigned prior to the shift, so that the result is
well-defined and matches the specified behavior.
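a minimal sketch of the issue and the fix, using hypothetical names rather
than the actual patched source:

    /* "hi" is promoted to (signed) int, so the old expression hi << 16
     * overflows when the top bit of hi is set, and the resulting negative
     * value then sign-extends when widened to unsigned long long;
     * converting to unsigned before the shift is well-defined */
    unsigned long long combine(unsigned short hi, unsigned short lo)
    {
        return (unsigned)hi << 16 | lo;   /* was: hi << 16 | lo */
    }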
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
commit 5345c9b884e7c4e73eb2c8bb83b8d0df20f95afb added a linked list to
track the FILE streams currently locked (via flockfile) by a thread.
due to a failure to fully link newly added members, removal from the
list could leave behind references which could later result in writes
to already-freed memory and possibly other memory corruption.
implicit stdio locking was unaffected; the list is only used in
conjunction with explicit flockfile locking.
this bug was not present in any releases; it was introduced and fixed
during the same release cycle.
patch by Timo Teräs, who discovered and tracked down the bug.
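as a general illustration (hypothetical names, not the stdio_locks code
itself): inserting into a doubly linked list requires linking both
directions, otherwise a later removal can leave stale pointers behind.

    struct node { struct node *prev, *next; };

    /* insert n at the head; the existing head's prev pointer must also
     * be updated, or unlinking n later leaves a dangling reference */
    static void list_push(struct node **head, struct node *n)
    {
        n->prev = 0;
        n->next = *head;
        if (*head) (*head)->prev = n;
        *head = n;
    }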
|
|
|
|
|
| |
This was not caught earlier because gcc incorrectly generates quiet
relational operators that never raise exceptions.
|
|
|
|
|
|
|
|
| |
incorrect behavior occurred only in cases where the input overflows
unsigned long long, not just the (possibly lower) range limit for the
result type. in this case, processing of the '-' sign character was
not suppressed, and the function returned a value of 1 despite setting
errno to ERANGE.
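for illustration (usage only, not the internal scanning code), the behavior
required by the C standard's range-error rules looks like:

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        errno = 0;
        /* the magnitude overflows unsigned long long: the result must
         * clamp to ULLONG_MAX with ERANGE, not return the negated
         * clamp (1) as the bug described above did */
        unsigned long long v = strtoull("-99999999999999999999", 0, 10);
        printf("%llu %d\n", v, errno == ERANGE);  /* expect ULLONG_MAX 1 */
        return 0;
    }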
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The new code is a bit simpler and the generated code is about 1KB
smaller (on i386). The basic design was kept, including internal
interfaces; TNFA generation was not touched.
The old tre parser had various issues:
[^aa-z]
negated overlapping ranges in a bracket expression were handled
incorrectly (eg [^aa-z] was handled as [^a] instead of [^a-z])
a{,2}
missing lower bound in a counted repetition should be an error,
but it was accepted with broken semantics: a{,2} was treated as
a{0,3}; the new parser rejects it
a{999,}
large min count was not rejected (a{5000,} failed with REG_ESPACE
due to reaching a stack limit); the new parser enforces the
RE_DUP_MAX limit
\xff
regcomp used to accept patterns with illegal sequences in them
(it treated them as empty expressions, so p\xffq matched pq); the
new parser rejects such patterns with REG_BADPAT or REG_ERANGE
[^b-fD-H] with REG_ICASE
old parser turned this into [^b-fB-F] because of the negated
overlapping range issue (see above); the new parser treats it
as [^b-hB-H]. POSIX seems to require [^d-fD-F], but practical
implementations do case-folding first and negate the character
set later, instead of the other way around. (Supporting the posix
way efficiently would require significant changes, so it was left
as is; it is unclear whether any application actually expects the
posix behaviour. This issue is raised on the austingroup tracker:
http://austingroupbugs.net/view.php?id=872 ).
another case-insensitive matching issue is that unicode case
folding rules can group more than two characters together, while
towupper and towlower can only work on a pair of upper and
lower case characters; this is a limitation of POSIX, so it is
not fixed.
invalid bracket and brace expressions may return different error
codes now (REG_ERANGE instead of REG_EBRACK, or REG_BADBR instead
of REG_EBRACE); otherwise the new parser should be compatible with
the old one.
regcomp should be able to handle arbitrary pattern input as long as
the pattern length is limited; the only exception is the use of large
repetition counts (eg. (a{255}){255}), which require an exponential
amount of memory, and there is no easy workaround.
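a quick way to observe the stricter parsing (illustrative test; as noted
above, the exact error code may differ):

    #include <regex.h>
    #include <stdio.h>

    int main(void)
    {
        regex_t re;
        char msg[256];
        /* missing lower bound in a counted repetition is now an error */
        int r = regcomp(&re, "a{,2}", REG_EXTENDED);
        if (r) {
            regerror(r, &re, msg, sizeof msg);
            printf("a{,2}: %s\n", msg);
        } else {
            regfree(&re);
        }
        return 0;
    }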
|
|
|
|
|
|
| |
the C11 _Alignas keyword is not present in C++, and despite it being
in the reserved namespace and thus reasonable to support even in
non-C11 modes, compilers seem to fail to support it.
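a common workaround in headers shared between C11 and C++ is a thin macro
(sketch; ALIGNED is a hypothetical name, not something musl defines):

    #ifdef __cplusplus
    #define ALIGNED(n) alignas(n)   /* C++11 keyword */
    #else
    #define ALIGNED(n) _Alignas(n)  /* C11 keyword */
    #endif

    typedef struct { ALIGNED(8) long long v; } aligned_ll;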
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
as a result of commit ab8f6a6e42ff893041f7545a23e6d6a0edde07fb, this
definition is now equivalent to the actual "default profile" which
appears immediately below in features.h, and which defines both
_BSD_SOURCE and _XOPEN_SOURCE.
the intent of providing a _DEFAULT_SOURCE, which glibc also now
provides, is to give applications a way to "get back" the default
feature profile when it was lost either by compiler flags that inhibit
it (such as -std=c99) or by library-provided predefined macros (such
as -D_POSIX_C_SOURCE=200809L) which may inhibit exposure of features
that were otherwise visible by default and which the application may
need. without _DEFAULT_SOURCE, the application had to encode knowledge of
a particular libc's defaults, and such knowledge was fragile and
subject to bitrot.
eventually the names _GNU_SOURCE and _BSD_SOURCE should be phased out
in favor of the more-descriptive and more-accurate _ALL_SOURCE and
_DEFAULT_SOURCE, leaving the old names as aliases but using the new
ones internally. however this is a more invasive change that would
require extensive regression testing, so it is deferred.
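for example, an application built with -std=c99 (which suppresses the
default profile) can now request it back explicitly instead of hard-coding
one libc's defaults (hedged example; strsep is just one interface from the
BSD portion of the default profile):

    /* built with: cc -std=c99 ... */
    #define _DEFAULT_SOURCE   /* must precede any #include */
    #include <string.h>

    char *next_token(char **sp)
    {
        return strsep(sp, " \t");
    }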
|
|
|
|
|
| |
this could be an error if _GNU_SOURCE was already defined differently
by the application.
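the conventional way to avoid such a redefinition conflict is to guard the
definition (sketch of the general technique, not necessarily the exact fix
applied here):

    /* only define it if the application has not already done so,
     * possibly with a different expansion */
    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE
    #endif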
|
|
|
|
|
|
|
|
|
|
|
| |
the vast majority of these failures seem to have been oversights at
the time _BSD_SOURCE was added, or perhaps shortly afterward. the one
which may have had some reason behind it is omission of setpgrp from
the _BSD_SOURCE feature profile, since the standard setpgrp interface
conflicts with a legacy (pre-POSIX) BSD interface by the same name.
however, such omission is not aligned with our general policy in this
area (for example, handling of similar _GNU_SOURCE cases) and should
not be preserved.
|
|
|
|
|
|
| |
the previous commit was a no-op in exp10l because LDBL_* macros
were implicitly 0 (the preprocessor does not warn about undefined
symbols).
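in other words (sketch):

    /* LDBL_MANT_DIG is not defined here (no #include <float.h>), so the
     * preprocessor silently evaluates it as 0 and the block is skipped */
    #if LDBL_MANT_DIG == 53
    /* ... intended code, never compiled ... */
    #endif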
|
|
|
|
|
|
| |
__polevll, __p1evll and exp10l were provided on archs where long double
is the same as double. The first two were completely unused and exp10l
can be a wrapper around exp10.
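a sketch of such a wrapper, guarded for archs where the two formats
coincide (exp10 is a GNU extension, so _GNU_SOURCE is assumed):

    #define _GNU_SOURCE
    #include <float.h>
    #include <math.h>

    #if LDBL_MANT_DIG == 53
    /* long double has the same representation as double here, so the
     * long double variant can simply defer to exp10 */
    long double exp10l(long double x)
    {
        return exp10(x);
    }
    #endif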
|
|
|
|
|
|
|
|
|
| |
open file description locks are inherited across fork and are only
dropped automatically after the last fd of the file description is
closed; they can be used to synchronize between threads that open
separate file descriptions for the same file.
new in linux 3.15 commit 0d3f7a2dd2f5cf9642982515e020c1aee2cf7af6
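usage sketch (F_OFD_* needs linux >= 3.15; note l_pid must be 0 for these
locks):

    #define _GNU_SOURCE
    #include <fcntl.h>

    /* take a whole-file write lock tied to this open file description,
     * blocking until it becomes available */
    int lock_description(int fd)
    {
        struct flock fl = {
            .l_type   = F_WRLCK,
            .l_whence = SEEK_SET,
            .l_start  = 0,
            .l_len    = 0,   /* 0 = to end of file */
            .l_pid    = 0,   /* must be 0 for OFD locks */
        };
        return fcntl(fd, F_OFD_SETLKW, &fl);
    }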
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
based on patch by Jens Gustedt.
the main difficulty here is handling the difference between start
function signatures and thread return types for C11 threads versus
POSIX threads. pointers to void are assumed to be able to represent
faithfully all values of int. the function pointer for the thread
start function is cast to an incorrect type for passing through
pthread_create, but is cast back to its correct type before calling so
that the behavior of the call is well-defined.
changes to the existing threads implementation were kept minimal to
reduce the risk of regressions, and duplication of code that carries
implementation-specific assumptions was avoided for ease and safety of
future maintenance.
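a minimal sketch of the cast-through-and-back idea (hypothetical helper
names, not musl's internals):

    #include <stdint.h>
    #include <threads.h>

    typedef void *(*pthread_start_t)(void *);

    /* the C11 start function travels through pthread_create's machinery
     * stored as a pthread_start_t, and is cast back to its real type
     * before being called, so the call itself is well-defined; the int
     * result is then carried in a void *, which is assumed to represent
     * all int values faithfully */
    static void *run_c11_start(pthread_start_t stored, void *arg)
    {
        thrd_start_t f = (thrd_start_t)stored;
        return (void *)(intptr_t)f(arg);
    }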
|
|
|
|
|
| |
Because of the clear separation for private pthread_cond_t these
interfaces are quite simple and direct.
|
| |
|
|
|
|
|
|
| |
These all have POSIX equivalents, but aside from tss_get, they all
have minor changes to the signature or return value and thus need to
exist as separate functions.
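tss_set, for instance, is little more than a return-value translation
(sketch with a hypothetical name; assumes tss_t and pthread_key_t share a
representation):

    #include <pthread.h>
    #include <threads.h>

    int my_tss_set(tss_t key, void *val)
    {
        /* POSIX reports 0/errno; C11 reports thrd_success/thrd_error */
        return pthread_setspecific(key, val) ? thrd_error : thrd_success;
    }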
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
based on patch by Jens Gustedt.
mtx_t and cnd_t are defined in such a way that they are formally
"compatible types" with pthread_mutex_t and pthread_cond_t,
respectively, when accessed from a different translation unit. this
makes it possible to implement the C11 functions using the pthread
functions (which will dereference them with the pthread types) without
having to use the same types, which would necessitate either namespace
violations (exposing pthread type names in threads.h) or incompatible
changes to the C++ name mangling ABI for the pthread types.
for the rest of the types, things are much simpler; using identical
types is possible without any namespace considerations.
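an illustration of the trick (member names and sizes here are made up and
arch-dependent; only the shape matters): two untagged structure types
declared in separate translation units are compatible when their members
correspond, so the pthread functions may safely dereference an mtx_t
declared like this:

    /* translation unit A: as declared by pthread.h */
    typedef struct { union { int __i[6]; volatile void *volatile __p[6]; } __u; } pthread_mutex_t;

    /* translation unit B: as declared by threads.h, exposing no pthread
     * names; identical members make the two types compatible across TUs */
    typedef struct { union { int __i[6]; volatile void *volatile __p[6]; } __u; } mtx_t;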
|
|
|
|
|
|
|
|
|
|
| |
The intent of this is to avoid name space pollution of the C threads
implementation.
This has two sides to it. First we have to provide symbols that wouldn't
pollute the name space for the C threads implementation. Second we have
to clean up some internal uses of POSIX functions such that they don't
implicitly drag in such symbols.
|
|
|
|
|
|
| |
based on patch by Jens Gustedt for inclusion with C11 threads
implementation, but committed separately since it's independent of
threads.
|
| |
|
|
|
|
|
|
| |
there is no blksize64_t (blksize_t is always long) but there are
fsblkcnt64_t and fsfilcnt64_t types in sys/stat.h and sys/types.h.
and glob.h missed glob64_t.
|
|
|
|
|
|
| |
versionsort64, aio*64 and lio*64 symbols were missing; they are
only needed for glibc ABI compatibility, since on the source level
dirent.h and aio.h already redirect them.
|
| |
|
|
|
|
|
|
| |
this error resulted in an out-of-bounds read, as opposed to a reported
error, when calling the function with an argument one greater than the
max valid index.
|
|
|
|
|
|
|
|
|
| |
if the loop stopped due to reaching the end of the string, the
subsequent increment could possibly move the position one past the end
of the buffer. no further writes happen, the reads cannot fault anyway
unless the stack completely lacks any zero bytes, and reading junk
should not yield an incorrect result from the function either.
nonetheless the code was wrong and needs to be fixed.
|
|
|
|
|
|
| |
the condition was probably intended to be !*p rather than !p, but
neither is needed here. the subsequent code naturally handles the case
where it's already at end of string.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
U+00DF ('ß') has had an uppercase form (U+1E9E) available since
Unicode 5.1, but Unicode lacks the case mappings for it due to
stability policy. when I added support for the new character in commit
1a63a9fc30e7a1f1239e3cedcb5041e5ec1c5351, I omitted the mapping in the
lowercase-to-uppercase direction. this choice was not based on any
actual information, only assumptions.
this commit adds bidirectional case mappings between U+00DF and
U+1E9E, and removes the special-case hack that allowed U+00DF to be
identified as lowercase despite lacking a mapping. aside from strong
evidence that this is the "right" behavior for real-world usage of
these characters, several factors informed this decision:
- the other "potentially correct" mapping, to "SS", is not
representable in the C case-mapping system anyway.
- leaving one letter in lowercase form when transforming a string to
uppercase is obviously wrong.
- having a character which is nominally lowercase but which is fixed
under case mapping violates reasonable invariants.
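the new mappings can be checked directly (illustrative; expected values
are those introduced by this commit):

    #include <stdio.h>
    #include <wctype.h>

    int main(void)
    {
        printf("%x %x %d\n",
               (unsigned)towupper(0xdf),    /* expect 1e9e */
               (unsigned)towlower(0x1e9e),  /* expect df */
               iswlower(0xdf) != 0);        /* expect 1 */
        return 0;
    }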
|
|
|
|
|
|
|
| |
per POSIX these functions are both cancellation points, so they must
act on any cancellation request which is pending prior to the call.
previously, only the code path where actual waiting took place could
act on cancellation.
|
|
|
|
|
|
|
|
|
|
|
| |
the outer getnameinfo function already has a properly-sized temporary
buffer for storing the reverse dns (ptr) result. there is no reason
for the callback to use a secondary buffer and copy it on success, and
doing so potentially expanded the impact of the dn_expand bug that was
fixed in commit 49d2c8c6bcf8c926e52c7f510033b6adc31355f5.
this change reduces the code size by a small amount, and also reduces
the run-time stack space requirements by about 256 bytes.
|
|
|
|
|
|
|
|
|
|
|
| |
previously, fgets, fputs, fread, and fwrite completely omitted locking
and access to the FILE object when their arguments yielded a zero
length read or write operation independent of the FILE state. this
optimization was invalid; it wrongly skipped marking the stream as
byte-oriented (a C conformance bug) and exposed observably missing
synchronization (a POSIX conformance bug) where one of these functions
could wrongly complete despite another thread provably holding the
lock.
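a small demonstration of the C conformance point (hedged; it only checks
the orientation rule described above):

    #include <stdio.h>
    #include <wchar.h>

    void demo(FILE *f)
    {
        /* a zero-length write is still a byte I/O operation, so it must
         * take the lock and mark the stream byte-oriented */
        fwrite("", 1, 0, f);
        if (fwide(f, 0) < 0)
            puts("stream is byte-oriented");
    }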
|
|
|
|
|
|
|
|
|
| |
the C standard requires that "the contents of the array remain
unchanged" in this case.
this patch also changes the behavior on read errors, but in that case
"the array contents are indeterminate", so the application cannot
inspect them anyway.
|
|
|
|
|
|
|
|
|
|
| |
An empty name has been rejected by dn_expand since commit
56b57f37a46dab432247bf29d96fcb11fbd02a6d,
which is a regression, as reported by Natanael Copa.
Furthermore, if an offset pointer in a compressed name
pointed to a terminating 0 byte (instead of a label),
the returned name was not null terminated.
|
|
|
|
|
| |
add static_assert and protect the other new C11 keyword macros
with #ifndef __cplusplus so they don't conflict with C++ keywords.
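the guarded definitions follow the usual pattern (sketch):

    /* assert.h */
    #if __STDC_VERSION__ >= 201112L && !defined(__cplusplus)
    #define static_assert _Static_assert
    #endif

    /* stdalign.h */
    #ifndef __cplusplus
    #define alignas _Alignas
    #define alignof _Alignof
    #endif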
|
|
|
|
| |
C11 introduced *_DECIMAL_DIG and *_HAS_SUBNORM macros.
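for the IEEE binary32/binary64 formats these work out to the following
values (the long double values vary by arch):

    #define FLT_DECIMAL_DIG 9
    #define DBL_DECIMAL_DIG 17
    #define FLT_HAS_SUBNORM 1
    #define DBL_HAS_SUBNORM 1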
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
this function is needed for some important practical applications of
ABI compatibility, and may be useful for supporting some non-portable
software at the source level too.
I was hesitant to add a function which imposes any constraints on
malloc internals; however, it turns out that any malloc implementation
which has realloc must already have an efficient way to determine the
size of existing allocations, so no additional constraint is imposed.
for now, some internal malloc definitions are duplicated in the new
source file. if/when malloc is refactored to put them in a shared
internal header file, these could be removed.
since malloc_usable_size is conventionally declared in malloc.h, the
empty stub version of this file was no longer suitable. it's updated
to provide the standard allocator functions, nonstandard ones (even if
stdlib.h would not expose them based on the feature test macros in
effect), and any malloc-extension functions provided (currently, only
malloc_usable_size).
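usage sketch: the usable size is at least the requested size and may be
larger due to allocator rounding, and the extra space may be used without
reallocating.

    #include <malloc.h>
    #include <stdlib.h>
    #include <string.h>

    void demo(void)
    {
        char *p = malloc(30);
        if (!p) return;
        size_t n = malloc_usable_size(p);  /* n >= 30 */
        memset(p, 0, n);                   /* entire usable region is valid */
        free(p);
    }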
|
|
|
|
|
|
|
|
|
|
| |
if there is already a waiter for a lock, spinning on the lock is
essentially an attempt to steal it from whichever waiter would obtain
it via any priority rules in place, and is therefore undesirable. in
the current implementation, there is always an inherent race window at
unlock during which a newly-arriving thread may steal the lock from
the existing waiters, but we should aim to keep this window minimal
rather than enlarging it.
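illustrative only (not the actual lock code): the spin loop gives up as
soon as another thread is already queued as a waiter.

    static inline void cpu_relax(void) { __asm__ __volatile__ ("" ::: "memory"); }

    static void lock_slow(volatile int *lock, volatile int *waiters)
    {
        int spins = 100;
        /* spin briefly, but never while someone else is already waiting,
         * so the lock is not stolen from a queued waiter */
        while (spins-- > 0 && *lock && !*waiters) cpu_relax();
        /* ... otherwise increment *waiters and futex-wait ... */
    }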
|
| |
|
|
|
|
|
|
|
|
|
| |
empirically, this increases the maximum rate of wait/post operations
between two threads by 20-150 times on machines I tested, including
x86 and arm. conceptually, it makes sense to do some spinning because
semaphores are intended to be usable as a notification mechanism
between threads, not just as locks, and low-latency notification is a
valuable property to have.
|
|
|
|
| |
this was broken by commit ea818ea8340c13742a4f41e6077f732291aea4bc.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
the previous spin limit of 10000 was utterly unreasonable.
empirically, it could consume up to 200000 cycles, whereas a failed
futex wait (EAGAIN) typically takes 1000 cycles or less, and even a
true wait/wake round seems much less expensive.
the new counts (100 for general wait, 200 in barrier) were simply
chosen to be in the range of what's reasonable without having adverse
effects on casual micro-benchmark tests I have been running. they may
still be too high, from a standpoint of not wasting cpu cycles, but at
least they're a lot better than before. rigorous testing across
different archs and cpu models should be performed at some point to
determine whether further adjustments should be made.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
conceptually, a_spin needs to be at least a compiler barrier, so the
compiler will not optimize out loops (and the load on each iteration)
while spinning. it should also be a memory barrier, or the spinning
thread might keep spinning without noticing stores from other threads,
thus delaying for longer than it should.
ideally, an optimal a_spin implementation that avoids unnecessary
cache/memory contention should be chosen for each arch, but for now,
the easiest thing is to perform a useless a_cas on the calling
thread's stack.
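a sketch of that fallback, using a gcc/clang builtin as a stand-in for the
internal a_cas:

    static inline void a_spin(void)
    {
        volatile int dummy = 0;
        /* a useless compare-and-swap on the caller's own stack acts as a
         * full memory barrier without touching any shared cache line */
        __sync_val_compare_and_swap(&dummy, 0, 0);
    }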
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
this is analogous to commit fffc5cda10e0c5c910b40f7be0d4fa4e15bb3f48,
which fixed the corresponding issue for mutexes.
the robust list can't be used here because the locks do not share a
common layout with mutexes. at some point it may make sense to simply
incorporate a mutex object into the FILE structure and use it, but
that would be a much more invasive change, and it doesn't mesh well
with the current design that uses a simpler code path for internal
locking and pulls in the recursive-mutex-like code when the flockfile
API is used explicitly.
|
|
|
|
| |
for unknown syscall commands, the kernel produces ENOSYS, not EINVAL.
|
|
|
|
|
|
|
|
|
|
| |
the subsequent code in pthread_create and the code which copies TLS
initialization images to the new thread's TLS space assume that the
memory provided to them is zero-initialized, which is true when it's
obtained by pthread_create using mmap. however, when the caller
provides a stack using pthread_attr_setstack, pthread_create cannot
make any assumptions about the contents. simply zero-filling the
relevant memory in this case is the simplest and safest fix.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
unfortunately this needs to be able to vary by arch, because of a huge
mess GCC made: the GCC definition, which became the ABI, depends on
quirks in GCC's definition of __alignof__, which does not match the
formal alignment of the type.
GCC's __alignof__ unexpectedly exposes an implementation detail,
its "preferred alignment" for the type, rather than the formal/ABI
alignment of the type, which it only actually uses in structures. on
most archs the two values are the same, but on some (at least i386)
the preferred alignment is greater than the ABI alignment.
I considered using _Alignas(8) unconditionally, but on at least one
arch (or1k), the alignment of max_align_t with GCC's definition is
only 4 (even the "preferred alignment" for these types is only 4).
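an illustrative shape of the i386-style definition discussed above (not
necessarily the exact per-arch definition used):

    /* _Alignas(8) lifts the member to GCC's "preferred" 8-byte alignment
     * even though the formal ABI alignment of long long on i386 is 4 */
    typedef struct {
        _Alignas(8) long long __ll;
        long double __ld;
    } max_align_t;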
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
the main idea of the changes made is to have waiters wait directly on
the "barrier" lock that was used to prevent them from making forward
progress too early rather than first waiting on the atomic state value
and then attempting to lock the barrier.
in addition, adjustments to the mutex waiter count are optimized.
previously, each waking waiter decremented the count (unless it was
the first) then immediately incremented it again for the next waiter
(unless it was the last). this was a roundabout way of achieving the
equivalent of incrementing it once for the first waiter and
decrementing it once for the last.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
previously, wake order could be unpredictable: if a waiter happened to
leave its futex wait on the state early, e.g. due to EAGAIN while
restarting after a signal handler, it could acquire the mutex out of
turn. handling this required ugly O(n) list walking in the unwait
function and accounting to remove waiters that already woke from the
list.
with the new changes, the "barrier" locks in each waiter node are only
unlocked in turn. in addition to simplifying the code, this seems to
improve performance slightly, probably by reducing the number of
accesses threads make to each other's stacks.
as an additional benefit, unrecoverable mutex re-locking errors
(mainly ENOTRECOVERABLE for robust mutexes) no longer need to be
handled with deadlock; they can be reported to the caller, since the
unlocking sequence makes it unnecessary to rely on the mutex to
synchronize access to the waiter list.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
the immediate issue that was reported by Jens Gustedt and needed to be
fixed was corruption of the cv/mutex waiter states when switching to
using a new mutex with the cv after all waiters were unblocked but
before they finished returning from the wait function.
self-synchronized destruction was also handled poorly and may have had
race conditions. and the use of sequence numbers for waking waiters
admitted a theoretical missed-wakeup if the sequence number wrapped
through the full 32-bit space.
the new implementation is largely documented in the comments in the
source. the basic principle is to use linked lists initially attached
to the cv object, but detachable on signal/broadcast, made up of nodes
residing in automatic storage (stack) on the threads that are waiting.
this eliminates the need for waiters to access the cv object after
they are signaled, and allows us to limit wakeup to one waiter at a
time during broadcasts even when futex requeue cannot be used.
performance is also greatly improved, roughly doubling in some tests.
basically nothing is changed in the process-shared cond var case,
where this implementation does not work, since processes do not have
access to one another's local storage.
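the shape of the per-waiter node described above, simplified (the real
structure carries additional fields):

    /* each waiter places one of these in its own automatic storage and
     * links it into a list hanging off the cv, which signal/broadcast
     * can detach wholesale */
    struct waiter {
        struct waiter *prev, *next;
        volatile int state;    /* waiting / signaled / leaving */
        volatile int barrier;  /* lock each waiter blocks on until its turn */
    };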
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
when the kernel is responsible for waking waiters on a robust mutex
whose owner died, it does not have a waiters count available and must
rely entirely on the waiter bit of the lock value.
normally, this bit is only set by newly arriving waiters, so it will
be clear if no new waiters arrived after the current owner obtained
the lock, even if there are other waiters present. leaving it clear is
desirable because it allows timed-lock operations to remove themselves
as waiters and avoid causing unnecessary futex wake syscalls. however,
for process-shared robust mutexes, we need to set the bit whenever
there are existing waiters so that the kernel will know to wake them.
for non-process-shared robust mutexes, the wake happens in userspace
and can look at the waiters count, so the bit does not need to be set
in the non-process-shared case.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
when manipulating the robust list, the order of stores matters,
because the code may be asynchronously interrupted by a fatal signal
and the kernel will then access the robust list in what is essentially
an async-signal context.
previously, aliasing considerations made it seem unlikely that a
compiler could reorder the stores, but proving that they could not be
reordered incorrectly would have been extremely difficult. instead
I've opted to make all the pointers used as part of the robust list,
including those in the robust list head and in the individual mutexes,
volatile.
in addition, the format of the robust list has been changed to point
back to the head at the end, rather than ending with a null pointer.
this is to match the documented kernel robust list ABI. the null
pointer, which was previously used, only worked because faults during
access terminate the robust list processing.
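for reference, the kernel ABI in question (as documented in the kernel's
futex.h); the list is circular, with the final element's next pointer
referring back to &head.list rather than being null:

    struct robust_list {
        struct robust_list *next;
    };
    struct robust_list_head {
        struct robust_list list;
        long futex_offset;
        struct robust_list *list_op_pending;
    };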
|