unify the code paths for allocated and non-allocated locale objects,
always using a tmp object. this is necessary to avoid clobbering the
base locale object too soon if we allow for the possibility that
looking up an explicitly requested locale name may fail, and makes the
code simpler and cleaner anyway.
eliminate the complex and fragile logic for checking whether one of
the non-allocated locale objects can be used for the result, and
instead just memcmp against each of them.
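
a minimal sketch of the resulting shape (struct layout and object names are illustrative, not the actual internals; the real builtin objects differ in content):

    #include <stdlib.h>
    #include <string.h>

    #define NCAT 6
    struct loc { const void *cat[NCAT]; };
    static const struct loc c_locale;          /* builtin C locale */
    static const struct loc c_dot_utf8_locale; /* builtin C.UTF-8 locale */

    static const struct loc *finish_locale(struct loc *tmp)
    {
            /* the result was built in *tmp, so a failed name lookup
             * never clobbered a caller-visible locale object. instead
             * of fragile logic predicting whether a builtin object is
             * usable, just memcmp the finished tmp against each. */
            if (!memcmp(tmp, &c_locale, sizeof *tmp))
                    return &c_locale;
            if (!memcmp(tmp, &c_dot_utf8_locale, sizeof *tmp))
                    return &c_dot_utf8_locale;
            struct loc *new = malloc(sizeof *new);
            if (new) *new = *tmp;
            return new;
    }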
commit 63c188ec42e76ff768e81f6b65b11c68fc43351e missed making this
change when switching from atomics to locking for modification of the
global locale, leaving access to locale structures unnecessarily
burdened with the restrictions of volatile.
the volatile qualification was originally added in commit
56fbaa3bbe73f12af2bfbbcf2adb196e6f9fe264.
introduce a new LOC_MAP_FAILED sentinel for errors, since null
pointers for a category's locale map indicate the C locale. at this
time, __get_locale does not fail, so there should be no functional
change by this commit.
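
a hedged sketch of the distinction (the real definitions live in locale_impl.h and may differ):

    struct locale_map;
    #define LOC_MAP_FAILED ((const struct locale_map *)-1)

    int apply_category(const struct locale_map *lm)
    {
            if (lm == LOC_MAP_FAILED) return -1; /* lookup error */
            if (!lm) { /* null means the C locale, not an error */ }
            return 0;
    }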
the choice of signed char for lbf was a theoretically space-saving
hack that saved nothing in practice and was needlessly expensive. while
comparing bytes against a byte-sized member sounds easy, the trick
here was that the byte to be compared was unsigned while the lbf
member was signed, making it possible to set lbf negative to disable
line buffering. however, this imposed a requirement to promote both
operands, zero-extending one and sign-extending the other, in order to
compare them.
to fix this, repurpose the waiters count slot (unused since commit
c21f750727515602a9e84f2a190ee8a0a2aeb2a1). while we're at it, switch
mode (orientation) from signed char to int as well. this makes no
semantic difference (its only possible values are -1, 0, and 1) but it
might help on archs where byte access is awkward.
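
roughly the cost being described, with illustrative struct shapes:

    struct f_old { signed char lbf; };
    struct f_new { int lbf; };  /* repurposed waiters-count slot */

    int check_old(struct f_old *f, int c)
    {
            /* usual arithmetic conversions: (unsigned char)c must be
             * zero-extended and f->lbf sign-extended to int before
             * the compare -- two extensions just to test one byte */
            return (unsigned char)c == f->lbf;
    }

    int check_new(struct f_new *f, int c)
    {
            /* lbf is already int; setting it to -1 still disables
             * line buffering, since no unsigned char value equals -1 */
            return (unsigned char)c == f->lbf;
    }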
to check whether flush due to line buffering is needed, the int-type
character argument must be truncated to unsigned char for comparison.
if the original value is subsequently passed to __overflow, it must be
preserved, adding to register pressure. since __overflow truncates its
argument to unsigned char anyway, the original value doesn't matter;
truncate at all uses so it is no longer live.
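
a sketch of the shape after the change (helper names are hypothetical):

    int f_overflow(void *f, unsigned char b);
    int f_lbf(void *f);
    int f_can_write(void *f);
    int f_write(void *f, unsigned char b);

    int do_putc(int c, void *f)
    {
            unsigned char b = c;  /* truncate once; c is dead from here */
            if (b != f_lbf(f) && f_can_write(f)) return f_write(f, b);
            return f_overflow(f, b);  /* only the low byte matters */
    }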
the internal putc_unlocked macro was wrongly returning a meaningless
boolean result rather than the written character or EOF.
the bug was found by reading the (very surprising) generated asm.
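
an illustration of the bug class (not the literal macro; helper names are made up):

    /* wrong: the whole expression evaluates to the && result, a
     * meaningless boolean, not the stored character */
    #define PUTC_BAD(c, f)  (can_write(f) && store(f, c))

    /* right: each arm of ?: yields the written character or EOF */
    #define PUTC_OK(c, f)   (can_write(f) ? store(f, c) : over(f, c))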
check whether the lock is free before loading the calling thread's
tid. if so, just use a dummy tid value that cannot compare equal to
any actual thread id (because it's one bit wider). this also avoids
the need to save the tid and pass it to locking_getc or locking_putc,
reducing register pressure.
this change might slightly hurt the case where the caller already
holds the lock, but it does not affect the single-threaded case, and
may significantly improve the multi-threaded case, especially on archs
where loading the thread pointer is disproportionately expensive like
early mips and arm ISA levels. but even on i386 it helps, at least on
some machines; I measured roughly a 10-15% improvement.
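
a sketch of the described fast path (field, helper, and constant names are illustrative):

    struct file { volatile int lock; };
    #define DUMMY_OWNER 0x40000000   /* one bit wider than any tid */
    int fast_getc(struct file *);
    int locking_getc(struct file *); /* CASes DUMMY_OWNER into ->lock */
    int self_tid(void);              /* thread-pointer load: avoid it */

    int do_getc(struct file *f)
    {
            int owner = f->lock;
            if (owner < 0) return fast_getc(f);   /* locking disabled */
            if (owner && owner == self_tid())     /* we hold the lock */
                    return fast_getc(f);
            /* lock free (or contended): no tid load needed, since
             * locking_getc takes the lock with DUMMY_OWNER, which can
             * never compare equal to a real tid above */
            return locking_getc(f);
    }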
this is not needed for correctness, but doesn't hurt, and in some
cases the compiler may pessimize the call assuming the callee might be
variadic when it lacks a prototype.
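
the distinction, in miniature:

    void f();      /* no prototype: callee might be variadic, so the
                    * caller may be forced to use the variadic ABI */
    void g(int);   /* prototyped: an ordinary, potentially cheaper call */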
commit 4390383b32250a941ec616e8bff6f568a801b1c0 inadvertently used "r"
instead of "0" for the input constraint, which only happened to work
for the configuration I tested it on because it usually makes sense
for the compiler to choose the same input and output register.
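
for reference, the difference between the constraints:

    static long roundtrip(long v)
    {
            long r;
            /* "0" ties the input to the same register as output %0;
             * "r" lets the compiler put the input anywhere, which
             * only coincidentally matched on the tested config */
            __asm__ ("" : "=r"(r) : "0"(v));
            return r;
    }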
commit d664061adb4d7f6647ab2059bc351daa394bf5da inadvertently omitted
the new file putc.h.
by ABI, the public stdin/out/err macros use extern pointer objects,
and this is necessary to avoid copy relocations that would be
expensive and make the size of the FILE structure part of the ABI.
however, internally it makes sense to access the underlying FILE
objects directly. this avoids both an indirection through the GOT to
find the address of the stdin/out/err pointer objects (which can't be
computed PC-relative because they may have been moved to the main
program by copy relocations) and an indirection through the resulting
pointer object.
in most places this is just a minor optimization, but in the case of
getchar and putchar (and the unlocked versions thereof), ipa constant
propagation makes all accesses to members of stdin/out PC-relative or
GOT-relative, possibly reducing register pressure as well.
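
a sketch of the two access paths (names are illustrative, not the actual internal symbols):

    typedef struct iofile iofile;

    /* public ABI: an extern pointer object; PIC must find its address
     * via the GOT, then load the pointer stored there */
    extern iofile *const stdin_ptr;

    /* internal: name the object itself (hidden in the real tree); its
     * address is a single PC-relative computation, with no load */
    extern iofile stdin_obj;
    #define STDIN (&stdin_obj)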
this is the analog of commit dd8f02b7dce53d6b1c4282439f1636a2d63bee01,
but for putc.
with these changes, in a program that has not created any threads
besides the main thread and that has not called f[try]lockfile, getc
performs indistinguishably from getc_unlocked. this was measured on
several i386 and x86_64 models, and should hold on other archs too
simply by the properties of the code generation.
the case where the caller already holds the lock (via flockfile) is
improved significantly as well (40-60% reduction in time on machines
tested) and the case where locking is needed is improved somewhat
(roughly 10%).
the key technique used here is forcing the non-hot path out-of-line
and enabling it to be a tail call. a static noinline function
(conditional on __GNUC__) is used rather than the extern hidden
functions used elsewhere for this purpose, so that the compiler can
choose non-default calling conventions, making it possible to
tail-call to a callee that takes more arguments than the caller on
archs where arguments are passed on the stack or must have space
reserved on the stack for spilling them. the tid could just be
reloaded via the thread
pointer in locking_getc, but that would be ridiculously expensive on
some archs where thread pointer load requires a trap or syscall.
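
the skeleton of the technique (heavily simplified; names are illustrative):

    struct file { int lock; };

    #ifdef __GNUC__
    __attribute__((__noinline__))
    #endif
    static int locking_path(struct file *f, int c)  /* extra arg */
    {
            /* cold path, forced out of line; being static, the
             * compiler may invent a calling convention that lets the
             * call below compile to a tail call */
            return c;
    }

    int hot_path(struct file *f, int c)
    {
            if (!f->lock) return c;      /* hot path stays minimal */
            return locking_path(f, c);   /* ideally a tail call */
    }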
on multiple occasions I've started to flatten/inline the code in
__init_libc, only to rediscover the reason it was not inlined: if it
gets inlined, GCC fails to deallocate its stack frame (and now, with
the changes in commit 4390383b32250a941ec616e8bff6f568a801b1c0, fails
to produce a tail call to the stage 2 function; see PR #87639) before
calling main.
document this with a comment and use an explicit noinline attribute if
__GNUC__ is defined so that even with CFLAGS that heavily favor
inlining it won't get inlined.
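
roughly the explicit form being described:

    #ifdef __GNUC__
    __attribute__((__noinline__))
    #endif
    void __init_libc(char **envp, char *pn)
    {
            /* heavy stack usage lives here; keeping it out of line
             * lets the frame be released before main runs */
    }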
this is the analog of commit 1c84c99913bf1cd47b866ed31e665848a0da84a2
for static linking. unlike with dynamic linking, we don't have
symbolic lookup to use as a barrier. use a dummy (target-agnostic)
degenerate inline asm fragment instead. this technique has precedent
in commit 05ac345f895098657cf44d419b5d572161ebaf43 where it's used for
explicit_bzero. if it proves problematic in any way, loading the
address of the stage 2 function from a pointer object whose address
leaks to kernelspace during thread pointer init could be used as an
even stronger barrier.
this will allow the compiler to cache and reuse the result, meaning we
no longer have to take care not to load it more than once for the sake
of archs where the load may be expensive.
depends on commit 1c84c99913bf1cd47b866ed31e665848a0da84a2 for
correctness, since otherwise the compiler could hoist loads during
stage 3 of dynamic linking before the initial thread-pointer setup.
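
an i386-flavored sketch: with the asm non-volatile and clobber-free, the compiler may CSE repeated uses instead of reloading:

    static inline char *thread_ptr(void)
    {
            char *self;
            __asm__ ("movl %%gs:0,%0" : "=r"(self));  /* not volatile */
            return self;
    }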
versions of clang all the way back to 3.1 lack the bug this was
purportedly working around.
revert commit a603a75a72bb469c6be4963ed1b55fabe675fe15.
as a result of commit 1c84c99913bf1cd47b866ed31e665848a0da84a2 this is
now safe, assuming an interpretation of the somewhat-underspecified
attribute((const)) consistent with real-world usage.
commit a603a75a72bb469c6be4963ed1b55fabe675fe15 removed attribute
const from __errno_location and pthread_self, and the same reasoning
forced arch definitions of __pthread_self to use volatile asm,
significantly impacting code generation and imposing manual caching of
pointers where the impact might be noticeable.
reorder the thread pointer setup and place it across a strong barrier
(symbolic function lookup) so that there is no assumed ordering
between the initialization and the accesses to the thread pointer in
stage 3.
don't repeat definition in two places.
the placement triggered -Wmisleading-indentation warnings if enabled,
and was gratuitously confusing to anyone reading the code.
fma is only available on recent x86_64 cpus, and it is much faster
than a software fma, so selection should be done with a runtime check.
however, that requires more changes; this patch just adds the code so
it can be tested when musl is compiled with -mfma or -mfma4.
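
roughly the shape of the added code (see src/math/x86_64/fma.c for the real file):

    #include <math.h>

    #if __FMA__
    double fma(double x, double y, double z)
    {
            __asm__ ("vfmadd132sd %1, %2, %0" : "+x" (x) : "x" (y), "x" (z));
            return x;
    }
    #endif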
vfma is available in the vfpv4 fpu and above. the ACLE standard
feature test for double precision hardware fma support is
  __ARM_FEATURE_FMA && __ARM_FP&8
we need further checks to work around clang bugs (fixed in clang >= 7.0):
  && !__SOFTFP__
because __ARM_FP is defined even with -mfloat-abi=soft, and
  && !BROKEN_VFP_ASM
to disable the single precision code when inline asm handling is
broken. for runtime selection the HWCAP_ARM_VFPv4 hwcap flag can be
used, but that requires further work.
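
the double precision case, as the checks above read in code form (a sketch; BROKEN_VFP_ASM additionally guards the single precision variant):

    #include <math.h>

    #if __ARM_FEATURE_FMA && (__ARM_FP & 8) && !__SOFTFP__
    double fma(double x, double y, double z)
    {
            __asm__ ("vfma.f64 %P0, %P1, %P2" : "+w" (z) : "w" (x), "w" (y));
            return z;
    }
    #endif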
These are only available on hard float targets, and sqrt is not
available in the base ISA, so a further check is used.
These are available in the s390x baseline ISA (-march=z900).
previously (before and after rewrite), spurious escaping of path
separators as \/ was not treated the same as /, but rather got split
as an unpaired \ at the end of the fnmatch pattern and an unescaped /,
resulting in a mismatch/error.
for the case of \/ as part of the maximal literal prefix, remove the
explicit rejection of it and move the handling of / below escape
processing.
for the case of \/ after a proper glob pattern, it's hard to parse the
pattern, so don't. instead cheat and count repetitions of \ prior to
the already-found / character. if there are an odd number, the last is
escaping the /, so back up the split position by one. now the
char clobbered by null termination is variable, so save it and restore
as needed.
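
a sketch of the backslash-counting trick:

    #include <stddef.h>

    static void split_component(char *pat, size_t start, size_t i)
    {
            /* pat[i] is the '/' already found; count the run of '\\'
             * immediately before it */
            size_t k = 0;
            while (i-k > start && pat[i-k-1] == '\\') k++;
            if (k % 2) i--;       /* odd: the last '\\' escapes the '/' */
            char save = pat[i];   /* clobbered char is now variable */
            pat[i] = 0;           /* temporary termination for fnmatch */
            /* ... fnmatch(pat+start, name, ...) ... */
            pat[i] = save;        /* restore */
    }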
this code has been long overdue for a rewrite, but the immediate cause
that necessitated it was total failure to see past unreadable path
components. for example, A/B/* would fail to match anything, even
though it should succeed, when both A and A/B are searchable but only
A/B is readable. this problem was both caught in conformance testing
and impacted users.
the old glob implementation insisted on searching the listing of each
path component for a match, even if the next component was a literal.
it also used considerable stack space, up to length of the pattern,
per recursion level, and relied on an artificial bound of the pattern
length by PATH_MAX, which was incorrect because a pattern can be much
longer than PATH_MAX while having matches shorter (for example, with
necessarily long bracket expressions, or with redundancy).
in the new implementation, each level of recursion starts by consuming
the maximal literal (possibly escaped-literal) path prefix remaining
in the pattern, and only opening a directory to read when there is a
proper glob pattern in the next path component. it then recurses into
each matching entry. the top-level glob function provides automatic
storage (up to PATH_MAX) for construction of candidate/result strings,
and allocates a duplicate of the pattern that can be modified in-place
with temporary null-termination to pass to fnmatch. this allocation is
not a big deal since glob already has to perform allocation, and has
to link free to clean up if it experiences an allocation failure or
other error after some results have already been allocated.
care is taken to use the d_type field from iterated dirents when
possible; stat is called only when there are literal path components
past the last proper-glob component, or when needed to disambiguate
symlinks for the purpose of GLOB_MARK.
one peculiarity with the new implementation is the manner in which the
error handling callback will be called. if attempting to match */B/C/D
where a directory A exists that is inaccessible, the error reported
will be a stat error for A/B/C/D rather than (previous and wrong
implementation) an opendir error for A, or (likely on other
implementations) a stat error for A/B. such behavior does not seem to
be non-conforming, but if it turns out to be undesirable for any
reason, backtracking could be done on error to report the first
component producing it.
also, redundant slashes are no longer normalized, but preserved as
they appear in the pattern; this is probably more correct, and falls
out naturally from the algorithm used. since trailing slashes (which
force all matches to be directories) are preserved as well, the
behavior of GLOB_MARK has been adjusted not to append an additional
slash to results that already end in slash.
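
the per-level strategy, as a comment-level skeleton (heavily simplified; not the actual function):

    #include <stddef.h>

    static int do_glob_level(char *buf, size_t pos, char *pat, int flags)
    {
            /* 1. consume the maximal literal (or escaped-literal)
             *    path prefix from pat, appending it to buf */
            /* 2. if the next component is a proper glob pattern,
             *    opendir the prefix and recurse into each entry that
             *    fnmatch accepts, preferring d_type over stat */
            /* 3. otherwise no directory read is needed; stat only for
             *    a trailing literal suffix, or to disambiguate
             *    symlinks for GLOB_MARK */
            return 0;
    }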
commit 6ba5517a460c6c438f64d69464fdfc3269a4c91a modified
__tls_get_addr to offset the address by +DTP_OFFSET (0x8000 on
powerpc, mips, etc.) and adjusted the result of DTPREL relocations by
-DTP_OFFSET to compensate, but missed changing the argument setup for
calls to __tls_get_addr from dlsym.
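
the corrected call has roughly this shape (a sketch; the names and the 0x8000 value are illustrative, as DTP_OFFSET is arch-specific):

    typedef unsigned long tls_mod_off_t;
    void *__tls_get_addr(tls_mod_off_t *);
    #define DTP_OFFSET 0x8000  /* arch-specific */

    /* hypothetical rendering of dlsym's TLS-symbol case */
    static void *tls_sym_addr(tls_mod_off_t mod, tls_mod_off_t off)
    {
            /* the -DTP_OFFSET bias here must match the +DTP_OFFSET
             * that __tls_get_addr now applies */
            return __tls_get_addr((tls_mod_off_t []){ mod, off - DTP_OFFSET });
    }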
as explained in commit 6ba5517a460c6c438f64d69464fdfc3269a4c91a, some
archs use an offset (typically -0x8000) with their DTPOFF relocations,
which __tls_get_addr needs to invert. on affected archs, which lack
direct support for large immediates, this can cost multiple extra
instructions in the hot path. instead, incorporate the DTP_OFFSET into
the DTV entries. this means they are no longer valid pointers, so
store them as an array of uintptr_t rather than void *; this also
makes it easier to access slot 0 as a valid slot count.
commit e75b16cf93ebbc1ce758d3ea6b2923e8b2457c68 left behind cruft in
two places, __reset_tls and __tls_get_new, from back when it was
possible to have uninitialized gap slots indicated by a null pointer
in the DTV. since the concept of null pointer is no longer meaningful
with an offset applied, remove this cruft.
presently there are no archs with both TLSDESC and nonzero DTP_OFFSET,
but the dynamic TLSDESC relocation code is also updated to apply an
inverted offset to its offset field, so that the offset DTV would not
impose a runtime cost in TLSDESC resolver functions.
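
the hot path then reduces to something like this sketch:

    #include <stdint.h>

    typedef uintptr_t tls_mod_off_t;
    void *tls_get_new(tls_mod_off_t *);  /* allocation fallback */

    void *tls_get_addr(tls_mod_off_t *v, uintptr_t *dtv)
    {
            /* dtv[0] is the slot count; entries are uintptr_t already
             * biased by DTP_OFFSET, so no immediate adjustment here */
            if (v[0] <= dtv[0]) return (void *)(dtv[v[0]] + v[1]);
            return tls_get_new(v);
    }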
len was already passed as an argument, so don't use strcat, and use
memcpy instead of strcpy.
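
one common rendering of the pattern:

    #include <string.h>

    static void append_known(char *dst, const char *src, size_t len)
    {
            /* len is already known, so no re-scanning via
             * strcat/strcpy; len+1 carries the terminator along */
            memcpy(dst, src, len+1);
    }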
Rounding modes are not bit flags, but arbitrary non-negative integers.
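
concretely:

    #include <fenv.h>

    int is_toward_zero(void)
    {
            /* wrong: return fegetround() & FE_TOWARDZERO; */
            return fegetround() == FE_TOWARDZERO;  /* compare, don't mask */
    }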
when invoking the assembler, arm gcc does not always pass the right
flags to enable use of vfp instruction mnemonics. for C code it
produces, it emits the .fpu directive, but this does not help when
building asm source files, which tlsdesc needs to be built as. to fix,
use an explicit .fpu directive here.
commit 0beb9dfbecad38af9759b1e83eeb007e28b70abb introduced this
regression. it has not appeared in any release.
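
i.e., near the top of the asm source, something like:

    .fpu vfp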
the specification for freeaddrinfo allows it to be used to free
"arbitrary sublists" of the list returned by getaddrinfo. it's not
clearly stated how such sublists come into existence, but the
interpretation seems to be that the application can edit the ai_next
pointers to cut off a portion of the list and then free it.
actual freeing of individual list slots is contrary to the design of
our getaddrinfo implementation, which has no failure paths after
making a single allocation, so that light callers can avoid linking
realloc/free. freeing individual slots is also incompatible with
sharing the string for ai_canonname, which the current implementation
does despite no requirement that it be present except on the first
result. so, rather than actually freeing individual slots, provide a
way to find the start of the allocated array, and reference-count it,
freeing the memory all at once after the last slot has been freed.
since the language in the spec is "arbitrary sublists", no provision
for handling other constructs like multiple lists glued together,
circular links, etc. is made. presumably passing such a construct to
freeaddrinfo produces undefined behavior.
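
the resulting layout and free logic look roughly like this (close to, but not guaranteed identical to, the real code):

    #include <stddef.h>
    #include <stdlib.h>
    #include <netdb.h>

    struct aibuf {
            struct addrinfo ai;
            /* ... address storage ... */
            volatile int lock[1];
            short slot, ref;   /* index into the array; live refs */
    };

    void freeaddrinfo_sketch(struct addrinfo *p)
    {
            size_t cnt;
            for (cnt=1; p->ai_next; cnt++, p=p->ai_next);
            struct aibuf *b = (void *)((char *)p - offsetof(struct aibuf, ai));
            b -= b->slot;                    /* start of the allocation */
            /* lock(b->lock); */
            if (!(b->ref -= cnt)) free(b);   /* last sublist freed */
            /* else unlock(b->lock); */
    }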
the indirect function call is a significant portion of the code path
for the dynamic case, and most users are probably building for ISA
levels where it can be omitted.
we could drop at least one register save/restore (lr) with this
change, and possibly another (ip) with some clever shuffling, but it's
not clear whether there's a way to do it that's not more expensive, or
whether avoiding the save/restore would have any practical effect, so
in the interest of avoiding complexity it's omitted for now.
unlike other asm where the baseline ISA is used, these functions are
hot paths and use ISA-level specializations.
call-clobbered vfp registers are saved before calling __tls_get_new,
since there is no guarantee it won't use them. while setjmp/longjmp
have to use hwcap to decide whether the fpu is in use, since
application code could be using vfp registers even if libc was
compiled as pure softfloat, __tls_get_new is part of libc and can be
assumed not to have access to vfp registers if tlsdesc.S does not.
thus it suffices just to check the predefined preprocessor macros. the
check for __ARM_PCS_VFP is redundant; !__SOFTFP__ must always be true
if the target ISA level includes fpu instructions/registers.
use the GNU C may_alias attribute if available, and fall back to naive
byte-by-byte loops if __GNUC__ is not defined.
this patch has been written to minimize changes so that history
remains reviewable; it does not attempt to bring the affected code
into a more consistent or elegant form.
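
the pattern in question:

    #include <stdint.h>

    #ifdef __GNUC__
    typedef uint32_t __attribute__((__may_alias__)) w32;

    uint32_t load_word(const void *p)
    {
            return *(const w32 *)p;   /* word access, aliasing-safe */
    }
    #else
    uint32_t load_word(const void *p)
    {
            const unsigned char *b = p;   /* naive byte loop fallback */
            return b[0] | (uint32_t)b[1]<<8 | (uint32_t)b[2]<<16
                    | (uint32_t)b[3]<<24;
    }
    #endif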
the comparison must take place in the address space model as an
integer type, since comparing pointers that are not pointing into the
same array is undefined.
the subsequent d<s comparison however is valid, because it's only
reached in the case where the source and dest overlap, in which case
they are necessarily pointing to parts of the same array.
to make the comparison, use an unsigned range check for dist(s,d)>=n,
algebraically !(-n<s-d<n). subtracting n yields !(-2*n<s-d-n<0), which
mapped into unsigned modular arithmetic is !(-2*n<s-d-n) or rather
-2*n>=s-d-n.
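
in code, the check and the subsequent valid pointer comparison:

    #include <stdint.h>
    #include <string.h>

    void *memmove_sketch(void *dest, const void *src, size_t n)
    {
            char *d = dest;
            const char *s = src;
            /* dist(s,d) >= n, entirely in unsigned modular
             * arithmetic: s-d-n <= -2*n */
            if ((uintptr_t)s - (uintptr_t)d - n <= -2*n)
                    return memcpy(d, s, n);
            /* overlap: d<s is now valid, both point into one array */
            if (d < s) for (; n; n--) *d++ = *s++;
            else while (n) n--, d[n] = s[n];
            return dest;
    }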
Rewrote the AVL tree implementation:
- It is now non-recursive with fixed stack usage (large enough for
worst case tree height). twalk and tdestroy are still recursive as
that's smaller/simpler.
- Moved unrelated interfaces into separate translation units.
- The node structure is changed to use indexed children instead of
left/right pointers, this simplifies the balancing logic.
- Using void * pointers instead of struct node * in various places,
because this better fits the api (node address is passed in a void**
argument, so it is tempting to incorrectly cast it to struct node **).
- As a further performance improvement the rebalancing now stops
when it is not needed (subtree height is unchanged). Otherwise
the behaviour should be the same as before (checked over generated
random inputs that the resulting tree shape is equivalent).
- Removed the old copyright notice (including prng related one: it
should be licensed under the same terms as the rest of the project).
.text size of pic tsearch + tfind + tdelete + twalk:
      x86_64  i386  aarch64   arm  mips  powerpc  ppc64le   sh4  m68k  s390x
old      941   899     1220  1068  1852     1400     1600  1008  1008   1488
new      857   881     1040   976  1564     1192     1360   736   820   1408
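
the indexed-children shape at the heart of the rewrite (a sketch):

    struct node {
            const void *key;
            void *a[2];   /* a[0] left, a[1] right */
            int h;        /* height, for AVL balancing */
    };

    /* a comparison sign becomes an index, collapsing the mirrored
     * left/right cases of the balancing logic into one */
    static struct node *child(struct node *n, int cmp)
    {
            return n->a[cmp > 0];
    }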
These should have been added in commit
df6d9450ea19fd71e52cf5cdb4c85beb73066394, which added target-specific
PTRACE_ macros, but somehow got missed.
despite not being documented to do so in the standard or Linux
documentation, attempts to udp connect to 127.0.0.1 or ::1 generate
EADDRNOTAVAIL when the loopback device is not configured and there is
no default route for IPv6. this caused getaddrinfo with AI_ADDRCONFIG
to fail with EAI_SYSTEM and EADDRNOTAVAIL on some no-IPv6
configurations, rather than the intended behavior of detecting IPv6 as
unsupported and producing IPv4-only results.
previously, only EAFNOSUPPORT was treated as unavailability of the
address family being probed. instead, treat all errors related to
inability to get an address or route as conclusive that the family
being probed is unsupported, and only fail with EAI_SYSTEM on other
errors.
further improvements may be desirable, such as reporting EAI_AGAIN
instead of EAI_SYSTEM for errors which are expected to be transient,
but this patch should suffice to fix the serious regression.
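
a sketch of the classification around the probe connect (the exact errno list shown is illustrative):

    #include <errno.h>
    #include <netdb.h>

    static int classify_probe_error(void)
    {
            switch (errno) {
            case EADDRNOTAVAIL:
            case EAFNOSUPPORT:
            case ENETDOWN:
            case ENETUNREACH:
            case EHOSTUNREACH:
                    return 0;           /* family conclusively unsupported */
            default:
                    return EAI_SYSTEM;  /* a real error */
            }
    }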
the clang internal assembler does not accept assembler options passed
via the usual -Wa mechanism, but it does accept -mimplicit-it directly
as an option to the compiler driver.
this facilitates building software that assumes a large default stack
size without any patching to call pthread_setattr_default_np or
pthread_attr_setstacksize at each thread creation site, using just
LDFLAGS.
normally the PT_GNU_STACK header is used only to reflect whether
executable stack is desired, but with GNU ld at least, passing
-Wl,-z,stack-size=N will set a size on the program header. with this
patch, that size will be incorporated into the default stack size
(subject to increase-only rule and DEFAULT_STACK_MAX limit).
both static and dynamic linking honor the program header. for dynamic
linking, all libraries loaded at program start, including preloaded
ones, are considered. dlopened libraries are not considered, for
several reasons. extra logic would be needed to defer processing until
the load of the new library is committed, synchronization would be
needed since other threads may be running concurrently, and the
effectiveness would be limited since the larger size would not apply to
threads that already existed at the time of dlopen. programs that will
dlopen code expecting a large stack need to declare the requirement
themselves, or pthread_setattr_default_np can be used.
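
with GNU ld, something like the following then raises the default for every thread the program creates, using just LDFLAGS:

    cc -o prog prog.c -Wl,-z,stack-size=0x100000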
stack size default is increased from 80k to 128k. this coincides with
Linux's hard-coded default stack for the main thread (128k is
initially committed; growth beyond that up to ulimit is contingent on
additional allocation succeeding) and GNU ld's default PT_GNU_STACK
size for FDPIC, at least on sh.
guard size default is increased from 4k to 8k to reduce the risk of
guard page jumping on overflow, since use of just over 4k of stack is
common (PATH_MAX buffers, etc.).
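
as macros, the new defaults read (hypothetical rendering of the pthread_impl.h values):

    #define DEFAULT_STACK_SIZE 131072   /* 128k, up from 80k */
    #define DEFAULT_GUARD_SIZE 8192     /* 8k, up from 4k */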
limit to 8MB/1MB, respectively. since the defaults cannot be reduced
once increased, excessively large settings would lead to an
unrecoverably broken state. this change is in preparation to allow
defaults to be increased via program headers at the linker level.
creation of threads that really need larger sizes needs to be done
with an explicit attribute.
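
with matching caps (hypothetical rendering):

    #define DEFAULT_STACK_MAX (8<<20)   /* 8MB */
    #define DEFAULT_GUARD_MAX (1<<20)   /* 1MB */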
these are now declared in pthread_impl.h.
access to defaults should be protected against concurrent changes.
per POSIX, deletion of a key for which some threads still have values
stored is permitted, and newly created keys must initially hold the
null value in all threads. these properties were not met by our
implementation; if a key was deleted with values left and a new key
was created in the same slot, the old values were still visible.
moreover, due to lack of any synchronization in pthread_key_delete,
there was a TOCTOU race whereby a concurrent pthread_exit could
attempt to call a null destructor pointer for the newly orphaned
value.
this commit introduces a solution based on __synccall, stopping the
world to zero out the values for deleted keys, but only does so lazily
when all key slots have been exhausted. pthread_key_delete is split
off into a separate translation unit so that static-linked programs
which only create keys but never delete them will not pull in the
__synccall machinery.
a global rwlock is added to synchronize creation and deletion of keys
with dtor execution. since the dtor execution loop now has to release
and retake the lock around its call to each dtor, checks are made not
to call the nodtor dummy function for keys which lack a dtor.
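
the locking structure described above, as a sketch (names are illustrative; the real code differs):

    #include <pthread.h>

    static pthread_rwlock_t key_lock = PTHREAD_RWLOCK_INITIALIZER;

    int key_delete_sketch(pthread_key_t k)
    {
            /* the write lock excludes concurrently running dtors,
             * closing the race with pthread_exit */
            pthread_rwlock_wrlock(&key_lock);
            /* mark the slot unused; stale values are only zeroed
             * later, via __synccall, when key creation finds no free
             * slot left */
            pthread_rwlock_unlock(&key_lock);
            return 0;
    }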