| |
based on patch by Isaac Dunham, moved to its own file to avoid
increasing bss on statically linked programs that use the standard
getgrent function but not this nonstandard function, and vice versa.
|
| |
this definitely has the potential to be a bikeshed topic, so some
justification is in order. most of the changes made fit into one of
the following categories:
1. alignment with text in posix, xsh 2.3
2. eliminating overly-specific text for shared error codes
3. making the message match more closely with the macro name
4. removing extraneous words
in particular, the EAGAIN/EWOULDBLOCK text is updated to match the
description of EAGAIN (which covers both uses) rather than saying the
operation would block, and ENOTSUP/EOPNOTSUPP is updated not to
mention sockets.
the distinction between ENFILE/EMFILE has also been clarified; ENFILE
is aligned with the posix text, and EMFILE, which lacks concise posix
text matching any historic message, is updated to emphasize that the
exhausted resource is not open files/open file descriptions, but
rather the integer 'address space' of file descriptors.
some messages may be further tweaked based on feedback.
|
|
|
|
|
| |
this avoids duplicating the fragile logic for executing an external
program without fork.
|
| |
read should never return anything but 0 or sizeof ec here, but if it
does, any other return value is treated as "success". the caller then
gets back the pid and is responsible for waiting on the child when it
immediately exits.
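for illustration, a rough sketch of the parent-side logic being
described (a hand-written sketch; the helper, fd, and variable names
are illustrative, not the actual source):

    #include <unistd.h>

    /* hypothetical helper: collect the exec outcome from the read end
     * of the close-on-exec pipe */
    static int grab_exec_result(int readfd, pid_t pid, pid_t *pidptr)
    {
        int ec = 0;
        /* the child writes its errno here only if exec fails; on
         * success exec closes the write end and read returns 0 */
        if (read(readfd, &ec, sizeof ec) != sizeof ec)
            ec = 0;          /* anything but sizeof ec counts as success */
        close(readfd);
        if (!ec) *pidptr = pid;  /* caller gets the pid and must wait on it */
        return ec;               /* nonzero: error code from the failed exec */
    }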
|
| |
the proposed change was described in detail previously on the mailing
list. in short, vfork is unsafe because:
1. the compiler could make optimizations that cause the child to
clobber the parent's local vars.
2. strace is buggy: it allows the vforking parent to run before the
child execs.
the new design uses a close-on-exec pipe instead of vfork semantics to
synchronize the parent and child so that the parent does not return
before the child has finished using its arguments (and now, also its
stack). this also allows reporting exec failures to the caller instead
of giving the caller a child that mysteriously exits with status 127
on exec error.
basic testing has been performed on both the success and failure code
paths. further testing should be done.
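a minimal sketch of the close-on-exec pipe technique described above
(heavily simplified: no signal-mask handling, attributes, or file
actions, and plain fork stands in for the carefully controlled child
setup; function names are illustrative, not the actual implementation):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/wait.h>

    static int spawn_sketch(pid_t *res, const char *path,
                            char *const argv[], char *const envp[])
    {
        int p[2], ec = 0;
        pid_t pid;

        if (pipe2(p, O_CLOEXEC)) return errno;
        pid = fork();
        if (pid < 0) {
            close(p[0]); close(p[1]);
            return errno;
        }
        if (!pid) {
            close(p[0]);
            execve(path, argv, envp);
            ec = errno;                  /* exec failed: report it */
            write(p[1], &ec, sizeof ec);
            _exit(127);
        }
        close(p[1]);
        /* blocks until the child execs (write end closed, read gives 0)
         * or reports an exec error, so the parent cannot return before
         * the child is done with its arguments */
        if (read(p[0], &ec, sizeof ec) != sizeof ec) ec = 0;
        close(p[0]);
        if (ec) waitpid(pid, &(int){0}, 0);  /* reap the failed child */
        else if (res) *res = pid;
        return ec;
    }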
|
| |
also, don't waste code/time on F_GETFL since pipes always have blank
flags initially (at least on old kernels, which are all this fallback
code matters for).
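a hedged sketch of the kind of fallback meant here (assuming a pipe2()
emulation for old kernels lacking the syscall; names are illustrative):

    #include <fcntl.h>
    #include <unistd.h>

    /* a fresh pipe starts with no flags set, so the requested flags can
     * be installed directly with F_SETFD/F_SETFL; no need to read the
     * current flags first with F_GETFD/F_GETFL */
    static int pipe2_fallback(int fd[2], int flg)
    {
        if (pipe(fd)) return -1;
        if (flg & O_CLOEXEC) {
            fcntl(fd[0], F_SETFD, FD_CLOEXEC);
            fcntl(fd[1], F_SETFD, FD_CLOEXEC);
        }
        if (flg & O_NONBLOCK) {
            fcntl(fd[0], F_SETFL, O_NONBLOCK);
            fcntl(fd[1], F_SETFL, O_NONBLOCK);
        }
        return 0;
    }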
|
| |
this change shaves ~1k off libc.so bss size, and also avoids hard
errors in the case where the static buffer was not large enough to
hold the result.
this whole framework is really ugly and should probably be replaced or
at least heavily overhauled when some changes/factorizations are made
to getaddrinfo internals in the future.
|
| |
|
| |
|
|
|
|
| |
they're supposed to return an error code rather than using errno.
|
| |
this bug seems to have been introduced when the map_library signature
was changed to return the mapping in a temp dso structure instead of
into separate variables.
|
|
|
|
| |
this bug seems to have been around a long time.
|
|
|
|
|
| |
this bug was introduced when support for application-provided stacks
was originally added.
|
| |
the main goal of these changes is to address the case where an
application provides a stack of size N, but TLS has a size M that is a
significant portion of N (or even larger than N), thus giving the
application less stack space than it expected, or no stack at all!
the new strategy pthread_create now uses is to only put TLS on the
application-provided stack if TLS is smaller than 1/8 of the stack
size or 2k, whichever is smaller. this ensures that the application
always has "close enough" to what it requested, and the threshold is
chosen heuristically to make sure "sane" amounts of TLS still end up
in the application-provided stack.
if TLS does not meet the above criteria, pthread_create uses mmap to
obtain space for TLS, but still uses the application-provided stack
for the actual call frames. this avoids wasting memory and supports
ugly hacks like garbage collection based on the assumption that the
implementation will use the provided stack range.
in order for the above heuristics to ever succeed, the amount of TLS
space wasted on POSIX TSD (pthread_key_create based) needed to be
reduced. otherwise, these changes would preclude any use of
pthread_create without mmap, which would have serious memory usage and
performance costs for applications trying to create huge numbers of
threads using pre-allocated stack space. the new value of
PTHREAD_KEYS_MAX is the minimum allowed by POSIX, 128. this should
still be plenty more than real-world applications need, especially now
that C11/gcc-style TLS is supported in musl and most apps and
libraries choose to use it instead of POSIX TSD when available.
at the same time, PTHREAD_STACK_MIN has been decreased. it was
originally set to PAGE_SIZE back when there was no support for TLS or
application-provided stacks, and requests smaller than a whole page
did not make sense. now, there are two good reasons to support
requests smaller than a page: (1) applications could provide
pre-allocated stacks smaller than a page, and (2) with smaller stack
sizes, stack+TLS+TSD can all fit in one page, making it possible for
applications which need huge numbers of threads with minimal stack
needs to allocate exactly one page per thread. the new value of
PTHREAD_STACK_MIN, 2k, is aligned with the minimum size for
sigaltstack.
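a rough sketch of the placement heuristic described above (field,
constant, and function names are illustrative, not musl's internals):

    #include <stddef.h>
    #include <sys/mman.h>

    /* illustrative only: decide where TLS/TSD goes for an
     * application-provided stack, per the 1/8-or-2k rule above */
    static int place_tls(void *stack, size_t stack_size, size_t tls_size,
                         void **tls_area, void **stack_top)
    {
        if (tls_size <= stack_size/8 && tls_size <= 2048) {
            /* small TLS: carve it off the top of the provided stack */
            *stack_top = (char *)stack + stack_size - tls_size;
            *tls_area = *stack_top;
            return 0;
        }
        /* TLS would eat too much of the requested stack: mmap it
         * separately, but keep using the provided stack for call frames */
        *tls_area = mmap(0, tls_size, PROT_READ|PROT_WRITE,
                         MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
        if (*tls_area == MAP_FAILED) return -1;
        *stack_top = (char *)stack + stack_size;
        return 0;
    }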
|
|
|
|
| |
this way they'll go into .rodata, decreasing memory pressure.
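for example (a generic illustration, not the actual tables changed by
this commit), adding const moves a lookup table out of writable .data,
which costs a dirty private page per process, into .rodata, which can
be shared read-only:

    /* const: emitted into .rodata instead of .data */
    static const char dow_names[7][4] = {
        "Sun","Mon","Tue","Wed","Thu","Fri","Sat"
    };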
|
| |
this should generate faster and smaller code, especially with inline
syscalls. the conditional with cnt is ugly, but thankfully cnt is
always a constant anyway so it gets evaluated at compile time. it may
be preferable to make separate __wake and __wakeall macros without a
count argument.
priv flag is not used yet; private futex support still needs to be
done at some point in the future.
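a hedged sketch of what such a macro could look like (spelled with the
generic syscall() wrapper; the real code uses musl's internal
inline-syscall machinery and its own futex definitions):

    #define _GNU_SOURCE
    #include <limits.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>

    /* illustrative __wake-style macro: cnt < 0 means "wake all"; since
     * cnt is a compile-time constant at the call sites, the conditional
     * folds away; the priv argument is accepted but unused for now */
    #define WAKE(addr, cnt, priv) \
        syscall(SYS_futex, (addr), FUTEX_WAKE, \
                (cnt) < 0 ? INT_MAX : (cnt))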
|
| |
it's not clear to me at the moment whether the code that was removed
(and which is now being re-added) is needed, but it's far from being a
no-op, and i don't want to risk breaking regex in this release.
|
|
|
|
| |
report/patch by Hiltjo Posthuma <hiltjo@codemadness.org>
|
| |
based on patch by Pierre Carrier <pierre@gcarrier.fr> that just added
the flag constant, but with minimal additional code so that it
actually works as documented. this is a nonstandard option but some
major software (reportedly, Firefox) uses it and it was easy to add
anyway.
|
| |
struct dso was not defined in this case, and it's not needed in the
code that was using it anyway; void pointers work just as well.
|
| |
some structs and functions had references to the params feature of
tre, which is no longer used by the code
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
the common part of erf and erfc was put in a separate function, which
saved some space, and the new code uses unsigned arithmetic.
erfcf had a bug: for some inputs in [7.95,8] the result had more than
60ulp error: in expf(-z*z - 0.5625f) the argument must be exact, but
not enough low bits of z were zeroed;
-SET_FLOAT_WORD(z, ix&0xfffff000);
+SET_FLOAT_WORD(z, ix&0xffffe000);
fixed the issue
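for illustration, a hedged sketch of the splitting trick involved
(hand-written; memcpy stands in for the GET/SET_FLOAT_WORD macros):

    #include <math.h>
    #include <stdint.h>
    #include <string.h>

    /* z is x with its low mantissa bits cleared so that the argument
     * -z*z - 0.5625f rounds exactly in single precision; the exact
     * term is then recovered via
     *   expf(-x*x - 0.5625f) == expf(-z*z - 0.5625f)*expf((z-x)*(z+x))
     * clearing only 12 low bits (the old 0xfffff000 mask) was not
     * enough to keep the argument exact, hence the 0xffffe000 mask */
    static float split_exp_term(float x)
    {
        uint32_t ix;
        float z;
        memcpy(&ix, &x, sizeof ix);      /* GET_FLOAT_WORD */
        ix &= 0xffffe000;
        memcpy(&z, &ix, sizeof z);       /* SET_FLOAT_WORD */
        return expf(-z*z - 0.5625f) * expf((z - x)*(z + x));
    }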
|
| |
| |
| |
| | |
pos_start local variable is not used in tre_tnfa_run_backtrack
|
| |
| |
| |
| |
| |
| | |
the original FreeSec code (incorrectly) accessed keybuf as both
uint32* and uint8*; this was fixed with a union, but it seems the
uint32* access is no longer needed, so the code can be simplified
|
| |
| |
| |
| |
| |
| | |
the internal sha2 hash sum functions had an incorrect array size in
the prototype for the message digest argument; fixed by using a
pointer so it is not misleading
|
| | |
|
|/
| |
this is wasteful and useless from a standpoint of sane programs, but
it is required by the standard, and the current requirements were
upheld with the closure of Austin Group issue #639:
http://austingroupbugs.net/view.php?id=639
|
| |
for _Noreturn functions, gcc generates code that trashes the stack
frame, making it impossible to inspect the causes of an assert error
in gdb.
abort() is not affected (i have not yet investigated why).
|
| |
both the jn and yn functions had integer overflow issues for large and
small n.
to handle these issues, nm1 (== |n|-1) is used instead of n and -n in
the code, and some loops are changed to make sure the iteration
counter does not overflow.
(another solution could be to use a larger integer type or even
double, but that has more size and runtime cost: on x87, loading
int64_t or even uint32_t into an fpu register is more than two times
slower than loading int32_t, and using double for n slows down the
iteration logic)
yn(-1,0) now returns inf.
posix2008 specifies that on overflow and at +-0 all y0,y1,yn functions
return -inf; this is not consistent with the math when n is a negative
odd integer in yn (eg. when x->0, yn(-1,x)->inf, but historically
yn(-1,0) seems to have been special cased to return -inf).
some threshold values in jnf and ynf that seem to have been
incorrectly copy-pasted from the double version were fixed.
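for illustration, a hedged sketch of the nm1 idea (not the actual jn
code; the point is that |n| and -n overflow for n == INT_MIN while
|n|-1 is still representable):

    /* illustrative only: callers handle n == 0 separately, and nm1 is
     * the loop bound used in place of n or -n */
    static int nm1_of(int n)
    {
        return n < 0 ? -(n + 1) : n - 1;  /* safe even for n == INT_MIN */
    }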
|
| |
a common code path in j1 and y1 was factored out so the resulting
object code is a bit smaller.
unsigned int arithmetic is used for bit manipulation.
j1(-inf) now returns 0 instead of -0.
an incorrect threshold in the common code of j1f and y1f got fixed
(this caused spurious overflow and underflow exceptions).
the else branches in the pone and pzero functions are fixed (so code
analyzers don't warn about uninitialized values).
|
| |
a common code path in j0 and y0 was factored out so the resulting
object code is smaller.
unsigned int arithmetic is used for bit manipulation.
the logic of j0 got a bit simplified (the x < 1 case was handled
separately with slightly higher precision than now, but there are
large errors in other domains anyway, so that branch has been
removed).
some threshold values were adjusted in j0f and y0f.
|
| |
libc is the macro, __libc is the internal symbol, but under some
configurations on old/broken compilers, the symbol might not actually
exist and the libc macro might instead use __libc_loc() to obtain
access to the object.
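a rough sketch of the pattern being described (simplified; the real
declarations live in musl's internal headers and differ in detail, and
BROKEN_VISIBILITY is a made-up configuration switch):

    #include <stddef.h>

    struct libc_sketch { int threads_minus_1; size_t tls_size; /* ... */ };

    #ifdef BROKEN_VISIBILITY
    /* the object cannot be referenced directly: go through an accessor */
    extern struct libc_sketch *__libc_loc(void);
    #define libc (*__libc_loc())
    #else
    /* normal case: the macro expands to the internal symbol itself */
    extern struct libc_sketch __libc;
    #define libc __libc
    #endif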
|
| |
the previous logic was assuming the kernel would give EINVAL when
passed an invalid address, but instead with MAP_FIXED it was giving
EPERM, as it considered this an attempt to map over kernel memory.
instead of trying to get the kernel to do the right thing, the new
code just handles the error in userspace.
I have also cleaned up the code to use a single mask to check for
invalid low bits and unsupported high bits, so it's simpler and more
clearly correct. the old code was actually wrong for sizeof(long)
smaller than sizeof(off_t) but not equal to 4; now it should be
correct for all possibilities.
for 64-bit systems, the low-bits test is new and extraneous (the
kernel should catch the error anyway when the mmap2 syscall is not
used), but it's cheap anyway. if this is an issue, the OFF_MASK
definition could be tweaked to omit the low bits when SYS_mmap2 is not
defined.
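a hedged sketch of the single-mask check described above (the macro
spelling follows the description but is not necessarily the literal
OFF_MASK definition):

    #include <errno.h>
    #include <sys/mman.h>

    #define MMAP2_UNIT 4096ULL  /* units the mmap2 offset argument counts in */

    /* one mask catches both invalid low bits (offset not a multiple of
     * the unit) and unsupported high bits (offset/unit does not fit in
     * the syscall argument); on 64-bit targets the high part collapses
     * to zero and only the low-bits test remains */
    #define OFF_MASK ((-0x2000ULL << (8*sizeof(long)-1)) | (MMAP2_UNIT-1))

    static void *mmap_checked(void *start, size_t len, int prot,
                              int flags, int fd, off_t off)
    {
        if (off & OFF_MASK) {   /* reject in userspace instead of relying
                                   on EINVAL/EPERM from the kernel */
            errno = EINVAL;
            return MAP_FAILED;
        }
        return mmap(start, len, prot, flags, fd, off);
    }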
|
|\ |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
previously 0x1p-1000 and 0x1p1000 were used for raising the inexact
exception like x+tiny (when x is big) or x+huge (when x is small).
the rationale is that these float consts are large enough
(0x1p-120 + 1 raises inexact even on ld128, which has 113 mantissa
bits), and float consts may be smaller or easier to load on some
platforms (on i386 this reduced the object file size by 4 bytes in
some cases).
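for illustration, a hedged sketch of the idiom (assuming replacement
constants of the form 0x1p-120f and 0x1p120f; not the literal source):

    /* illustrative: force the inexact flag without changing the result */
    static const float tiny = 0x1p-120f, huge = 0x1p120f;

    static double with_inexact_big(double x)    /* |x| large */
    {
        return x + tiny;      /* value is still x; the rounding raises inexact */
    }

    static double with_inexact_small(double x)  /* |x| tiny but nonzero */
    {
        if (huge + x > 1.0f)  /* always true; evaluating it raises inexact */
            return x;
        return x;
    }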
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
this is not a full rewrite, just fixes to the special case logic:
+-0 and non-integer x<INT_MIN inputs incorrectly raised the invalid
exception, and for +-0 the return value was wrong.
so the integer test and the odd/even test for negative inputs are
changed, and a useless overflow test was removed.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
comments are kept in the double version of the function.
compared to fdlibm/freebsd we partition the domain into one more part
and select different threshold points:
now the [log(5/3)/2,log(3)/2] and [log(3)/2,inf] domains should have
<1.5ulp error (so only the last bit may be wrong, assuming good exp
and expm1).
(note that log(3)/2 and log(5/3)/2 are the points where tanh changes
resolution: tanh(log(3)/2)=0.5, tanh(log(5/3)/2)=0.25)
for some x < log(5/3)/2 (~=0.2554) the error can be >1.5ulp but it
should be <2ulp (the freebsd code had some >2ulp errors in [0.255,1]).
even with the extra logic the new code produces smaller object files.
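a hedged sketch of the partitioning (thresholds written as math rather
than the actual bit-pattern comparisons, and with a simplified huge-x
branch; not the literal source):

    #include <math.h>

    static double tanh_sketch(double x)
    {
        double ax = fabs(x), t;

        if (ax > 0.5*log(3.0)) {             /* tanh(ax) > 0.5 */
            if (ax > 20.0) {
                t = 1.0;                     /* result rounds to +-1 */
            } else {
                t = expm1(2*ax);
                t = 1 - 2/(t + 2);
            }
        } else if (ax > 0.5*log(5.0/3.0)) {  /* 0.25 < tanh(ax) <= 0.5 */
            t = expm1(2*ax);
            t = t/(t + 2);
        } else {                             /* tanh(ax) <= 0.25 */
            t = expm1(-2*ax);
            t = -t/(t + 2);
        }
        return signbit(x) ? -t : t;          /* restore the sign */
    }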
|
| |
| |
| |
| | |
comments are kept in the double version of the function
|
|/
| |
changed the algorithm: large input is not special cased (when exp(-x)
is small compared to exp(x)), and the threshold values are reevaluated
(fdlibm code had a log(2)/2 cutoff for which i could not find
justification; log(2) seems to be a better threshold, and this was
verified empirically).
the new code is simpler, makes smaller binaries and should be faster
for common cases.
the old comments were removed as they are no longer true for the new
algorithm, and the fdlibm copyright was dropped as well because there
is no common code or idea with the original anymore except for trivial
ones.
|
| |
|
| |
|
| |
with the naive exp2l(x*log2e) the last 12 bits of the result were
incorrect for x with large absolute value.
now hi + lo = x*log2e is calculated to 128 bits of precision and then
expl(x) = exp2l(hi) + exp2l(hi) * f2xm1(lo)
this gives <1.5ulp measured error everywhere in nearest rounding mode
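for illustration, a portable sketch of the identity being used (the
real i386 code computes hi+lo in extended precision and uses the x87
f2xm1 instruction for 2^lo - 1; expm1l stands in for it here):

    #include <math.h>

    /* exp(x) = 2^(hi+lo) = exp2l(hi) + exp2l(hi)*(2^lo - 1) */
    static long double expl_sketch(long double x)
    {
        static const long double log2e =
            1.442695040888963407359924681001892137L;
        static const long double ln2 =
            0.693147180559945309417232121458176568L;
        long double y  = x * log2e;     /* done in higher precision for real */
        long double hi = nearbyintl(y); /* integral part: exp2l(hi) is exact */
        long double lo = y - hi;        /* |lo| <= 0.5 */
        return exp2l(hi) + exp2l(hi) * expm1l(lo * ln2);
    }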
|