The underflow exception is not raised correctly in some corner cases
(see the previous fma commit); comments with examples were added for
fmaf, fmal and the non-x86 fma.
In fmaf, store the result before returning so it has the correct
precision when FLT_EVAL_METHOD!=0.
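A minimal sketch of the store-before-return idea (assumed shape, not
the exact code of the commit): assigning to a volatile float forces the
value to be rounded to float precision before it is returned, even when
FLT_EVAL_METHOD!=0 keeps expressions in wider x87 registers.

  float fmaf_sketch(float x, float y, float z)
  {
          /* (double)x*y is exact; the volatile store rounds once, to float */
          volatile float result = (double)x*y + z;
          /* note: not a complete fmaf, the double-rounding corner cases
             are exactly what the real code still has to fix up */
          return result;
  }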
|
1) in downward rounding fma(1,1,-1) should be -0 but it was 0 with
gcc; the code was correct, but gcc does not support FENV_ACCESS ON,
so it used common subexpression elimination where it shouldn't have.
now a volatile memory access is used as a barrier after fesetround.
2) in directed rounding modes there is no double rounding issue, so
the complicated adjustments done for the nearest rounding mode are
not needed. the only exception to this rule is raising the underflow
flag: assume "small" is an exactly representable subnormal value in
double precision and "verysmall" is a much smaller value so that
(long double)(small + verysmall) == small
then
(double)(small + verysmall)
raises underflow because the result is an inexact subnormal, but
(double)(long double)(small + verysmall)
does not, because small is not a subnormal in long double precision
and it is exact in double precision.
now this problem is fixed by checking inexact using fenv when the
result is subnormal
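A hedged sketch of the fenv-based check described above (variable
names are illustrative, not the commit's code); strictly this kind of
flag testing needs #pragma STDC FENV_ACCESS ON, which gcc does not
implement, hence the volatile barriers mentioned in 1).

  #include <fenv.h>
  #include <float.h>
  #include <math.h>

  /* r is the final double result; if it is subnormal (or zero) and the
     computation was inexact, raise underflow explicitly */
  if (fabs(r) < DBL_MIN && fetestexcept(FE_INEXACT))
          feraiseexcept(FE_UNDERFLOW);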
|
* use unsigned arithmetic
* use unsigned to store the arg reduction quotient (so n&3 is understood)
* remove the z=0.0 variables, use literal 0
* raise underflow and inexact exceptions properly when x is small
* fix spurious underflow in tanl
|
* use unsigned arithmetic on the representation
* store the arg reduction quotient in unsigned (so n%2 works like n&1)
* use a different convention to pass the arg reduction bit to __tan
  (this argument used to be 1 for even and -1 for odd reduction,
  which meant obscure bit hacks; the new n&1 is cleaner, see the
  sketch after this list)
* raise the inexact and underflow flags correctly for small x
  (tanl(x) may still raise spurious underflow for small but normal x)
  (this exception raising code increases code size a bit; similar
  fixes are needed in many other places, and it may be worth
  investigating at some point whether the inexact and underflow flags
  are worth raising correctly, as this is not strictly required by
  the standard)
* the tanf manual reduction optimization is kept for now
* the tanl code path is cleaned up to follow similar logic to tan and tanf
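A sketch of the new convention (assumed shape; only __tan is named
above, the __rem_pio2-style reduction helper is an assumption):

  double tan(double x)
  {
          double y[2];
          unsigned n;

          /* ... small |x|, inf and nan handled before this point ... */
          n = __rem_pio2(x, y);          /* quotient of the pi/2 reduction */
          return __tan(y[0], y[1], n&1); /* odd quotient: tan(x) = -cot(y) */
  }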
|
When FLT_EVAL_METHOD!=0 (only i386 with x87 fp) the excess precision
of an expression must be removed in an assignment.
(gcc needs -fexcess-precision=standard or -std=c99 for this)
This is done by extra load/store instructions, which adds code bloat
when a lot of temporaries are used, and it makes the result less
precise in many cases.
Using double_t and float_t avoids these issues on i386 and it makes
no difference on other archs.
For now only a few functions are modified where the excess precision
is clearly beneficial (mostly polynomial evaluations with temporaries).
object size differences on i386, gcc-4.8:
                old    new
  __cosdf.o     123     95
  __cos.o       199    169
  __sindf.o     131     95
  __sin.o       225    203
  __tandf.o     207    151
  __tan.o       605    499
  erff.o       1470   1416
  erf.o        1703   1649
  j0f.o        1779   1745
  j0.o         2308   2274
  j1f.o        1602   1568
  j1.o         2286   2252
  tgamma.o     1431   1424
  math/*.o    64164  63635
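A small sketch of the idea (the function and its coefficients are made
up for illustration): on i386/x87, double_t is the wide evaluation
type, so the temporaries can stay in registers without narrowing
stores; on other archs double_t is plain double and the code is
unchanged.

  #include <math.h>   /* float_t, double_t */

  static double poly_example(double x)
  {
          double_t z = x*x;                      /* no store needed on i386 */
          double_t r = 0.5 + z*(0.25 + z*0.125); /* made-up coefficients */
          return x + x*z*r;  /* the return value is converted to double */
  }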
|
the common part of erf and erfc was put in a separate function, which
saved some space, and the new code uses unsigned arithmetic
erfcf had a bug: for some inputs in [7.95,8] the result had more than
60ulp error: in expf(-z*z - 0.5625f) the argument must be exact, but
not enough low bits of z were zeroed;
-SET_FLOAT_WORD(z, ix&0xfffff000);
+SET_FLOAT_WORD(z, ix&0xffffe000);
fixed the issue
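For context, a hedged sketch of the surrounding code shape (a
fragment; R and S stand for the rational approximation terms, and
GET/SET_FLOAT_WORD are the usual fdlibm-style word-access macros):

  GET_FLOAT_WORD(ix, x);
  /* keep ~11 significant bits of x so that z*z is exact in single precision */
  SET_FLOAT_WORD(z, ix & 0xffffe000);
  /* -x*x == -z*z + (z-x)*(z+x), so the discarded low bits of x move into
     the second exp factor instead of perturbing the exact argument */
  r = expf(-z*z - 0.5625f) * expf((z-x)*(z+x) + R/S);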
|
both the jn and yn functions had integer overflow issues for large
and small n.
to handle these issues nm1 (== |n|-1) is used instead of n and -n in
the code, and some loops are changed to make sure the iteration
counter does not overflow.
(another solution could be to use a larger integer type or even
double, but that has more size and runtime cost; on x87, loading
int64_t or even uint32_t into an fpu register is more than two times
slower than loading int32_t, and using double for n slows down the
iteration logic)
yn(-1,0) now returns inf.
posix2008 specifies that on overflow and at +-0 all y0, y1, yn
functions return -inf; this is not consistent with the mathematical
limit when n is a negative odd integer in yn (eg. when x->0,
yn(-1,x)->inf, but historically yn(-1,0) seems to have been special
cased to return -inf)
some threshold values in jnf and ynf that seem to have been
incorrectly copy-pasted from the double version were fixed
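A minimal sketch of the nm1 idea (a fragment, not the commit's exact
code): |n| overflows for n == INT_MIN, but |n|-1 always fits in an int.

  int nm1;  /* |n| - 1, always representable */

  if (n == 0)
          return j0(x);     /* (or y0(x) in yn) */
  if (n < 0)
          nm1 = -(n + 1);   /* == |n| - 1, no overflow even for INT_MIN */
  else
          nm1 = n - 1;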
|
a common code path in j1 and y1 was factored out so the resulting
object code is a bit smaller
unsigned int arithmetic is used for bit manipulation
j1(-inf) now returns 0 instead of -0
an incorrect threshold in the common code of j1f and y1f was fixed
(this caused spurious overflow and underflow exceptions)
the else branch in the pone and pzero functions is fixed
(so code analyzers don't warn about uninitialized values)
|
a common code path in j0 and y0 was factored out so the resulting
object code is smaller
unsigned int arithmetic is used for bit manipulation
the logic of j0 got a bit simpler (the x < 1 case used to be handled
separately with slightly higher precision than now, but there are
large errors in other domains anyway, so that branch has been removed)
some threshold values were adjusted in j0f and y0f
|
previously 0x1p-1000 and 0x1p1000 were used for raising the inexact
exception as x+tiny (when x is big) or x+huge (when x is small)
the rationale for the new float consts is that they are still extreme
enough (0x1p-120 + 1 raises inexact even on ld128, which has 113
mantissa bits) and float consts may be smaller or easier to load on
some platforms
(on i386 this reduced the object file size by 4 bytes in some cases)
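A sketch of the idiom (the literal below is the 0x1p-120 value from
the parenthetical written as a float constant; the exact spelling used
by the code is an assumption): adding a term with a far-away exponent
makes the sum inexact, and the volatile store keeps the compiler from
optimizing the unused result away.

  /* |x| is large here: the tiny addend is rounded away, so FE_INEXACT
     is raised as a side effect */
  volatile float junk = x + 0x1p-120f;
  (void)junk;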
|
this is not a full rewrite, just fixes to the special case logic:
+-0 and non-integer x < INT_MIN inputs incorrectly raised the invalid
exception, and for +-0 the return value was wrong,
so the integer test and the odd/even test for negative inputs were
changed, and a useless overflow test was removed
|
comments are kept in the double version of the function
compared to fdlibm/freebsd we partition the domain into one more part
and select different threshold points:
now the [log(5/3)/2,log(3)/2] and [log(3)/2,inf] domains should have
<1.5ulp error
(so only the last bit may be wrong, assuming good exp and expm1)
(note that log(3)/2 and log(5/3)/2 are the points where tanh changes
resolution: tanh(log(3)/2)=0.5, tanh(log(5/3)/2)=0.25)
for some x < log(5/3)/2 (~=0.2554) the error can be >1.5ulp but it
should be <2ulp
(the freebsd code had some >2ulp errors in [0.255,1])
even with the extra logic the new code produces smaller object files
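For reference, the quoted resolution points can be checked directly
from tanh(t) = (e^(2t) - 1)/(e^(2t) + 1):

  t = log(3)/2:   e^(2t) = 3,   tanh(t) = (3-1)/(3+1) = 1/2
  t = log(5/3)/2: e^(2t) = 5/3, tanh(t) = (2/3)/(8/3) = 1/4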
|
comments are kept in the double version of the function
|
changed the algorithm: large input is not special cased
(when exp(-x) is small compared to exp(x))
and the threshold values are reevaluated
(the fdlibm code had a log(2)/2 cutoff for which I could not find a
justification; log(2) seems to be a better threshold and this was
verified empirically)
the new code is simpler, makes smaller binaries and should be faster
for common cases
the old comments were removed as they are no longer true for the new
algorithm, and the fdlibm copyright was dropped as well because there
is no common code or idea with the original anymore except for
trivial ones.
|
with the naive exp2l(x*log2e) the last 12 bits of the result were
incorrect for x with large absolute value
now hi + lo = x*log2e is calculated to 128 bits of precision and then
expl(x) = exp2l(hi) + exp2l(hi) * f2xm1(lo)
this gives <1.5ulp measured error everywhere in nearest rounding mode
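For reference, the identity behind the formula (f2xm1 is the x87
instruction computing 2^lo - 1, accurate for |lo| <= 1):

  2^(hi+lo) = 2^hi * 2^lo = 2^hi + 2^hi * (2^lo - 1)
            = exp2l(hi) + exp2l(hi) * f2xm1(lo)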
|
uses the lanczos approximation method with the usual tweaks.
the same parameters were selected as in boost and python.
(this avoids some extra work and special casing found in boost, so
the precision is not as good: measured error is <5ulp for positive x
and <10ulp for negative)
an alternative lgamma_r implementation is also given in the same
file which is simpler and smaller than the current one, but less
precise, so it's ifdefed out for now.
|
do fabs by hand, don't check for nan and inf separately
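A sketch of the "fabs by hand" idiom (illustrative; the actual code
may go through the usual word-access macros instead of a union):

  #include <stdint.h>

  static double fabs_byhand(double x)
  {
          union { double f; uint64_t i; } u = { x };
          u.i &= (uint64_t)-1 / 2;   /* clear the sign bit */
          return u.f;
  }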
|
__invtrigl is not needed when acosl, asinl, atanl have asm
implementations
|
modifications:
* avoid unsigned->signed conversions
* removed various volatile hacks
* use FORCE_EVAL when evaluating only for side effects (see the
  sketch after this list)
* factor out the R() rational approximation instead of inlining it
  manually
* __invtrigl.h now only provides __invtrigl_R, __pio2_hi and __pio2_lo
* use 2*pio2_hi, 2*pio2_lo instead of pi_hi, pi_lo
otherwise the logic is not changed; the long double versions will
need a revisit when a general long double cleanup happens
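A minimal sketch of what a FORCE_EVAL-style macro can look like (the
real definition in the internal headers may differ): the volatile
assignment forces the expression to be evaluated, so its
exception-raising side effects are not optimized away.

  #define FORCE_EVAL(x) do {                        \
          if (sizeof(x) == sizeof(float)) {         \
                  volatile float __x = (x);         \
                  (void)__x;                        \
          } else if (sizeof(x) == sizeof(double)) { \
                  volatile double __x = (x);        \
                  (void)__x;                        \
          } else {                                  \
                  volatile long double __x = (x);   \
                  (void)__x;                        \
          }                                         \
  } while (0)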
|
modifications:
* avoid unsigned->signed integer conversion
* do not handle special cases when they work correctly anyway
* stricter threshold values (0x1p26 instead of 0x1p28 etc)
* smaller code, cleaner branching logic
* same precision as the old code:
  acosh(x) has up to 2ulp error in [1,1.125]
  asinh(x) has up to 1.6ulp error in [0.125,0.5], [-0.5,-0.125]
  atanh(x) has up to 1.7ulp error in [0.125,0.5], [-0.5,-0.125]
|
use the 'f' suffix when a float constant is not representable
|
raise overflow and underflow when necessary, fix various comments.
|
similar to the exp.c cleanup: use scalbnf, don't return excess
precision, drop some optimizations.
exp.c was changed to be more consistent with the expf.c code.
|
* the old code relied on sign extension on right shift
* the exp2l ld64 wrapper was wrong
* use scalbn instead of bit hacks
|
overflow and underflow were raised incorrectly when the result was
not stored.
an optimization for the 0.5*ln2 < |x| < 1.5*ln2 domain was removed.
did various cleanups around the static constants and made the
comments consistent with the code.
|
keeping only commonly used data in invtrigl
|
this also fixes overflow/underflow raising and excess
precision issues (as those are handled well in scalbn)
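A sketch of the scalbn-based scaling this relies on (the wrapper is
made up for illustration): scalbn does the 2^n scaling with correct
rounding and raises overflow/underflow itself, unlike adding n
directly into the exponent bits.

  #include <math.h>

  static double scale_by_pow2(double y, int n)
  {
          return scalbn(y, n);   /* y * 2^n */
  }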
|
the old code was correct only if the result was stored (without the
excess precision) or musl was compiled with -ffloat-store.
now we use STRICT_ASSIGN to work around the issue.
(see note 160 in c11 section 6.8.6.4)
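A minimal sketch of what a STRICT_ASSIGN-style macro can look like
(the real definition may special-case FLT_EVAL_METHOD == 0 to skip
the useless store):

  #define STRICT_ASSIGN(type, lval, rval) do { \
          volatile type __v = (rval);          \
          (lval) = __v;                        \
  } while (0)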
|
old code was correct only if the result was stored (without the
excess precision) or musl was compiled with -ffloat-store.
(see note 160 in n1570.pdf section 6.8.6.4)
|
the old code (return x+x;) returns the correct value and raises the
correct flags only if the result is stored as double (or float)
|
exp(inf), exp(-inf), exp(nan) used to raise wrong flags
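(for reference, the values these special cases must produce, with no
spurious exceptions: exp(+inf) = +inf, exp(-inf) = +0, exp(NaN) = NaN)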
this function never existed historically; since the float/double
functions it's based on are nonstandard and deprecated, there's really
no justification for its existence except that glibc has it. it can be
added back if there's ever really a need...
|
The long double adjustment was wrong:
The usual check is
  (mant_bits & 0x7ff) == 0x400
before doing a mant_bits++ or mant_bits-- adjustment, since this is
the only case when rounding an inexact ld80 into double can go wrong
(only in nearest rounding mode).
After such a check the ++ and -- are ok (the mantissa will end in
0x401 or 0x3ff).
fma is a bit different (we need to add 3 numbers with correct
rounding: hi_xy + lo_xy + z, so we should survive two roundings at
different places without precision loss).
The adjustment in fma only checks for zero low bits
  (mant_bits & 0x3ff) == 0
and this way the adjusted value is correct when rounded to double or
*less* precision.
(this is an important piece in the fma puzzle)
Unfortunately in this case the -- is not a correct adjustment,
because mant_bits might underflow, so further checks are needed, and
this was the source of the bug.
|
this is silly, but it makes apps that read binary junk and interpret
it as ld80 "safer", and it gets gnulib to stop replacing printf...