| Commit message | Author | Age | Files | Lines |
|
the fscale instruction is slow everywhere, probably because it
involves a costly and unnecessary integer truncation operation that
ends up being a no-op in common usages. instead, construct a floating
point scale value with integer arithmetic and simply multiply by it,
when possible.
for float and double, this is always possible by going to the
next-larger type. we use some cheap but effective saturating
arithmetic tricks to make sure even very large-magnitude exponents
fit. for long double, if the scaling exponent is too large to fit in
the exponent of a long double value, we simply fall back to the
expensive fscale method.
on atom cpu, these changes speed up scalbn by over 30%. (min rdtsc
timing dropped from 110 cycles to 70 cycles.)
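
a rough C sketch of the float case (illustrative only; the real change
is i386 asm, and the saturation bound and names here are assumptions):
clamp the exponent, build 2^n directly in a double's exponent field,
and let one multiply plus the final narrowing conversion do the
rounding and flag raising.

    #include <stdint.h>
    #include <string.h>

    float scalbnf_sketch(float x, int n)
    {
        /* saturate n: beyond this range the result already overflows or
           underflows a float, so clamping does not change the answer */
        if (n > 300) n = 300;
        else if (n < -300) n = -300;

        /* construct the scale 2^n by writing its biased exponent into a double */
        uint64_t bits = (uint64_t)(0x3ff + n) << 52;
        double scale;
        memcpy(&scale, &bits, sizeof scale);

        /* the double product is exact; converting back to float rounds once
           and raises overflow/underflow/inexact as appropriate */
        return (float)((double)x * scale);
    }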
|
this is a lot more efficient and also what is generally wanted.
perhaps the bit shuffling could be more efficient...
|
zero, one, two, half are replaced by const literals.
The policy was to use the f suffix for float consts (1.0f),
but no suffix for long double consts (these consts
can be represented exactly as double).
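
A tiny illustration of the policy (hypothetical function names):

    /* float code: f-suffixed literals */
    static float poly_f(float x)
    {
        return 1.0f - 0.5f*x*x;   /* was: one - half*x*x with named consts */
    }

    /* long double code: plain literals, exactly representable as double */
    static long double poly_l(long double x)
    {
        return 1.0 - 0.5*x*x;
    }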
|
The underflow exception is only raised when the result is
tiny and inexact, but fmod is always exact. x87 has a denormal
operand exception, but that's nonstandard. And the superfluous
*1.0 will be optimized away by any compiler that does not honor
signaling nans.
|
Some code assumed ldexp(x, 1) is faster than 2.0*x,
but ldexp is a wrapper around scalbn which uses
multiplications inside, so this optimization is
wrong.
This commit also fixes fmal, which accidentally
used ldexp instead of ldexpl, losing precision.
There are various additional changes from the
work-in-progress const cleanups.
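
The kind of change involved, sketched in C (not the literal diff):

    #include <math.h>

    static double twice_slow(double x) { return ldexp(x, 1); } /* calls scalbn */
    static double twice_fast(double x) { return 2.0*x; }       /* one multiply */

    /* the fmal fix: call the long double variant so the scaling does not
       round through double */
    static long double scale_up(long double y, int k) { return ldexpl(y, k); }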
|
exponents (base 2) near 16383 were broken due to (1) wrong cutoff, and
(2) inability to fit the necessary range of scalings into a long
double value.
as a solution, we fall back to using frndint/fscale for insanely large
exponents, and also have to special-case infinities here to avoid
inf-inf generating nan.
thankfully the costly code never runs in normal usage cases.
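
a minimal C illustration of the inf-inf hazard (assuming the fallback
splits the argument into integer and fractional parts for fscale; the
function name is made up):

    #include <math.h>

    long double frac_for_fscale(long double x)
    {
        if (isinf(x))
            return 0;            /* placeholder: real code handles inf up front */
        return x - rintl(x);     /* without the check, inf - inf would be NaN */
    }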
|
Some long double consts were stored in two doubles as a workaround
for x86_64 and i386 with the following comment:
/* Long double constants are slow on these arches, and broken on i386. */
This is most likely an old gcc bug related to the default x87 fpu
precision setting (it's double instead of double extended on BSD).
|
this could perhaps use some additional testing for corner cases, but
it seems to be correct.
|
up to 30% faster exp2 by avoiding the slow frndint and fscale instructions.
expm1 also takes a much more direct path for small arguments (the
expected usage case).
|
The old scalbn.c was wrong and slow; the new one is just slow.
(scalbn(0x1p+1023,-2097) should give 0x1p-1074, but the old code gave 0)
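
One way to get the cited case right, as a C sketch (not musl's actual
file): apply the scaling in bounded steps so intermediate factors stay
representable, then finish with a constructed power of two.

    #include <stdint.h>

    double scalbn_sketch(double x, int n)
    {
        union { double f; uint64_t i; } u;

        if (n > 1023) {
            x *= 0x1p1023; n -= 1023;
            if (n > 1023) { x *= 0x1p1023; n -= 1023; if (n > 1023) n = 1023; }
        } else if (n < -1022) {
            /* 2^-969 per step: small enough to stay normal, and two steps
               are enough to bring any double into range */
            x *= 0x1p-1022 * 0x1p53; n += 1022 - 53;
            if (n < -1022) {
                x *= 0x1p-1022 * 0x1p53; n += 1022 - 53;
                if (n < -1022) n = -1022;
            }
        }
        u.i = (uint64_t)(0x3ff + n) << 52;   /* u.f == 2^n, now in range */
        return x * u.f;                      /* scalbn_sketch(0x1p+1023, -2097)
                                                == 0x1p-1074 */
    }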
|
correctly rounded double precision fma using extended
precision arithmetic for ld80 systems (x87)
|
unlike some implementations, these functions perform the equivalent of
gcc's -ffloat-store on the result before returning. this is necessary
to raise underflow/overflow/inexact exceptions, perform the correct
rounding with denormals, etc.
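
a C-level analogue of that final step (assumption: the asm achieves it
by storing the 80-bit register value to memory in the destination
format):

    /* narrowing the result through a real store is what raises
       underflow/overflow/inexact and rounds denormals correctly */
    static double force_store(double x)
    {
        volatile double y = x;   /* the compiler must perform the store */
        return y;
    }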
|
unlike trig functions, these are easy to do in asm because they do not
involve (arbitrary-precision) argument reduction. fpatan automatically
takes care of domain issues, and in asin and acos, fsqrt takes care of
them for us.
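
the identities behind the asm, written in C for illustration (fpatan
is essentially atan2, and the sqrt of a negative operand supplies the
NaN for out-of-domain inputs):

    #include <math.h>

    double asin_sketch(double x)
    {
        return atan2(x, sqrt((1-x)*(1+x)));
    }

    double acos_sketch(double x)
    {
        return atan2(sqrt((1-x)*(1+x)), x);
    }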
|
infinities were getting converted into nans. the new code simply tests
for infinity and replaces it with a large magnitude value of the same
sign.
also, the fcomi instruction is apparently not part of the i387
instruction set, so avoid using it.
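
the substitution described above, as a C sketch (the actual fix is in
the i386 asm):

    #include <math.h>
    #include <float.h>

    static double squash_inf(double x)
    {
        /* a huge finite value of the same sign overflows or underflows to
           the right answer downstream instead of producing inf - inf = NaN */
        return isinf(x) ? copysign(DBL_MAX, x) : x;
    }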
|
these are functions that have direct fpu approaches to implementation
without problematic exception or rounding issues. x86_64 lacks
float/double versions because i'm unfamiliar with the necessary sse
code for performing these operations.
|
Simple wrappers around round are enough because a
spurious inexact exception is allowed.
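
What such wrappers can look like (sketch; names assumed):

    #include <math.h>

    long lround_sketch(double x)
    {
        /* round() may raise inexact for non-integral x; as noted above,
           that spurious exception is allowed here */
        return (long)round(x);
    }

    long long llround_sketch(double x)
    {
        return (long long)round(x);
    }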
|
A faster workaround for spurious inexact exceptions
when the result cannot be represented. The old code
could actually be wrong, because gcc reordered the
integer conversion and the exception check.
|
this is necessary to support archs where fenv is incomplete or
unavailable (presently arm). fma, fmal, and the lrint family should
work perfectly fine with this change; fmaf is slightly broken with
respect to rounding as it depends on non-default rounding modes to do
its work.
|
otherwise, the standard C lgamma function will clobber a symbol in the
namespace reserved for the application.
|
standard functions cannot depend on nonstandard symbols
|