| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Replace the loop-over-scalar placeholder routines with optimised
implementations from Arm Optimized Routines (AOR).
Also add some headers containing utilities for aarch64 libmvec
routines, and update libm-test-ulps.
Data tables for new routines are used via a pointer with a
barrier on it, in order to prevent overly aggressive constant
inlining in GCC. This allows a single adrp, combined with offset
loads, to be used for every constant in the table.
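A minimal sketch of that idiom, assuming an AOR-style helper macro (the
struct, field names and coefficients here are illustrative only):

  /* Illustrative sketch: the empty asm makes the pointer opaque to the
     optimiser, so every constant is loaded relative to one base address
     (a single adrp plus offset loads) instead of being rematerialised.  */
  struct sv_data { double poly[8]; double ln2_hi, ln2_lo; };
  static const struct sv_data data = { .poly = { 0 } }; /* coefficients elided */

  #define ptr_barrier(p)                       \
    ({ __typeof (p) __p = (p);                 \
       __asm__ ("" : "+r" (__p));              \
       __p; })

  /* In a routine:  const struct sv_data *d = ptr_barrier (&data);  */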
Special-case handlers are marked NOINLINE in order to confine the
save/restore overhead of switching from the vector to the normal calling
standard. This way we only incur the extra memory access in the
exceptional cases. NOINLINE definitions have been moved to
math_private.h in order to reduce duplication.
AOR exposes a config option, WANT_SIMD_EXCEPT, to enable
selective masking (and later fixing up) of invalid lanes, in
order to trigger fp exceptions correctly (AdvSIMD only). This is
tested and maintained in AOR; however, it is configured off at
the source level here for performance reasons. We keep the
WANT_SIMD_EXCEPT blocks in routine sources to greatly simplify
the upstreaming process from AOR to glibc.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
|
|
|
|
|
|
|
| |
Finally remove all mpa-related files, headers, declarations, probes and
unused tables, and update the makefiles.
Reviewed-By: Paul Zimmermann <Paul.Zimmermann@inria.fr>
|
|
|
|
|
| |
The __sin32 and __cos32 functions were only used in the now-removed slow
paths of asin and acos.
|
|
|
|
|
|
|
|
|
| |
issignalingf is a very small function used in some areas where
better performance (and smaller code) might be helpful.
Create an inline implementation of issignalingf.
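A minimal self-contained sketch of such an inline, assuming the standard
IEEE 754-2008 single-precision NaN encoding (an illustration, not
necessarily the exact code added):

  #include <stdint.h>
  #include <string.h>

  /* Return nonzero iff X is a signaling NaN.  */
  static inline int
  issignalingf_sketch (float x)
  {
    uint32_t ix;
    memcpy (&ix, &x, sizeof ix);   /* reinterpret the float's bit pattern */
    ix ^= 0x00400000;              /* flip the quiet bit: now set for sNaN */
    /* Strictly greater, because an all-zero significand means infinity.  */
    return (ix & 0x7fffffff) > 0x7fc00000;
  }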
Reviewed-by: Joseph Myers <joseph@codesourcery.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The algorithm is exp(y * log(x)), where log(x) is computed with about
1.3*2^-68 relative error (1.5*2^-68 without fma), returning the result
in two doubles, and the exp part uses the same algorithm (and lookup
tables) as exp, but takes the input as two doubles and a sign (to handle
negative bases with odd integer exponent). The __exp1 internal symbol
is no longer necessary.
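One building block of the two-double evaluation is splitting the product
y * log(x) into a rounded part plus an exact correction; a self-contained
sketch of that step, assuming fma is available (the helper name is
illustrative, not glibc's):

  #include <math.h>

  /* (hi + lo) approximates log(x); return y*(hi + lo) as *rhi + *rlo,
     where *rlo captures the rounding error of the product y*hi.  */
  static void
  mul_log_by_y (double y, double hi, double lo, double *rhi, double *rlo)
  {
    *rhi = y * hi;
    *rlo = fma (y, hi, -*rhi) + y * lo;   /* exact error of y*hi, plus y*lo */
  }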
There is a separate code path when fma is not available, but the worst-case
error is about 0.54 ULP in both cases. The lookup table and constants for
log are 4168 bytes. The .rodata+.text size decreases by 37908 bytes on
aarch64. The non-nearest rounding error is less than 1 ULP.
Improvements on Cortex-A72 compared to current glibc master:
pow throughput: 2.40x in [0.01 11.1]x[0.01 11.1]
pow latency: 1.84x in [0.01 11.1]x[0.01 11.1]
Tested on
aarch64-linux-gnu (defined __FP_FAST_FMA, TOINT_INTRINSICS) and
arm-linux-gnueabihf (!defined __FP_FAST_FMA, !TOINT_INTRINSICS) and
x86_64-linux-gnu (!defined __FP_FAST_FMA, !TOINT_INTRINSICS) and
powerpc64le-linux-gnu (defined __FP_FAST_FMA, !TOINT_INTRINSICS) targets.
* NEWS: Mention pow improvements.
* math/Makefile (type-double-routines): Add e_pow_log_data.
* sysdeps/generic/math_private.h (__exp1): Remove.
* sysdeps/i386/fpu/e_pow_log_data.c: New file.
* sysdeps/ia64/fpu/e_pow_log_data.c: New file.
* sysdeps/ieee754/dbl-64/Makefile (CFLAGS-e_pow.c): Allow fma
contraction.
* sysdeps/ieee754/dbl-64/e_exp.c (__exp1): Remove.
(exp_inline): Remove.
(__ieee754_exp): Only single double input is handled.
* sysdeps/ieee754/dbl-64/e_pow.c: Rewrite.
* sysdeps/ieee754/dbl-64/e_pow_log_data.c: New file.
* sysdeps/ieee754/dbl-64/math_config.h (issignaling_inline): Define.
(__pow_log_data): Define.
* sysdeps/ieee754/dbl-64/upow.h: Remove.
* sysdeps/ieee754/dbl-64/upow.tbl: Remove.
* sysdeps/m68k/m680x0/fpu/e_pow_log_data.c: New file.
* sysdeps/x86_64/fpu/multiarch/Makefile (CFLAGS-e_pow-fma.c): Allow fma
contraction.
(CFLAGS-e_pow-fma4.c): Likewise.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Continuing the cleanup of math_private.h, with a view to it becoming
the header for the APIs defined therein and not also a header with
inline variants of math.h APIs, this patch moves inline definitions of
__isinff128 and fabsf128 to include/math.h, so that any users of
math.h in glibc automatically get the optimized functions rather than
quietly missing them if they do not also include math_private.h.
Tested for x86_64 and x86, and with build-many-glibcs.py with GCC 6.
There are changes to installed stripped libc.so on configurations with
distinct _Float128, because of __printf_fp_l code that now gets the
__isinff128 inline where previously it called the out-of-line
function because of the lack of a math_private.h include. It seems
appropriate that this code does get the inline (as it would
automatically with GCC 7 and later when the built-in function is used)
rather than being the only place in glibc that does not.
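For illustration, a rough sketch of the kind of inline involved
(simplified; the real code uses bit manipulation), assuming a target with
distinct _Float128 support:

  #include <math.h>

  /* Return 1 for +inf, -1 for -inf, 0 otherwise.  */
  static inline int
  isinff128_sketch (_Float128 x)
  {
    if (x == (_Float128) INFINITY)
      return 1;
    if (x == -(_Float128) INFINITY)
      return -1;
    return 0;
  }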
* sysdeps/generic/math_private.h
[__HAVE_DISTINCT_FLOAT128 && !__GNUC_PREREQ (7, 0)] (__isinff128):
Move this inline function ....
[__HAVE_DISTINCT_FLOAT128] (fabsf128): And this one ....
* include/math.h [!_ISOMAC]: To here....
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Continuing the clean-up related to the catch-all math_private.h
header, this patch stops math_private.h from including fenv_private.h.
Instead, fenv_private.h is included directly from those users of
math_private.h that also used interfaces from fenv_private.h. No
attempt is made to remove unused includes of math_private.h, but that
is a natural followup.
(However, since math_private.h sometimes defines optimized versions of
math.h interfaces or __* variants thereof, as well as defining its own
interfaces, I think it might make sense to get all those optimized
versions included from include/math.h, not requiring a separate header
at all, before eliminating unused math_private.h includes - that
avoids a file quietly becoming less-optimized if someone adds a call
to one of those interfaces without restoring a math_private.h include
to that file.)
There is still a pitfall that if code uses plain fe* and __fe*
interfaces, but only includes fenv.h and not fenv_private.h or (before
this patch) math_private.h, it will compile on platforms with
exceptions and rounding modes but not get the optimized versions (and
possibly not compile) on platforms without exception and rounding mode
support, making it easy to break the build for such platforms
accidentally.
I think it would be most natural to move the inlines / macros for fe*
and __fe* in the case of no exceptions and rounding modes into
include/fenv.h, so that all code including fenv.h with _ISOMAC not
defined automatically gets them. Then fenv_private.h would be purely
the header for the libc_fe*, SET_RESTORE_ROUND etc. internal
interfaces and the risk of breaking the build on other platforms than
the one you tested on because of a missing fenv_private.h include
would be much reduced (and there would be some unused fenv_private.h
includes to remove along with unused math_private.h includes).
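For reference, a rough usage sketch of the internal interfaces that would
remain in fenv_private.h under that arrangement (this only builds inside
the glibc tree, since the header is internal; the function itself is
illustrative):

  #include <fenv.h>
  #include <fenv_private.h>   /* glibc-internal header */

  /* SET_RESTORE_ROUND switches the rounding mode for the rest of the
     enclosing block and restores the caller's mode when the block exits.  */
  static double
  add_to_nearest (double x, double y)
  {
    SET_RESTORE_ROUND (FE_TONEAREST);
    return x + y;
  }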
Tested for x86_64 and x86, and tested with build-many-glibcs.py that
installed stripped shared libraries are unchanged by this patch.
* sysdeps/generic/math_private.h: Do not include <fenv_private.h>.
* math/fromfp.h: Include <fenv_private.h>.
* math/math-narrow.h: Likewise.
* math/s_cexp_template.c: Likewise.
* math/s_csin_template.c: Likewise.
* math/s_csinh_template.c: Likewise.
* math/s_ctan_template.c: Likewise.
* math/s_ctanh_template.c: Likewise.
* math/s_iseqsig_template.c: Likewise.
* math/w_acos_compat.c: Likewise.
* math/w_acosf_compat.c: Likewise.
* math/w_acosl_compat.c: Likewise.
* math/w_asin_compat.c: Likewise.
* math/w_asinf_compat.c: Likewise.
* math/w_asinl_compat.c: Likewise.
* math/w_ilogb_template.c: Likewise.
* math/w_j0_compat.c: Likewise.
* math/w_j0f_compat.c: Likewise.
* math/w_j0l_compat.c: Likewise.
* math/w_j1_compat.c: Likewise.
* math/w_j1f_compat.c: Likewise.
* math/w_j1l_compat.c: Likewise.
* math/w_jn_compat.c: Likewise.
* math/w_jnf_compat.c: Likewise.
* math/w_llogb_template.c: Likewise.
* math/w_log10_compat.c: Likewise.
* math/w_log10f_compat.c: Likewise.
* math/w_log10l_compat.c: Likewise.
* math/w_log2_compat.c: Likewise.
* math/w_log2f_compat.c: Likewise.
* math/w_log2l_compat.c: Likewise.
* math/w_log_compat.c: Likewise.
* math/w_logf_compat.c: Likewise.
* math/w_logl_compat.c: Likewise.
* sysdeps/aarch64/fpu/feholdexcpt.c: Likewise.
* sysdeps/aarch64/fpu/fesetround.c: Likewise.
* sysdeps/aarch64/fpu/fgetexcptflg.c: Likewise.
* sysdeps/aarch64/fpu/ftestexcept.c: Likewise.
* sysdeps/ieee754/dbl-64/e_atan2.c: Likewise.
* sysdeps/ieee754/dbl-64/e_exp.c: Likewise.
* sysdeps/ieee754/dbl-64/e_exp2.c: Likewise.
* sysdeps/ieee754/dbl-64/e_gamma_r.c: Likewise.
* sysdeps/ieee754/dbl-64/e_jn.c: Likewise.
* sysdeps/ieee754/dbl-64/e_pow.c: Likewise.
* sysdeps/ieee754/dbl-64/e_remainder.c: Likewise.
* sysdeps/ieee754/dbl-64/e_sqrt.c: Likewise.
* sysdeps/ieee754/dbl-64/gamma_product.c: Likewise.
* sysdeps/ieee754/dbl-64/lgamma_neg.c: Likewise.
* sysdeps/ieee754/dbl-64/s_atan.c: Likewise.
* sysdeps/ieee754/dbl-64/s_fma.c: Likewise.
* sysdeps/ieee754/dbl-64/s_fmaf.c: Likewise.
* sysdeps/ieee754/dbl-64/s_llrint.c: Likewise.
* sysdeps/ieee754/dbl-64/s_llround.c: Likewise.
* sysdeps/ieee754/dbl-64/s_lrint.c: Likewise.
* sysdeps/ieee754/dbl-64/s_lround.c: Likewise.
* sysdeps/ieee754/dbl-64/s_nearbyint.c: Likewise.
* sysdeps/ieee754/dbl-64/s_sin.c: Likewise.
* sysdeps/ieee754/dbl-64/s_sincos.c: Likewise.
* sysdeps/ieee754/dbl-64/s_tan.c: Likewise.
* sysdeps/ieee754/dbl-64/wordsize-64/s_lround.c: Likewise.
* sysdeps/ieee754/dbl-64/wordsize-64/s_nearbyint.c: Likewise.
* sysdeps/ieee754/dbl-64/x2y2m1.c: Likewise.
* sysdeps/ieee754/float128/float128_private.h: Likewise.
* sysdeps/ieee754/flt-32/e_gammaf_r.c: Likewise.
* sysdeps/ieee754/flt-32/e_j1f.c: Likewise.
* sysdeps/ieee754/flt-32/e_jnf.c: Likewise.
* sysdeps/ieee754/flt-32/lgamma_negf.c: Likewise.
* sysdeps/ieee754/flt-32/s_llrintf.c: Likewise.
* sysdeps/ieee754/flt-32/s_llroundf.c: Likewise.
* sysdeps/ieee754/flt-32/s_lrintf.c: Likewise.
* sysdeps/ieee754/flt-32/s_lroundf.c: Likewise.
* sysdeps/ieee754/flt-32/s_nearbyintf.c: Likewise.
* sysdeps/ieee754/k_standardl.c: Likewise.
* sysdeps/ieee754/ldbl-128/e_expl.c: Likewise.
* sysdeps/ieee754/ldbl-128/e_gammal_r.c: Likewise.
* sysdeps/ieee754/ldbl-128/e_j1l.c: Likewise.
* sysdeps/ieee754/ldbl-128/e_jnl.c: Likewise.
* sysdeps/ieee754/ldbl-128/gamma_productl.c: Likewise.
* sysdeps/ieee754/ldbl-128/lgamma_negl.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_fmal.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_llrintl.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_llroundl.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_lrintl.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_lroundl.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_nearbyintl.c: Likewise.
* sysdeps/ieee754/ldbl-128/x2y2m1l.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_expl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_j1l.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_jnl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/lgamma_negl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_fmal.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_llrintl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_llroundl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_lrintl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_lroundl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_rintl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/x2y2m1l.c: Likewise.
* sysdeps/ieee754/ldbl-96/e_gammal_r.c: Likewise.
* sysdeps/ieee754/ldbl-96/e_jnl.c: Likewise.
* sysdeps/ieee754/ldbl-96/gamma_productl.c: Likewise.
* sysdeps/ieee754/ldbl-96/lgamma_negl.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_fma.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_fmal.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_llrintl.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_llroundl.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_lrintl.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_lroundl.c: Likewise.
* sysdeps/ieee754/ldbl-96/x2y2m1l.c: Likewise.
* sysdeps/powerpc/fpu/e_sqrt.c: Likewise.
* sysdeps/powerpc/fpu/e_sqrtf.c: Likewise.
* sysdeps/riscv/rv64/rvd/s_ceil.c: Likewise.
* sysdeps/riscv/rv64/rvd/s_floor.c: Likewise.
* sysdeps/riscv/rv64/rvd/s_nearbyint.c: Likewise.
* sysdeps/riscv/rv64/rvd/s_round.c: Likewise.
* sysdeps/riscv/rv64/rvd/s_roundeven.c: Likewise.
* sysdeps/riscv/rv64/rvd/s_trunc.c: Likewise.
* sysdeps/riscv/rvd/s_finite.c: Likewise.
* sysdeps/riscv/rvd/s_fmax.c: Likewise.
* sysdeps/riscv/rvd/s_fmin.c: Likewise.
* sysdeps/riscv/rvd/s_fpclassify.c: Likewise.
* sysdeps/riscv/rvd/s_isinf.c: Likewise.
* sysdeps/riscv/rvd/s_isnan.c: Likewise.
* sysdeps/riscv/rvd/s_issignaling.c: Likewise.
* sysdeps/riscv/rvf/fegetround.c: Likewise.
* sysdeps/riscv/rvf/feholdexcpt.c: Likewise.
* sysdeps/riscv/rvf/fesetenv.c: Likewise.
* sysdeps/riscv/rvf/fesetround.c: Likewise.
* sysdeps/riscv/rvf/feupdateenv.c: Likewise.
* sysdeps/riscv/rvf/fgetexcptflg.c: Likewise.
* sysdeps/riscv/rvf/ftestexcept.c: Likewise.
* sysdeps/riscv/rvf/s_ceilf.c: Likewise.
* sysdeps/riscv/rvf/s_finitef.c: Likewise.
* sysdeps/riscv/rvf/s_floorf.c: Likewise.
* sysdeps/riscv/rvf/s_fmaxf.c: Likewise.
* sysdeps/riscv/rvf/s_fminf.c: Likewise.
* sysdeps/riscv/rvf/s_fpclassifyf.c: Likewise.
* sysdeps/riscv/rvf/s_isinff.c: Likewise.
* sysdeps/riscv/rvf/s_isnanf.c: Likewise.
* sysdeps/riscv/rvf/s_issignalingf.c: Likewise.
* sysdeps/riscv/rvf/s_nearbyintf.c: Likewise.
* sysdeps/riscv/rvf/s_roundevenf.c: Likewise.
* sysdeps/riscv/rvf/s_roundf.c: Likewise.
* sysdeps/riscv/rvf/s_truncf.c: Likewise.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
On some architectures, the parts of math_private.h relating to the
floating-point environment are in a separate file fenv_private.h
included from math_private.h. As this is purely an
architecture-specific convention used by several architectures,
however, all such architectures still need their own math_private.h,
even if it has nothing to do beyond #include <fenv_private.h>, and
x86_64 has the peculiarity of including the i386 file directly instead
of having a shared file in sysdeps/x86.
This patch makes the fenv_private.h name an architecture-independent
convention in glibc. The include of fenv_private.h from
math_private.h becomes architecture-independent (until callers are
updated to include fenv_private.h directly so the include from
math_private.h is no longer needed). Some architecture math_private.h
headers are removed if no longer needed, or renamed to fenv_private.h
if all they define belongs in that header; architecture fenv_private.h
headers now need to do #include_next <fenv_private.h>. The i386
fenv_private.h file moves to sysdeps/x86/fpu/ to reflect how it is
actually shared with x86_64. The generic math_private.h gets a new
include of <stdbool.h>, as needed for bool in some prototypes in that
header (previously that was indirectly included via include/fenv.h,
which now gets included too late in math_private.h, after those
prototypes).
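A sketch of the resulting convention for an architecture fenv_private.h
(the guard name is modelled on the ARM header in the ChangeLog below;
contents are illustrative):

  #ifndef ARM_FENV_PRIVATE_H
  #define ARM_FENV_PRIVATE_H 1

  /* ... architecture-specific optimized libc_fe* definitions here ... */

  #include_next <fenv_private.h>   /* chain to the generic header */

  #endif /* ARM_FENV_PRIVATE_H */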
Tested for x86_64 and x86, and tested with build-many-glibcs.py that
installed stripped shared libraries are unchanged by the patch.
* sysdeps/aarch64/fpu/fenv_private.h: New file. Based on ....
* sysdeps/aarch64/fpu/math_private.h: ... this file. All contents
moved to fenv_private.h except for ...
(TOINT_INTRINSICS): Kept in math_private.h.
(roundtoint): Likewise.
(converttoint): Likewise.
* sysdeps/arm/fenv_private.h: Change multiple-include guard to
[ARM_FENV_PRIVATE_H]. Include next <fenv_private.h>.
* sysdeps/arm/math_private.h: Remove.
* sysdeps/generic/fenv_private.h: New file. Contents moved from
....
* sysdeps/generic/math_private.h: ... this file. Include
<stdbool.h>. Do not include <fenv.h> or <get-rounding-mode.h>.
Include <fenv_private.h>. Remove functions and macros moved to
fenv_private.h.
* sysdeps/i386/fpu/math_private.h: Remove.
* sysdeps/mips/math_private.h: Move to ....
* sysdeps/mips/fpu/fenv_private.h: ... here. Change
multiple-include guard to [MIPS_FENV_PRIVATE_H]. Remove
[__mips_hard_float] conditional. Include next <fenv_private.h>.
* sysdeps/powerpc/fpu/fenv_private.h: Change multiple-include
guard to [POWERPC_FENV_PRIVATE_H]. Include next <fenv_private.h>.
* sysdeps/powerpc/fpu/math_private.h: Do not include
<fenv_private.h>.
* sysdeps/riscv/rvf/math_private.h: Move to ....
* sysdeps/riscv/rvf/fenv_private.h: ... here. Change
multiple-include guard to [RISCV_FENV_PRIVATE_H]. Include next
<fenv_private.h>.
* sysdeps/sparc/fpu/fenv_private.h: Change multiple-include guard
to [SPARC_FENV_PRIVATE_H]. Include next <fenv_private.h>.
* sysdeps/sparc/fpu/math_private.h: Remove.
* sysdeps/i386/fpu/fenv_private.h: Move to ....
* sysdeps/x86/fpu/fenv_private.h: ... here. Change
multiple-include guard to [X86_FENV_PRIVATE_H]. Include next
<fenv_private.h>.
* sysdeps/x86_64/fpu/math_private.h: Do not include
<sysdeps/i386/fpu/fenv_private.h>.
|
|
|
|
|
|
| |
Previously, only the SSE2 rounding mode was set, so the assembler
implementations using the 387 FPU were not following the expected
rounding mode.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch continues the math_private.h cleanup by stopping
math_private.h from including math-barriers.h and making the users of
the barrier macros include the latter header directly. No attempt is
made to remove any math_private.h includes that are now unused, except
in strtod_l.c where that is done to avoid line number changes in
assertions, so that installed stripped shared libraries can be
compared before and after the patch. (I think the floating-point
environment support in math_private.h should also move out - some
architectures already have fenv_private.h as an architecture-internal
header included from their math_private.h - and after moving that out
might be a better time to identify unused math_private.h includes.)
Tested for x86_64 and x86, and tested with build-many-glibcs.py that
installed stripped shared libraries are unchanged by the patch.
* sysdeps/generic/math_private.h: Do not include
<math-barriers.h>.
* stdlib/strtod_l.c: Include <math-barriers.h> instead of
<math_private.h>.
* math/fromfp.h: Include <math-barriers.h>.
* math/math-narrow.h: Likewise.
* math/s_nextafter.c: Likewise.
* math/s_nexttowardf.c: Likewise.
* sysdeps/aarch64/fpu/s_llrint.c: Likewise.
* sysdeps/aarch64/fpu/s_llrintf.c: Likewise.
* sysdeps/aarch64/fpu/s_lrint.c: Likewise.
* sysdeps/aarch64/fpu/s_lrintf.c: Likewise.
* sysdeps/i386/fpu/s_nextafterl.c: Likewise.
* sysdeps/i386/fpu/s_nexttoward.c: Likewise.
* sysdeps/i386/fpu/s_nexttowardf.c: Likewise.
* sysdeps/ieee754/dbl-64/e_atan2.c: Likewise.
* sysdeps/ieee754/dbl-64/e_atanh.c: Likewise.
* sysdeps/ieee754/dbl-64/e_exp.c: Likewise.
* sysdeps/ieee754/dbl-64/e_exp2.c: Likewise.
* sysdeps/ieee754/dbl-64/e_j0.c: Likewise.
* sysdeps/ieee754/dbl-64/e_sqrt.c: Likewise.
* sysdeps/ieee754/dbl-64/s_expm1.c: Likewise.
* sysdeps/ieee754/dbl-64/s_fma.c: Likewise.
* sysdeps/ieee754/dbl-64/s_fmaf.c: Likewise.
* sysdeps/ieee754/dbl-64/s_log1p.c: Likewise.
* sysdeps/ieee754/dbl-64/s_nearbyint.c: Likewise.
* sysdeps/ieee754/dbl-64/wordsize-64/s_nearbyint.c: Likewise.
* sysdeps/ieee754/flt-32/e_atanhf.c: Likewise.
* sysdeps/ieee754/flt-32/e_j0f.c: Likewise.
* sysdeps/ieee754/flt-32/s_expm1f.c: Likewise.
* sysdeps/ieee754/flt-32/s_log1pf.c: Likewise.
* sysdeps/ieee754/flt-32/s_nearbyintf.c: Likewise.
* sysdeps/ieee754/flt-32/s_nextafterf.c: Likewise.
* sysdeps/ieee754/k_standardl.c: Likewise.
* sysdeps/ieee754/ldbl-128/e_asinl.c: Likewise.
* sysdeps/ieee754/ldbl-128/e_expl.c: Likewise.
* sysdeps/ieee754/ldbl-128/e_powl.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_fmal.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_nearbyintl.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_nextafterl.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_nexttoward.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_nexttowardf.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_asinl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_fmal.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_nextafterl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_nexttoward.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_nexttowardf.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_rintl.c: Likewise.
* sysdeps/ieee754/ldbl-96/e_atanhl.c: Likewise.
* sysdeps/ieee754/ldbl-96/e_j0l.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_fma.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_fmal.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_nexttoward.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_nexttowardf.c: Likewise.
* sysdeps/ieee754/ldbl-opt/s_nexttowardfd.c: Likewise.
* sysdeps/m68k/m680x0/fpu/s_nextafterl.c: Likewise.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch continues cleaning up math_private.h by moving the
math_check_force_underflow set of macros to a separate header
math-underflow.h.
This header is included by the files that need it rather than from
math_private.h. Moving these macros to a separate file removes the
math_private.h uses of macros from float.h, so the inclusion of
float.h in math_private.h is also removed; files that were depending
on that inclusion are fixed to include float.h directly. The
inclusion of math-barriers.h from math_private.h will be removed in a
separate patch.
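A simplified, double-only sketch of what the math_check_force_underflow
family does (the real macros are type-generic; the evaluation barrier is
written out inline here):

  #include <float.h>
  #include <math.h>

  /* If X is in the subnormal range, perform and force-evaluate a tiny
     multiplication so the underflow exception is raised even when the
     compiler would otherwise optimize the computation away.  */
  #define check_force_underflow_sketch(x)                       \
    do                                                          \
      {                                                         \
        double x_ = (x);                                        \
        if (fabs (x_) < DBL_MIN)                                \
          {                                                     \
            double tiny_ = x_ * x_;                             \
            __asm__ __volatile__ ("" : : "m" (tiny_));          \
          }                                                     \
      }                                                         \
    while (0)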
Tested for x86_64 and x86. Also tested with build-many-glibcs.py that
installed stripped shared libraries are unchanged by this patch.
* math/math-underflow.h: New file.
* sysdeps/generic/math_private.h: Do not include <float.h>.
(fabs_tg): Remove macro. Moved to math-underflow.h.
(min_of_type_f): Likewise.
(min_of_type_): Likewise.
(min_of_type_l): Likewise.
(min_of_type_f128): Likewise.
(min_of_type): Likewise.
(math_check_force_underflow): Likewise.
(math_check_force_underflow_nonneg): Likewise.
(math_check_force_underflow_complex): Likewise.
* math/e_exp2_template.c: Include <math-underflow.h>.
* math/k_casinh_template.c: Likewise.
* math/s_catan_template.c: Likewise.
* math/s_catanh_template.c: Likewise.
* math/s_ccosh_template.c: Likewise.
* math/s_cexp_template.c: Likewise.
* math/s_clog10_template.c: Likewise.
* math/s_clog_template.c: Likewise.
* math/s_csin_template.c: Likewise.
* math/s_csinh_template.c: Likewise.
* math/s_csqrt_template.c: Likewise.
* math/s_ctan_template.c: Likewise.
* math/s_ctanh_template.c: Likewise.
* sysdeps/ieee754/dbl-64/e_asin.c: Likewise.
* sysdeps/ieee754/dbl-64/e_atanh.c: Likewise.
* sysdeps/ieee754/dbl-64/e_exp2.c: Likewise.
* sysdeps/ieee754/dbl-64/e_gamma_r.c: Likewise.
* sysdeps/ieee754/dbl-64/e_hypot.c: Likewise.
* sysdeps/ieee754/dbl-64/e_j1.c: Likewise.
* sysdeps/ieee754/dbl-64/e_jn.c: Likewise.
* sysdeps/ieee754/dbl-64/e_pow.c: Likewise.
* sysdeps/ieee754/dbl-64/e_sinh.c: Likewise.
* sysdeps/ieee754/dbl-64/s_asinh.c: Likewise.
* sysdeps/ieee754/dbl-64/s_atan.c: Likewise.
* sysdeps/ieee754/dbl-64/s_erf.c: Likewise.
* sysdeps/ieee754/dbl-64/s_expm1.c: Likewise.
* sysdeps/ieee754/dbl-64/s_log1p.c: Likewise.
* sysdeps/ieee754/dbl-64/s_sin.c: Likewise.
* sysdeps/ieee754/dbl-64/s_sincos.c: Likewise.
* sysdeps/ieee754/dbl-64/s_tan.c: Likewise.
* sysdeps/ieee754/dbl-64/s_tanh.c: Likewise.
* sysdeps/ieee754/flt-32/e_asinf.c: Likewise.
* sysdeps/ieee754/flt-32/e_atanhf.c: Likewise.
* sysdeps/ieee754/flt-32/e_gammaf_r.c: Likewise.
* sysdeps/ieee754/flt-32/e_j1f.c: Likewise.
* sysdeps/ieee754/flt-32/e_jnf.c: Likewise.
* sysdeps/ieee754/flt-32/e_sinhf.c: Likewise.
* sysdeps/ieee754/flt-32/k_sinf.c: Likewise.
* sysdeps/ieee754/flt-32/k_tanf.c: Likewise.
* sysdeps/ieee754/flt-32/s_asinhf.c: Likewise.
* sysdeps/ieee754/flt-32/s_atanf.c: Likewise.
* sysdeps/ieee754/flt-32/s_erff.c: Likewise.
* sysdeps/ieee754/flt-32/s_expm1f.c: Likewise.
* sysdeps/ieee754/flt-32/s_log1pf.c: Likewise.
* sysdeps/ieee754/flt-32/s_tanhf.c: Likewise.
* sysdeps/ieee754/ldbl-128/e_asinl.c: Likewise.
* sysdeps/ieee754/ldbl-128/e_atanhl.c: Likewise.
* sysdeps/ieee754/ldbl-128/e_expl.c: Likewise.
* sysdeps/ieee754/ldbl-128/e_gammal_r.c: Likewise.
* sysdeps/ieee754/ldbl-128/e_hypotl.c: Likewise.
* sysdeps/ieee754/ldbl-128/e_j1l.c: Likewise.
* sysdeps/ieee754/ldbl-128/e_jnl.c: Likewise.
* sysdeps/ieee754/ldbl-128/e_sinhl.c: Likewise.
* sysdeps/ieee754/ldbl-128/k_sincosl.c: Likewise.
* sysdeps/ieee754/ldbl-128/k_sinl.c: Likewise.
* sysdeps/ieee754/ldbl-128/k_tanl.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_asinhl.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_atanl.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_erfl.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_expm1l.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_log1pl.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_tanhl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_asinl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_atanhl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_hypotl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_j1l.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_jnl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_powl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_sinhl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/k_sincosl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/k_sinl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/k_tanl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_asinhl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_atanl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_erfl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_fmal.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_tanhl.c: Likewise.
* sysdeps/ieee754/ldbl-96/e_asinl.c: Likewise.
* sysdeps/ieee754/ldbl-96/e_atanhl.c: Likewise.
* sysdeps/ieee754/ldbl-96/e_gammal_r.c: Likewise.
* sysdeps/ieee754/ldbl-96/e_hypotl.c: Likewise.
* sysdeps/ieee754/ldbl-96/e_j1l.c: Likewise.
* sysdeps/ieee754/ldbl-96/e_jnl.c: Likewise.
* sysdeps/ieee754/ldbl-96/e_sinhl.c: Likewise.
* sysdeps/ieee754/ldbl-96/k_sinl.c: Likewise.
* sysdeps/ieee754/ldbl-96/k_tanl.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_asinhl.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_erfl.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_tanhl.c: Likewise.
* sysdeps/powerpc/fpu/e_hypot.c: Likewise.
* sysdeps/x86/fpu/powl_helper.c: Likewise.
* sysdeps/ieee754/dbl-64/s_nextup.c: Include <float.h>.
* sysdeps/ieee754/flt-32/s_nextupf.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_nextupl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_nextupl.c: Likewise.
* sysdeps/ieee754/ldbl-96/s_nextupl.c: Likewise.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch continues cleaning up math_private.h by moving the
math_opt_barrier and math_force_eval macros to a separate header
math-barriers.h.
At present, those macros are inside a "#ifndef math_opt_barrier" in
math_private.h to allow architectures to override them and then
#include_next the generic math_private.h. With a separate
math-barriers.h header, no such #ifndef or #include_next is needed;
architectures just have their own alternative version of
math-barriers.h when providing their own optimized versions that avoid
going through memory unnecessarily. The generic math-barriers.h has a
comment added to document these two macros.
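A sketch of what generic definitions of this kind look like (architecture
versions differ, typically keeping the value in a register instead of
forcing it through memory):

  /* math_opt_barrier returns its argument while making it opaque to the
     optimizer; math_force_eval forces an expression to be evaluated for
     its floating-point side effects (exceptions), even if the result is
     otherwise unused.  */
  #define math_opt_barrier(x)                                     \
    ({ __typeof (x) __x = (x); __asm ("" : "+m" (__x)); __x; })
  #define math_force_eval(x)                                      \
    ({ __typeof (x) __x = (x); __asm __volatile ("" : : "m" (__x)); })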
In this patch, math_private.h is made to #include <math-barriers.h>,
so files using these macros do not need updating yet. That is because
of uses of math_force_eval in math_check_force_underflow and
math_check_force_underflow_nonneg, which are still defined in
math_private.h. Once those are moved out to a separate header, that
separate header can be made to include <math-barriers.h>, as can the
other files directly using these barrier macros, and then the include
of <math-barriers.h> from math_private.h can be removed.
Tested for x86_64 and x86. Also tested with build-many-glibcs.py that
installed stripped shared libraries are unchanged by this patch.
* sysdeps/generic/math-barriers.h: New file.
* sysdeps/generic/math_private.h [!math_opt_barrier]
(math_opt_barrier): Move to math-barriers.h.
[!math_opt_barrier] (math_force_eval): Likewise.
* sysdeps/aarch64/fpu/math-barriers.h: New file.
* sysdeps/aarch64/fpu/math_private.h (math_opt_barrier): Move to
math-barriers.h.
(math_force_eval): Likewise.
* sysdeps/alpha/fpu/math-barriers.h: New file.
* sysdeps/alpha/fpu/math_private.h (math_opt_barrier): Move to
math-barriers.h.
(math_force_eval): Likewise.
* sysdeps/x86/fpu/math-barriers.h: New file.
* sysdeps/i386/fpu/fenv_private.h (math_opt_barrier): Move to
math-barriers.h.
(math_force_eval): Likewise.
* sysdeps/m68k/m680x0/fpu/math_private.h: Move to....
* sysdeps/m68k/m680x0/fpu/math-barriers.h: ... here. Adjust
multiple-include guard for rename.
* sysdeps/powerpc/fpu/math-barriers.h: New file.
* sysdeps/powerpc/fpu/math_private.h (math_opt_barrier): Move to
math-barriers.h.
(math_force_eval): Likewise.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch continues cleaning up the math_private.h header, which
contains lots of different definitions, many of which are only needed
by a limited subset of files using that header (and some of which are
overridden by architectures that only want to override selected parts
of the header), by moving the math_narrow_eval macro out to a separate
math-narrow-eval.h header, only included by those files that need it.
That header is placed in include/ (since it's used in stdlib/, not
just files built in math/, but no sysdeps variants are needed at
present).
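A sketch of what such a header provides (simplified): on targets with
excess precision (FLT_EVAL_METHOD != 0, e.g. x87) the value is forced
through memory so it is rounded to its nominal type, elsewhere the macro
is a no-op.

  #include <float.h>

  #if FLT_EVAL_METHOD != 0
  # define math_narrow_eval(x)                                    \
    ({ __typeof (x) __x = (x); __asm ("" : "+m" (__x)); __x; })
  #else
  # define math_narrow_eval(x) (x)
  #endif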
Tested for x86_64, and with build-many-glibcs.py. (Installed stripped
shared libraries change because of line numbers in assertions in
strtod_l.c.)
* include/math-narrow-eval.h: New file. Contents moved from ....
* sysdeps/generic/math_private.h: ... here.
(math_narrow_eval): Remove macro. Moved to math-narrow-eval.h.
[FLT_EVAL_METHOD != 0] (excess_precision): Likewise.
* math/s_fdim_template.c: Include <math-narrow-eval.h>.
* stdlib/strtod_l.c: Likewise.
* sysdeps/i386/fpu/s_f32xaddf64.c: Likewise.
* sysdeps/i386/fpu/s_f32xsubf64.c: Likewise.
* sysdeps/i386/fpu/s_fdim.c: Likewise.
* sysdeps/ieee754/dbl-64/e_cosh.c: Likewise.
* sysdeps/ieee754/dbl-64/e_gamma_r.c: Likewise.
* sysdeps/ieee754/dbl-64/e_j1.c: Likewise.
* sysdeps/ieee754/dbl-64/e_jn.c: Likewise.
* sysdeps/ieee754/dbl-64/e_lgamma_r.c: Likewise.
* sysdeps/ieee754/dbl-64/e_sinh.c: Likewise.
* sysdeps/ieee754/dbl-64/gamma_productf.c: Likewise.
* sysdeps/ieee754/dbl-64/k_rem_pio2.c: Likewise.
* sysdeps/ieee754/dbl-64/lgamma_neg.c: Likewise.
* sysdeps/ieee754/dbl-64/s_erf.c: Likewise.
* sysdeps/ieee754/dbl-64/s_llrint.c: Likewise.
* sysdeps/ieee754/dbl-64/s_lrint.c: Likewise.
* sysdeps/ieee754/flt-32/e_coshf.c: Likewise.
* sysdeps/ieee754/flt-32/e_exp2f.c: Likewise.
* sysdeps/ieee754/flt-32/e_expf.c: Likewise.
* sysdeps/ieee754/flt-32/e_gammaf_r.c: Likewise.
* sysdeps/ieee754/flt-32/e_j1f.c: Likewise.
* sysdeps/ieee754/flt-32/e_jnf.c: Likewise.
* sysdeps/ieee754/flt-32/e_lgammaf_r.c: Likewise.
* sysdeps/ieee754/flt-32/e_sinhf.c: Likewise.
* sysdeps/ieee754/flt-32/k_rem_pio2f.c: Likewise.
* sysdeps/ieee754/flt-32/lgamma_negf.c: Likewise.
* sysdeps/ieee754/flt-32/s_erff.c: Likewise.
* sysdeps/ieee754/flt-32/s_llrintf.c: Likewise.
* sysdeps/ieee754/flt-32/s_lrintf.c: Likewise.
* sysdeps/ieee754/ldbl-96/gamma_product.c: Likewise.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Remove the __slowexp code, so exp is no longer correctly rounded. The
result is computed to about 70 bits of precision, so the worst-case ulp
error is about 0.500007 in nearest rounding mode.
* manual/probes.texi: Remove slowexp probes.
* math/Makefile: Remove slowexp.
* sysdeps/generic/math_private.h (__slowexp): Remove.
* sysdeps/ieee754/dbl-64/e_exp.c (__ieee754_exp): Remove __slowexp and
document error bounds.
* sysdeps/i386/fpu/slowexp.c: Remove.
* sysdeps/ia64/fpu/slowexp.c: Remove.
* sysdeps/ieee754/dbl-64/slowexp.c: Remove.
* sysdeps/ieee754/dbl-64/uexp.h (err_0): Remove.
* sysdeps/m68k/m680x0/fpu/slowexp.c: Remove.
* sysdeps/powerpc/power4/fpu/Makefile (CPPFLAGS-slowexp.c): Remove.
* sysdeps/x86_64/fpu/multiarch/Makefile: Remove slowexp-fma.
* sysdeps/x86_64/fpu/multiarch/e_exp-avx.c (__slowexp): Remove.
* sysdeps/x86_64/fpu/multiarch/e_exp-fma.c (__slowexp): Remove.
* sysdeps/x86_64/fpu/multiarch/e_exp-fma4.c (__slowexp): Remove.
* sysdeps/x86_64/fpu/multiarch/slowexp-avx.c: Remove.
* sysdeps/x86_64/fpu/multiarch/slowexp-fma.c: Remove.
* sysdeps/x86_64/fpu/multiarch/slowexp-fma4.c: Remove.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Remove the slow paths from pow. Like several other double-precision math
functions, pow was exactly rounded. This is not required of math functions
and causes major overheads, as it requires multiple fallbacks using
higher-precision arithmetic if a result is close to 0.5 ULP. Ridiculous
slowdowns of up to 100000x have been reported when the highest-precision
path triggers.
All GLIBC math tests pass on AArch64 and x64 (with the ULP of pow set to 1).
The worst-case error is ~0.506 ULP. A simple test over a few hundred million
values shows pow is 10% faster on average. This fixes BZ #13932.
[BZ #13932]
* sysdeps/ieee754/dbl-64/uexp.h (err_1): Remove.
* benchtests/pow-inputs: Update comment for slow path cases.
* manual/probes.texi (slowpow_p10): Delete removed probe.
(slowpow_p10): Likewise.
* math/Makefile: Remove halfulp.c and slowpow.c.
* sysdeps/aarch64/libm-test-ulps: Set ULP of pow to 1.
* sysdeps/generic/math_private.h (__exp1): Remove error argument.
(__halfulp): Remove.
(__slowpow): Remove.
* sysdeps/i386/fpu/halfulp.c: Delete file.
* sysdeps/i386/fpu/slowpow.c: Likewise.
* sysdeps/ia64/fpu/halfulp.c: Likewise.
* sysdeps/ia64/fpu/slowpow.c: Likewise.
* sysdeps/ieee754/dbl-64/e_exp.c (__exp1): Remove error argument,
improve comments and add error analysis.
* sysdeps/ieee754/dbl-64/e_pow.c (__ieee754_pow): Add error analysis.
(power1): Remove function:
(log1): Remove error argument, add error analysis.
(my_log2): Remove function.
* sysdeps/ieee754/dbl-64/halfulp.c: Delete file.
* sysdeps/ieee754/dbl-64/slowpow.c: Likewise.
* sysdeps/m68k/m680x0/fpu/halfulp.c: Likewise.
* sysdeps/m68k/m680x0/fpu/slowpow.c: Likewise.
* sysdeps/powerpc/power4/fpu/Makefile: Remove CPPFLAGS-slowpow.c.
* sysdeps/x86_64/fpu/libm-test-ulps: Set ULP of pow to 1.
* sysdeps/x86_64/fpu/multiarch/Makefile: Remove slowpow-fma.c,
slowpow-fma4.c, halfulp-fma.c, halfulp-fma4.c.
* sysdeps/x86_64/fpu/multiarch/e_pow-fma.c (__slowpow): Remove define.
* sysdeps/x86_64/fpu/multiarch/e_pow-fma4.c (__slowpow): Likewise.
* sysdeps/x86_64/fpu/multiarch/halfulp-fma.c: Delete file.
* sysdeps/x86_64/fpu/multiarch/halfulp-fma4.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowpow-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowpow-fma4.c: Likewise.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Continuing the process of improving and cleaning up the handling of
configurations lacking support for floating-point exceptions and
rounding modes, this patch adds trivial inline definitions of
feholdexcept and __feholdexcept to the set of inlines for such
configurations in math_private.h. These inlines were missing from the
tile version used as a basis for the previous inlines, despite a few
such function calls ending up in libm.so.
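A sketch of what such a trivial inline amounts to, valid only for
configurations with no exception flags and no alternative rounding modes
(there is no state to save, so the call just succeeds):

  #include <fenv.h>

  static inline int
  feholdexcept_sketch (fenv_t *envp)
  {
    (void) envp;   /* nothing to record */
    return 0;
  }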
Tested with build-many-glibcs.py. As expected, installed stripped
shared libraries are unchanged for architectures supporting exceptions
and rounding modes, but changed for architectures lacking such
support.
* sysdeps/generic/math_private.h
[!FE_HAVE_ROUNDING_MODES && FE_ALL_EXCEPT == 0] (feholdexcept):
New inline function.
[!FE_HAVE_ROUNDING_MODES && FE_ALL_EXCEPT == 0] (__feholdexcept):
Likewise.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The tile version of math_private.h defines some inline functions for
fenv.h functions, to optimize away internal calls to these functions
that do nothing given no support for floating-point exceptions and
rounding modes. (Some functions may have error cases for invalid
arguments, but those aren't applicable to the internal calls from
within glibc.) Other configurations lacking support for exceptions
and rounding modes lack such inline functions. This patch moves them
to the generic math_private.h, appropriately conditioned, so that all
such configurations can benefit from them.
include/fenv.h is made to check whether there are any non-default
rounding modes; that needs to be done there, rather than later,
because get-rounding-mode.h defines values for otherwise unsupported
FE_* rounding modes. It also gives an error if FE_TONEAREST is
undefined, a case that already did not work for building the glibc
testsuite; the convention has by now been established that all
architectures need to provide a version of bits/fenv.h that at least
defines FE_TONEAREST.
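A sketch of that include/fenv.h logic (simplified to the macro names
mentioned above):

  #include <fenv.h>

  #ifndef FE_TONEAREST
  # error "bits/fenv.h must define FE_TONEAREST"
  #endif
  #if defined FE_TOWARDZERO || defined FE_DOWNWARD || defined FE_UPWARD
  # define FE_HAVE_ROUNDING_MODES 1
  #else
  # define FE_HAVE_ROUNDING_MODES 0
  #endif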
Tested with build-many-glibcs.py. As expected, installed stripped
shared libraries are unchanged for tile and for architectures
supporting exceptions and rounding modes, but changed for non-tile
architectures not supporting exceptions and rounding modes that
previously lacked this optimization (e.g. Nios II libm.so is about 1kB
smaller).
The optimization is not in fact complete (does not cover feholdexcept
/ __feholdexcept, so a few calls to those remain unnecessarily within
libm even after this patch), but that can be dealt with separately.
* include/fenv.h [!_ISOMAC && !FE_TONEAREST]: Give #error.
[!_ISOMAC] (FE_HAVE_ROUNDING_MODES): New macro.
* sysdeps/generic/math_private.h
[!FE_HAVE_ROUNDING_MODES && FE_ALL_EXCEPT == 0] (fegetenv): New
inline function.
[!FE_HAVE_ROUNDING_MODES && FE_ALL_EXCEPT == 0] (__fegetenv):
Likewise.
[!FE_HAVE_ROUNDING_MODES && FE_ALL_EXCEPT == 0] (fesetenv):
Likewise.
[!FE_HAVE_ROUNDING_MODES && FE_ALL_EXCEPT == 0] (__fesetenv):
Likewise.
[!FE_HAVE_ROUNDING_MODES && FE_ALL_EXCEPT == 0] (feupdateenv):
Likewise.
[!FE_HAVE_ROUNDING_MODES && FE_ALL_EXCEPT == 0] (__feupdateenv):
Likewise.
[!FE_HAVE_ROUNDING_MODES] (fegetround): Likewise.
[!FE_HAVE_ROUNDING_MODES] (__fegetround): Likewise.
[!FE_HAVE_ROUNDING_MODES] (fesetround): Likewise.
[!FE_HAVE_ROUNDING_MODES] (__fesetround): Likewise.
* sysdeps/tile/math_private.h (fegetenv): Remove inline function.
(__fegetenv): Likewise.
(fesetenv): Likewise.
(__fesetenv): Likewise.
(feupdateenv): Likewise.
(__feupdateenv): Likewise.
(fegetround): Likewise.
(__fegetround): Likewise.
(fesetround): Likewise.
(__fesetround): Likewise.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Various configurations lacking support for floating-point exceptions
and rounding modes have a math_private.h that overrides certain
functions and macros, internal and external, to avoid references to
FE_* constants that are undefined in those configurations. For
example, there are unconditional feraiseexcept (FE_INVALID) calls in
generic libm code, and these macro definitions duly define
feraiseexcept to ignore its argument to avoid an error from FE_INVALID
being undefined.
In fact it is easy to tell in an architecture-independent way whether
this is needed, by testing whether FE_ALL_EXCEPT == 0. Thus, this
patch puts such a test, and feraiseexcept and __feraiseexcept macros,
in the generic math_private.h, so reducing the duplication between
architecture versions of this header. The feclearexcept macro present
in several versions of this header, and fetestexcept in the tile
version, are not needed; they would have been needed before there were
proper soft-fp fma implementations (when generic versions, that depend
on FE_TOWARDZERO and FE_INEXACT, were being used for configurations
not supporting those features), but aren't needed any more, and so are
removed.
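A sketch of the architecture-independent fallback described above:

  #include <fenv.h>

  /* With no exception flags defined, FE_ALL_EXCEPT is 0 and the internal
     feraiseexcept calls in generic libm code can be compiled away.  */
  #if FE_ALL_EXCEPT == 0
  # define feraiseexcept(excepts)   ((void) (excepts), 0)
  # define __feraiseexcept(excepts) ((void) (excepts), 0)
  #endif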
The tile version of this header has several inline functions for
fenv.h functions to optimize calls to them away in such configurations
where they do nothing useful, and all these header versions also have
definitions of some of the libc_fe* internal macros. I intend to make
those generic in subsequent patches.
Tested with build-many-glibcs.py that installed stripped shared
libraries are unchanged by this patch.
* sysdeps/generic/math_private.h [FE_ALL_EXCEPT == 0]
(feraiseexcept): New macro.
[FE_ALL_EXCEPT == 0] (__feraiseexcept): Likewise.
* sysdeps/m68k/coldfire/nofpu/math_private.h (feraiseexcept):
Remove macro.
(__feraiseexcept): Likewise.
(feclearexcept): Likewise.
* sysdeps/microblaze/math_private.h (feraiseexcept): Likewise.
(__feraiseexcept): Likewise.
(feclearexcept): Likewise.
* sysdeps/nios2/math_private.h (feraiseexcept): Likewise.
(__feraiseexcept): Likewise.
(feclearexcept): Likewise.
* sysdeps/tile/math_private.h (feraiseexcept): Likewise.
(__feraiseexcept): Likewise.
(feclearexcept): Likewise.
(fetestexcept): Likewise.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
For soft-float powerpc, the math/test-nearbyint-except-2 test fails
because nearbyintl traps when traps on "inexact" are enabled on entry
(and an "inexact" exception is generated internally, though cleared
for the final return).
The problem is the default implementation of
libc_feholdsetround_noex_ctx, which does not disable exception traps.
There is some ambiguity about whether the *noex* interfaces are
required to do so or only permitted to do so. But given that we
support fe* interfaces to enable and disable traps (on architectures
with that functionality), functions that must not raise an exception
(must not leave the flag set on exit if not set on entry) should also
not trap on it when traps on that exception are enabled. So it is
appropriate to define these interfaces to have the feholdexcept effect
of disabling exception traps; this patch updates the default
implementation and comments accordingly.
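A simplified sketch of the fix (the real code is the glibc-internal
libc_feholdsetround_noex_ctx; public fenv.h names are used here for
illustration): save the environment with feholdexcept, which also
disables traps, rather than with fegetenv.

  #include <fenv.h>

  static inline void
  holdsetround_noex_sketch (fenv_t *env, int round)
  {
    feholdexcept (env);   /* save env, clear flags, disable traps */
    fesetround (round);
  }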
At least some architecture versions already disable traps; there are
few uses of the *noex* interfaces at all, and while it's possible
there are bugs on any architecture versions failing to disable traps
that appear in the exp2 and remainder implementations, there are
currently no tests, other than this one for nearbyintl (where only the
ldbl-128ibm implementation uses SET_RESTORE_ROUND_NOEX), that would
fail as a result of such a bug. (Hard-float powerpc does disable
traps here, hence the nearbyintl failure not appearing there.)
Tested for powerpc (soft-float). This brings that configuration to
clean math/ test results, provided you build with GCC 8 to get the fix
for GCC bug 64811.
[BZ #22702]
* sysdeps/generic/math_private.h (libc_feresetround_noex): Update
comment to say exceptions are discarded.
(libc_feholdsetround_noex_ctx): Use __feholdexcept instead of
__fegetenv.
(SET_RESTORE_ROUND_NOEX): Update comment to say non-stop mode must
be enabled.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Revert:
2017-12-19 Joseph Myers <joseph@codesourcery.com>
* sysdeps/x86_64/fpu/libm-test-ulps: Update.
2017-12-19 Patrick McGehearty <patrick.mcgehearty@oracle.com>
* sysdeps/ieee754/dbl-64/e_exp.c: Include <math-svid-compat.h> and
<errno.h>. Include "eexp.tbl".
(half): New constant.
(one): Likewise.
(__ieee754_exp): Rewrite.
(__slowexp): Remove prototype.
* sysdeps/ieee754/dbl-64/eexp.tbl: New file.
* sysdeps/ieee754/dbl-64/slowexp.c: Remove file.
* sysdeps/i386/fpu/slowexp.c: Likewise.
* sysdeps/ia64/fpu/slowexp.c: Likewise.
* sysdeps/m68k/m680x0/fpu/slowexp.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowexp-avx.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowexp-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowexp-fma4.c: Likewise.
* sysdeps/generic/math_private.h (__slowexp): Remove prototype.
* sysdeps/ieee754/dbl-64/e_pow.c: Remove mention of slowexp.c in
comment.
* sysdeps/powerpc/power4/fpu/Makefile [$(subdir) = math]
(CPPFLAGS-slowexp.c): Remove variable.
* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines):
Remove slowexp-fma, slowexp-fma4 and slowexp-avx.
(CFLAGS-slowexp-fma.c): Remove variable.
(CFLAGS-slowexp-fma4.c): Likewise.
(CFLAGS-slowexp-avx.c): Likewise.
* sysdeps/x86_64/fpu/multiarch/e_exp-avx.c (__slowexp): Do not
define as macro.
* sysdeps/x86_64/fpu/multiarch/e_exp-fma.c (__slowexp): Likewise.
* sysdeps/x86_64/fpu/multiarch/e_exp-fma4.c (__slowexp): Likewise.
* math/Makefile (type-double-routines): Remove slowexp.
* manual/probes.texi (slowexp_p6): Remove.
(slowexp_p32): Likewise.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
These changes will be active for all platforms that don't provide
their own exp() routines. They will also be active for ieee754
versions of ccos, ccosh, cosh, csin, csinh, sinh, exp10, gamma, and
erf.
The typical performance gain is around 5x when measured on
Sparc S7 for common values between exp(1) and exp(40).
Using the glibc perf tests on sparc,
            sparc (nsec)         x86 (nsec)
            old       new        old      new
   max    17629       395       5173      144
   min      399        54         15       13
   mean    5317       200       1349       23
The extreme max times for the old (ieee754) exp are due to the
multiprecision computation in the old algorithm when the true value is
very near 0.5 ulp away from a value representable in double
precision. The new algorithm does not take special measures for those
cases. The current glibc exp perf tests overrepresent those values.
Informal testing suggests approximately one in 200 cases might
invoke the high-cost computation. The performance advantage of the new
algorithm for other values is still large but not as large as indicated
by the chart above.
Glibc correctness tests for exp() and expf() were run. Within the
test suite, 3 input values were found to cause 1-bit (1 ulp) differences
when "FE_TONEAREST" rounding mode is set. No differences in exp() were
seen for the tested values for the other rounding modes.
Typical example:
exp(-0x1.760cd2p+0) (-1.46113312244415283203125)
new code: 2.31973271630014299393707e-01 0x1.db14cd799387ap-3
old code: 2.31973271630014271638132e-01 0x1.db14cd7993879p-3
exp = 2.31973271630014285508337 (high precision)
Old delta: off by 0.49 ulp
New delta: off by 0.51 ulp
In addition, because ieee754_exp() is used by other routines, cexp()
showed test results with very small imaginary input values where the
imaginary portion of the result was off by 3 ulp when in upward
rounding mode, but not in the other rounding modes. For x86, tgamma
showed a few values where the ulp increased to 6 (max ulp for tgamma
is 5). Sparc tgamma did not show these failures. I presume the tgamma
differences are due to compiler optimization differences within the
gamma function. The gamma function is known to be difficult to compute
accurately.
* sysdeps/ieee754/dbl-64/e_exp.c: Include <math-svid-compat.h> and
<errno.h>. Include "eexp.tbl".
(half): New constant.
(one): Likewise.
(__ieee754_exp): Rewrite.
(__slowexp): Remove prototype.
* sysdeps/ieee754/dbl-64/eexp.tbl: New file.
* sysdeps/ieee754/dbl-64/slowexp.c: Remove file.
* sysdeps/i386/fpu/slowexp.c: Likewise.
* sysdeps/ia64/fpu/slowexp.c: Likewise.
* sysdeps/m68k/m680x0/fpu/slowexp.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowexp-avx.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowexp-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowexp-fma4.c: Likewise.
* sysdeps/generic/math_private.h (__slowexp): Remove prototype.
* sysdeps/ieee754/dbl-64/e_pow.c: Remove mention of slowexp.c in
comment.
* sysdeps/powerpc/power4/fpu/Makefile [$(subdir) = math]
(CPPFLAGS-slowexp.c): Remove variable.
* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines):
Remove slowexp-fma, slowexp-fma4 and slowexp-avx.
(CFLAGS-slowexp-fma.c): Remove variable.
(CFLAGS-slowexp-fma4.c): Likewise.
(CFLAGS-slowexp-avx.c): Likewise.
* sysdeps/x86_64/fpu/multiarch/e_exp-avx.c (__slowexp): Do not
define as macro.
* sysdeps/x86_64/fpu/multiarch/e_exp-fma.c (__slowexp): Likewise.
* sysdeps/x86_64/fpu/multiarch/e_exp-fma4.c (__slowexp): Likewise.
* math/Makefile (type-double-routines): Remove slowexp.
* manual/probes.texi (slowexp_p6): Remove.
(slowexp_p32): Likewise.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
math_private.h uses __MATH_TG in defining the min_of_type macro used
within libm, with min_of_type_<suffix> macros for each type. This
runs into problems with __MATH_TG expansions used with additional
_FloatN and _FloatNx type support, because those can end up
macro-expanding the FUNC argument to __MATH_TG before it gets
concatenated with a suffix - meaning that min_of_type_ can't
simultaneously be the macro name for double, and a prefix to other
macro names, since the latter case requires such premature macro
expansion not to occur. (This is not a problem for the uses of
__MATH_TG in installed headers because FUNC there is a function name
in the implementation namespace, and the suffixes themselves don't get
macro-expanded.)
This patch fixes the problem by making min_of_type_<suffix> macros
function-like, so no macro expansion occurs when min_of_type_ is
expanded on its own as a macro argument, only later when followed by
() after expansion.
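A simplified sketch of the resulting shape of these macros (assuming
__MATH_TG, glibc's type-generic dispatch macro, is available; the
_Float128 case is omitted):

  #include <float.h>
  #include <math.h>

  /* Function-like, so min_of_type_ passed as the FUNC argument of
     __MATH_TG is not expanded prematurely; expansion happens only after
     the type suffix has been pasted on and the trailing () follows.  */
  #define min_of_type_f()  FLT_MIN
  #define min_of_type_()   DBL_MIN
  #define min_of_type_l()  LDBL_MIN
  #define min_of_type(x)   __MATH_TG ((x), min_of_type_, ())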
Tested for x86_64, including in conjunction with _Float64x support
patches.
* sysdeps/generic/math_private.h (min_of_type_f): Make into a
function-like macro.
(min_of_type_): Likewise.
(min_of_type_l): Likewise.
(min_of_type_f128): Likewise.
(min_of_type): Pass () as last argument of __MATH_TG.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch obsoletes support for SVID libm error handling (the system
where a user-defined function matherr is called on a libm function
error; only enabled if you also set _LIB_VERSION = _SVID_ or
_LIB_VERSION = _XOPEN_) and the use of the _LIB_VERSION global
variable to control libm error handling. matherr and _LIB_VERSION are
made into compat symbols, not supported for new ports or for static
linking. The libieee.a object file (which sets _LIB_VERSION = _IEEE_,
so disabling errno setting for some functions) is also removed, and
all the related definitions are removed from math.h.
The manual already recommends against using matherr, and it's already
not supported for _Float128 functions (those use new wrappers that
don't support matherr, only errno) - this patch means that it becomes
possible to e.g. add sinf32 as an alias to sinf without that resulting
in undesired matherr support in sinf32 for existing glibc ports.
matherr support is not part of any standard supported by glibc (it was
removed in XPG4).
Because matherr is a function to be defined by the user, of course
user programs defining such a function will still continue to link; it
just quietly won't be used. If they try to write to the library's
copy of _LIB_VERSION to enable SVID error handling, however, they will
get a link error (but if they define their own _LIB_VERSION variable,
they won't).
I expect the most likely case of build failures from this patch to be
programs with unconditional cargo-culted uses of -lieee (based on a
notion of "I want IEEE floating point", not any actual requirement for
that library).
Ideally, the new-port-or-static-linking case would use the new
wrappers used for _Float128. This is not implemented in this patch,
because of the complication of architecture-specific (powerpc32 and
sparc) sqrt wrappers that use _LIB_VERSION and __kernel_standard
directly. Thus, the old wrappers and __kernel_standard are still
built unconditionally, and _LIB_VERSION still exists in static libm.
But when the old wrappers and __kernel_standard are built in the
non-compat case, _LIB_VERSION and matherr are defined as macros so
code to support those features isn't actually built into static libm
or new ports' shared libm after this patch.
I intend to move to the new wrappers for static libm and new ports in
followup patches. I believe the sqrt wrappers for powerpc32 and sparc
can reasonably be removed. GCC already optimizes the normal case of
sqrt by generating code that uses a hardware instruction and only
calls the sqrt function if the argument was negative (if
-fno-math-errno, of course, it just uses the hardware instruction
without any check for negative argument being needed). Thus those
wrappers will only actually get called in the case of negative
arguments, which is not a case it makes sense to optimize for. But
even without removing the powerpc32 and sparc wrappers it should still
be possible to move to the new wrappers for static libm and new ports,
just without having those dubious architecture-specific optimizations
in static libm.
Everything said about matherr equally applies to matherrf and matherrl
(IA64-specific, undocumented), except that the structure of IA64 libm
means it won't be converted to using the new wrappers (it doesn't use
the old ones either, but its own error-handling code instead).
As with other tests of compat symbols, I expect test-matherr and
test-matherr-2 to need to become appropriately conditional once we
have a system for disabling such tests for ports too new to have the
relevant symbols.
Tested for x86_64 and x86, and with build-many-glibcs.py.
* math/math.h [__USE_MISC] (_LIB_VERSION_TYPE): Remove.
[__USE_MISC] (_LIB_VERSION): Likewise.
[__USE_MISC] (struct exception): Likewise.
[__USE_MISC] (matherr): Likewise.
[__USE_MISC] (DOMAIN): Likewise.
[__USE_MISC] (SING): Likewise.
[__USE_MISC] (OVERFLOW): Likewise.
[__USE_MISC] (UNDERFLOW): Likewise.
[__USE_MISC] (TLOSS): Likewise.
[__USE_MISC] (PLOSS): Likewise.
[__USE_MISC] (HUGE): Likewise.
[__USE_XOPEN] (MAXFLOAT): Define even if [__USE_MISC].
* math/math-svid-compat.h: New file.
* conform/linknamespace.pl (@whitelist): Remove matherr, matherrf
and matherrl.
* include/math.h [!_ISOMAC] (__matherr): Remove.
* manual/arith.texi (FP Exceptions): Do not document matherr.
* math/Makefile (tests): Change test-matherr to test-matherr-3.
(tests-internal): New variable.
(install-lib): Do not add libieee.a.
(non-lib.a): Likewise.
(extra-objs): Do not add libieee.a and ieee-math.o.
(CPPFLAGS-s_lib_version.c): Remove variable.
($(objpfx)libieee.a): Remove rule.
($(addprefix $(objpfx), $(tests-internal)): Depend on $(libm).
* math/ieee-math.c: Remove.
* math/libm-test-support.c (matherr): Remove.
* math/test-matherr.c: Use <support/test-driver.c>. Add copyright
and license notices. Include <math-svid-compat.h> and
<shlib-compat.h>.
(matherr): Undefine as macro. Use compat_symbol_reference.
(_LIB_VERSION): Likewise.
* math/test-matherr-2.c: New file.
* math/test-matherr-3.c: Likewise.
* sysdeps/generic/math_private.h (__kernel_standard): Remove
declaration.
(__kernel_standard_f): Likewise.
(__kernel_standard_l): Likewise.
* sysdeps/ieee754/s_lib_version.c: Do not include <math.h> or
<math_private.h>. Include <math-svid-compat.h>.
(_LIB_VERSION): Undefine as macro.
(_LIB_VERSION_INTERNAL): Always initialize to _POSIX_. Define
only if [LIBM_SVID_COMPAT || !defined SHARED]. If
[LIBM_SVID_COMPAT], use compat_symbol.
* sysdeps/ieee754/s_matherr.c: Do not include <math.h> or
<math_private.h>. Include <math-svid-compat.h>.
(matherr): Undefine as macro.
(__matherr): Define only if [LIBM_SVID_COMPAT]. Use
compat_symbol.
* sysdeps/ia64/fpu/libm_error.c: Include <math-svid-compat.h>.
[_LIBC && LIBM_SVID_COMPAT] (matherrf): Use
compat_symbol_reference.
[_LIBC && LIBM_SVID_COMPAT] (matherrl): Likewise.
[_LIBC && !LIBM_SVID_COMPAT] (matherrf): Define as macro.
[_LIBC && !LIBM_SVID_COMPAT] (matherrl): Likewise.
* sysdeps/ia64/fpu/libm_support.h: Include <math-svid-compat.h>.
(MATHERR_D): Remove declaration.
[!_LIBC] (_LIB_VERSION_TYPE): Likewise
[!LIBM_BUILD] (_LIB_VERSIONIMF): Likewise.
[LIBM_BUILD] (pmatherrf): Likewise.
[LIBM_BUILD] (pmatherr): Likewise.
[LIBM_BUILD] (pmatherrl): Likewise.
(DOMAIN): Likewise.
(SING): Likewise.
(OVERFLOW): Likewise.
(UNDERFLOW): Likewise.
(TLOSS): Likewise.
(PLOSS): Likewise.
* sysdeps/ia64/fpu/s_matherrf.c: Include <math-svid-compat.h>.
(__matherrf): Define only if [LIBM_SVID_COMPAT]. Use
compat_symbol.
* sysdeps/ia64/fpu/s_matherrl.c: Include <math-svid-compat.h>.
(__matherrl): Define only if [LIBM_SVID_COMPAT]. Use
compat_symbol.
* math/lgamma-compat.h: Include <math-svid-compat.h>.
* math/w_acos_compat.c: Likewise.
* math/w_acosf_compat.c: Likewise.
* math/w_acosh_compat.c: Likewise.
* math/w_acoshf_compat.c: Likewise.
* math/w_acoshl_compat.c: Likewise.
* math/w_acosl_compat.c: Likewise.
* math/w_asin_compat.c: Likewise.
* math/w_asinf_compat.c: Likewise.
* math/w_asinl_compat.c: Likewise.
* math/w_atan2_compat.c: Likewise.
* math/w_atan2f_compat.c: Likewise.
* math/w_atan2l_compat.c: Likewise.
* math/w_atanh_compat.c: Likewise.
* math/w_atanhf_compat.c: Likewise.
* math/w_atanhl_compat.c: Likewise.
* math/w_cosh_compat.c: Likewise.
* math/w_coshf_compat.c: Likewise.
* math/w_coshl_compat.c: Likewise.
* math/w_exp10_compat.c: Likewise.
* math/w_exp10f_compat.c: Likewise.
* math/w_exp10l_compat.c: Likewise.
* math/w_exp2_compat.c: Likewise.
* math/w_exp2f_compat.c: Likewise.
* math/w_exp2l_compat.c: Likewise.
* math/w_fmod_compat.c: Likewise.
* math/w_fmodf_compat.c: Likewise.
* math/w_fmodl_compat.c: Likewise.
* math/w_hypot_compat.c: Likewise.
* math/w_hypotf_compat.c: Likewise.
* math/w_hypotl_compat.c: Likewise.
* math/w_j0_compat.c: Likewise.
* math/w_j0f_compat.c: Likewise.
* math/w_j0l_compat.c: Likewise.
* math/w_j1_compat.c: Likewise.
* math/w_j1f_compat.c: Likewise.
* math/w_j1l_compat.c: Likewise.
* math/w_jn_compat.c: Likewise.
* math/w_jnf_compat.c: Likewise.
* math/w_jnl_compat.c: Likewise.
* math/w_lgamma_main.c: Likewise.
* math/w_lgamma_r_compat.c: Likewise.
* math/w_lgammaf_main.c: Likewise.
* math/w_lgammaf_r_compat.c: Likewise.
* math/w_lgammal_main.c: Likewise.
* math/w_lgammal_r_compat.c: Likewise.
* math/w_log10_compat.c: Likewise.
* math/w_log10f_compat.c: Likewise.
* math/w_log10l_compat.c: Likewise.
* math/w_log2_compat.c: Likewise.
* math/w_log2f_compat.c: Likewise.
* math/w_log2l_compat.c: Likewise.
* math/w_log_compat.c: Likewise.
* math/w_logf_compat.c: Likewise.
* math/w_logl_compat.c: Likewise.
* math/w_pow_compat.c: Likewise.
* math/w_powf_compat.c: Likewise.
* math/w_powl_compat.c: Likewise.
* math/w_remainder_compat.c: Likewise.
* math/w_remainderf_compat.c: Likewise.
* math/w_remainderl_compat.c: Likewise.
* math/w_scalb_compat.c: Likewise.
* math/w_scalbf_compat.c: Likewise.
* math/w_scalbl_compat.c: Likewise.
* math/w_sinh_compat.c: Likewise.
* math/w_sinhf_compat.c: Likewise.
* math/w_sinhl_compat.c: Likewise.
* math/w_sqrt_compat.c: Likewise.
* math/w_sqrtf_compat.c: Likewise.
* math/w_sqrtl_compat.c: Likewise.
* math/w_tgamma_compat.c: Likewise.
* math/w_tgammaf_compat.c: Likewise.
* math/w_tgammal_compat.c: Likewise.
* sysdeps/ieee754/dbl-64/w_exp_compat.c: Likewise.
* sysdeps/ieee754/flt-32/w_expf_compat.c: Likewise.
* sysdeps/ieee754/k_standard.c: Likewise.
* sysdeps/ieee754/k_standardf.c: Likewise.
* sysdeps/ieee754/k_standardl.c: Likewise.
* sysdeps/ieee754/ldbl-128/w_expl_compat.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/w_expl_compat.c: Likewise.
* sysdeps/ieee754/ldbl-96/w_expl_compat.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/w_sqrt_compat.S: Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/w_sqrtf_compat.S: Likewise.
* sysdeps/powerpc/powerpc32/power5/fpu/w_sqrt_compat.S: Likewise.
* sysdeps/powerpc/powerpc32/power5/fpu/w_sqrtf_compat.S: Likewise.
* sysdeps/sparc/sparc32/fpu/w_sqrt_compat.S: Likewise.
* sysdeps/sparc/sparc32/fpu/w_sqrtf_compat.S: Likewise.
* sysdeps/sparc/sparc32/sparcv9/fpu/multiarch/w_sqrt_compat-vis3.S:
Likewise.
* sysdeps/sparc/sparc32/sparcv9/fpu/multiarch/w_sqrtf_compat-vis3.S:
Likewise.
* sysdeps/sparc/sparc32/sparcv9/fpu/w_sqrt_compat.S: Likewise.
* sysdeps/sparc/sparc32/sparcv9/fpu/w_sqrtf_compat.S: Likewise.
* sysdeps/sparc/sparc64/fpu/w_sqrt_compat.S: Likewise.
* sysdeps/sparc/sparc64/fpu/w_sqrtf_compat.S: Likewise.
| |
This patch reimplements the libm-internal min_of_type macro to use
__MATH_TG instead of its own local type-generic implementation, so
simplifying the code and reducing the number of different type-generic
implementation variants in use in glibc.
Tested for x86_64.
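As an illustrative sketch of the simplification (assumed names and shapes, not the exact glibc definitions), the per-type minima collapse into one dispatch through __MATH_TG:
  #include <float.h>
  /* Per-type minimum normal values; the trailing () lets __MATH_TG
     paste the f//l suffix and then invoke the chosen macro.  */
  #define min_of_type_f()  FLT_MIN
  #define min_of_type_()   DBL_MIN
  #define min_of_type_l()  LDBL_MIN
  /* Type-generic: expands to the variant matching the type of the
     expression x, e.g.
     #define min_of_type(x) __MATH_TG (x, min_of_type_, ())  */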
* sysdeps/generic/math_private.h (__EXPR_FLT128): Remove macro.
(min_of_type_f): New macro.
(min_of_type_): Likewise.
(min_of_type_l): Likewise.
(min_of_type_f128): Likewise.
(min_of_type): Define using __MATH_TG and taking an expression
argument.
(math_check_force_underflow): Pass expression instead of type to
min_of_type.
(math_check_force_underflow_nonneg): Likewise.
| |
This patch changes libm code to make consistent use of C99 uintN_t
types instead of sometimes using those and sometimes using the older
nonstandard u_intN_t names. This makes sense as a cleanup in its own
right, and also facilitates merges to GCC's libquadmath (which gets
the types from stdint.h and so may not have u_intN_t available at
all).
Tested for x86_64, and with build-many-glibcs.py.
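For example, the word-access union in math_private.h now reads roughly as follows (a sketch of the big-endian layout; the little-endian variant swaps msw and lsw):
  #include <stdint.h>
  typedef union
  {
    double value;
    struct
    {
      uint32_t msw;  /* previously u_int32_t */
      uint32_t lsw;  /* previously u_int32_t */
    } parts;
    uint64_t word;
  } ieee_double_shape_type;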
* math/s_nextafter.c (__nextafter): Use uintN_t instead of
u_intN_t.
* math/s_nexttowardf.c (__nexttowardf): Likewise.
* sysdeps/generic/math_private.h (ieee_double_shape_type):
Likewise.
(ieee_float_shape_type): Likewise.
* sysdeps/i386/fpu/s_fpclassifyl.c (__fpclassifyl): Likewise.
* sysdeps/i386/fpu/s_isnanl.c (__isnanl): Likewise.
* sysdeps/i386/fpu/s_nextafterl.c (__nextafterl): Likewise.
* sysdeps/i386/fpu/s_nexttoward.c (__nexttoward): Likewise.
* sysdeps/i386/fpu/s_nexttowardf.c (__nexttowardf): Likewise.
* sysdeps/ieee754/dbl-64/e_acosh.c (__ieee754_acosh): Likewise.
* sysdeps/ieee754/dbl-64/e_cosh.c (__ieee754_cosh): Likewise.
* sysdeps/ieee754/dbl-64/e_fmod.c (__ieee754_fmod): Likewise.
* sysdeps/ieee754/dbl-64/e_gamma_r.c (__ieee754_gamma_r):
Likewise.
* sysdeps/ieee754/dbl-64/e_hypot.c (__ieee754_hypot): Likewise.
* sysdeps/ieee754/dbl-64/e_jn.c (__ieee754_jn): Likewise.
(__ieee754_yn): Likewise.
* sysdeps/ieee754/dbl-64/e_log10.c (__ieee754_log10): Likewise.
* sysdeps/ieee754/dbl-64/e_log2.c (__ieee754_log2): Likewise.
* sysdeps/ieee754/dbl-64/e_rem_pio2.c (__ieee754_rem_pio2):
Likewise.
* sysdeps/ieee754/dbl-64/e_sinh.c (__ieee754_sinh): Likewise.
* sysdeps/ieee754/dbl-64/s_ceil.c (__ceil): Likewise.
* sysdeps/ieee754/dbl-64/s_copysign.c (__copysign): Likewise.
* sysdeps/ieee754/dbl-64/s_erf.c (__erf): Likewise.
(__erfc): Likewise.
* sysdeps/ieee754/dbl-64/s_expm1.c (__expm1): Likewise.
* sysdeps/ieee754/dbl-64/s_finite.c (FINITE): Likewise.
* sysdeps/ieee754/dbl-64/s_floor.c (__floor): Likewise.
* sysdeps/ieee754/dbl-64/s_fpclassify.c (__fpclassify): Likewise.
* sysdeps/ieee754/dbl-64/s_isnan.c (__isnan): Likewise.
* sysdeps/ieee754/dbl-64/s_issignaling.c (__issignaling):
Likewise.
* sysdeps/ieee754/dbl-64/s_llrint.c (__llrint): Likewise.
* sysdeps/ieee754/dbl-64/s_llround.c (__llround): Likewise.
* sysdeps/ieee754/dbl-64/s_lrint.c (__lrint): Likewise.
* sysdeps/ieee754/dbl-64/s_lround.c (__lround): Likewise.
* sysdeps/ieee754/dbl-64/s_modf.c (__modf): Likewise.
* sysdeps/ieee754/dbl-64/s_nextup.c (__nextup): Likewise.
* sysdeps/ieee754/dbl-64/s_remquo.c (__remquo): Likewise.
* sysdeps/ieee754/dbl-64/s_round.c (__round): Likewise.
* sysdeps/ieee754/dbl-64/s_trunc.c (__trunc): Likewise.
* sysdeps/ieee754/dbl-64/wordsize-64/s_issignaling.c
(__issignaling): Likewise.
* sysdeps/ieee754/flt-32/e_atan2f.c (__ieee754_atan2f): Likewise.
* sysdeps/ieee754/flt-32/e_fmodf.c (__ieee754_fmodf): Likewise.
* sysdeps/ieee754/flt-32/e_gammaf_r.c (__ieee754_gammaf_r):
Likewise.
* sysdeps/ieee754/flt-32/e_jnf.c (__ieee754_ynf): Likewise.
* sysdeps/ieee754/flt-32/e_log10f.c (__ieee754_log10f): Likewise.
* sysdeps/ieee754/flt-32/e_powf.c (__ieee754_powf): Likewise.
* sysdeps/ieee754/flt-32/e_rem_pio2f.c (__ieee754_rem_pio2f):
Likewise.
* sysdeps/ieee754/flt-32/e_remainderf.c (__ieee754_remainderf):
Likewise.
* sysdeps/ieee754/flt-32/e_sqrtf.c (__ieee754_sqrtf): Likewise.
* sysdeps/ieee754/flt-32/s_ceilf.c (__ceilf): Likewise.
* sysdeps/ieee754/flt-32/s_copysignf.c (__copysignf): Likewise.
* sysdeps/ieee754/flt-32/s_erff.c (__erff): Likewise.
(__erfcf): Likewise.
* sysdeps/ieee754/flt-32/s_expm1f.c (__expm1f): Likewise.
* sysdeps/ieee754/flt-32/s_finitef.c (FINITEF): Likewise.
* sysdeps/ieee754/flt-32/s_floorf.c (__floorf): Likewise.
* sysdeps/ieee754/flt-32/s_fpclassifyf.c (__fpclassifyf):
Likewise.
* sysdeps/ieee754/flt-32/s_isnanf.c (__isnanf): Likewise.
* sysdeps/ieee754/flt-32/s_issignalingf.c (__issignalingf):
Likewise.
* sysdeps/ieee754/flt-32/s_llrintf.c (__llrintf): Likewise.
* sysdeps/ieee754/flt-32/s_llroundf.c (__llroundf): Likewise.
* sysdeps/ieee754/flt-32/s_lrintf.c (__lrintf): Likewise.
* sysdeps/ieee754/flt-32/s_lroundf.c (__lroundf): Likewise.
* sysdeps/ieee754/flt-32/s_modff.c (__modff): Likewise.
* sysdeps/ieee754/flt-32/s_remquof.c (__remquof): Likewise.
* sysdeps/ieee754/flt-32/s_roundf.c (__roundf): Likewise.
* sysdeps/ieee754/ldbl-128/e_acoshl.c (__ieee754_acoshl):
Likewise.
* sysdeps/ieee754/ldbl-128/e_atan2l.c (__ieee754_atan2l):
Likewise.
* sysdeps/ieee754/ldbl-128/e_atanhl.c (__ieee754_atanhl):
Likewise.
* sysdeps/ieee754/ldbl-128/e_fmodl.c (__ieee754_fmodl): Likewise.
* sysdeps/ieee754/ldbl-128/e_gammal_r.c (__ieee754_gammal_r):
Likewise.
* sysdeps/ieee754/ldbl-128/e_hypotl.c (__ieee754_hypotl):
Likewise.
* sysdeps/ieee754/ldbl-128/e_jnl.c (__ieee754_jnl): Likewise.
(__ieee754_ynl): Likewise.
* sysdeps/ieee754/ldbl-128/e_powl.c (__ieee754_powl): Likewise.
* sysdeps/ieee754/ldbl-128/e_rem_pio2l.c (__ieee754_rem_pio2l):
Likewise.
* sysdeps/ieee754/ldbl-128/e_remainderl.c (__ieee754_remainderl):
Likewise.
* sysdeps/ieee754/ldbl-128/e_sinhl.c (__ieee754_sinhl): Likewise.
* sysdeps/ieee754/ldbl-128/k_cosl.c (__kernel_cosl): Likewise.
* sysdeps/ieee754/ldbl-128/k_sincosl.c (__kernel_sincosl):
Likewise.
* sysdeps/ieee754/ldbl-128/k_sinl.c (__kernel_sinl): Likewise.
* sysdeps/ieee754/ldbl-128/s_ceill.c (__ceill): Likewise.
* sysdeps/ieee754/ldbl-128/s_copysignl.c (__copysignl): Likewise.
* sysdeps/ieee754/ldbl-128/s_erfl.c (__erfcl): Likewise.
* sysdeps/ieee754/ldbl-128/s_fabsl.c (__fabsl): Likewise.
* sysdeps/ieee754/ldbl-128/s_finitel.c (__finitel): Likewise.
* sysdeps/ieee754/ldbl-128/s_floorl.c (__floorl): Likewise.
* sysdeps/ieee754/ldbl-128/s_fpclassifyl.c (__fpclassifyl):
Likewise.
* sysdeps/ieee754/ldbl-128/s_frexpl.c (__frexpl): Likewise.
* sysdeps/ieee754/ldbl-128/s_isnanl.c (__isnanl): Likewise.
* sysdeps/ieee754/ldbl-128/s_issignalingl.c (__issignalingl):
Likewise.
* sysdeps/ieee754/ldbl-128/s_llrintl.c (__llrintl): Likewise.
* sysdeps/ieee754/ldbl-128/s_llroundl.c (__llroundl): Likewise.
* sysdeps/ieee754/ldbl-128/s_lrintl.c (__lrintl): Likewise.
* sysdeps/ieee754/ldbl-128/s_lroundl.c (__lroundl): Likewise.
* sysdeps/ieee754/ldbl-128/s_modfl.c (__modfl): Likewise.
* sysdeps/ieee754/ldbl-128/s_nearbyintl.c (__nearbyintl):
Likewise.
* sysdeps/ieee754/ldbl-128/s_nextafterl.c (__nextafterl):
Likewise.
* sysdeps/ieee754/ldbl-128/s_nexttoward.c (__nexttoward):
Likewise.
* sysdeps/ieee754/ldbl-128/s_nexttowardf.c (__nexttowardf):
Likewise.
* sysdeps/ieee754/ldbl-128/s_nextupl.c (__nextupl): Likewise.
* sysdeps/ieee754/ldbl-128/s_remquol.c (__remquol): Likewise.
* sysdeps/ieee754/ldbl-128/s_rintl.c (__rintl): Likewise.
* sysdeps/ieee754/ldbl-128/s_roundl.c (__roundl): Likewise.
* sysdeps/ieee754/ldbl-128/s_tanhl.c (__tanhl): Likewise.
* sysdeps/ieee754/ldbl-128/s_truncl.c (__truncl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_fmodl.c (__ieee754_fmodl):
Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c (__ieee754_gammal_r):
Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_powl.c (__ieee754_powl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_rem_pio2l.c (__ieee754_rem_pio2l):
Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_remainderl.c
(__ieee754_remainderl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/k_cosl.c (__kernel_cosl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/k_sinl.c (__kernel_sinl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_fabsl.c (__fabsl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_fpclassifyl.c (___fpclassifyl):
Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_modfl.c (__modfl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_nexttowardf.c (__nexttowardf):
Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_remquol.c (__remquol): Likewise.
* sysdeps/ieee754/ldbl-96/e_acoshl.c (__ieee754_acoshl): Likewise.
* sysdeps/ieee754/ldbl-96/e_asinl.c (__ieee754_asinl): Likewise.
* sysdeps/ieee754/ldbl-96/e_atanhl.c (__ieee754_atanhl): Likewise.
* sysdeps/ieee754/ldbl-96/e_coshl.c (__ieee754_coshl): Likewise.
* sysdeps/ieee754/ldbl-96/e_gammal_r.c (__ieee754_gammal_r):
Likewise.
* sysdeps/ieee754/ldbl-96/e_hypotl.c (__ieee754_hypotl): Likewise.
* sysdeps/ieee754/ldbl-96/e_j0l.c (__ieee754_j0l): Likewise.
(__ieee754_y0l): Likewise.
(pzero): Likewise.
(qzero): Likewise.
* sysdeps/ieee754/ldbl-96/e_j1l.c (__ieee754_j1l): Likewise.
(__ieee754_y1l): Likewise.
(pone): Likewise.
(qone): Likewise.
* sysdeps/ieee754/ldbl-96/e_jnl.c (__ieee754_jnl): Likewise.
(__ieee754_ynl): Likewise.
* sysdeps/ieee754/ldbl-96/e_lgammal_r.c (sin_pi): Likewise.
(__ieee754_lgammal_r): Likewise.
* sysdeps/ieee754/ldbl-96/e_rem_pio2l.c (__ieee754_rem_pio2l):
Likewise.
* sysdeps/ieee754/ldbl-96/e_sinhl.c (__ieee754_sinhl): Likewise.
* sysdeps/ieee754/ldbl-96/s_copysignl.c (__copysignl): Likewise.
* sysdeps/ieee754/ldbl-96/s_erfl.c (__erfl): Likewise.
(__erfcl): Likewise.
* sysdeps/ieee754/ldbl-96/s_frexpl.c (__frexpl): Likewise.
* sysdeps/ieee754/ldbl-96/s_issignalingl.c (__issignalingl):
Likewise.
* sysdeps/ieee754/ldbl-96/s_llrintl.c (__llrintl): Likewise.
* sysdeps/ieee754/ldbl-96/s_llroundl.c (__llroundl): Likewise.
* sysdeps/ieee754/ldbl-96/s_lrintl.c (__lrintl): Likewise.
* sysdeps/ieee754/ldbl-96/s_lroundl.c (__lroundl): Likewise.
* sysdeps/ieee754/ldbl-96/s_modfl.c (__modfl): Likewise.
* sysdeps/ieee754/ldbl-96/s_nexttoward.c (__nexttoward): Likewise.
* sysdeps/ieee754/ldbl-96/s_nexttowardf.c (__nexttowardf):
Likewise.
* sysdeps/ieee754/ldbl-96/s_nextupl.c (__nextupl): Likewise.
* sysdeps/ieee754/ldbl-96/s_remquol.c (__remquol): Likewise.
* sysdeps/ieee754/ldbl-96/s_roundl.c (__roundl): Likewise.
* sysdeps/ieee754/ldbl-96/s_tanhl.c (__tanhl): Likewise.
* sysdeps/ieee754/ldbl-opt/s_nexttowardfd.c (__nldbl_nexttowardf):
Likewise.
* sysdeps/m68k/m680x0/fpu/e_pow.c (s(__ieee754_pow)): Likewise.
* sysdeps/m68k/m680x0/fpu/s_fpclassifyl.c (__fpclassifyl):
Likewise.
* sysdeps/m68k/m680x0/fpu/s_llrint.c (__llrint): Likewise.
* sysdeps/m68k/m680x0/fpu/s_llrintf.c (__llrintf): Likewise.
* sysdeps/m68k/m680x0/fpu/s_llrintl.c (__llrintl): Likewise.
* sysdeps/m68k/m680x0/fpu/s_nextafterl.c (__nextafterl): Likewise.
* sysdeps/x86/fpu/powl_helper.c (__powl_helper): Likewise.
| |
The math_private.h macro min_of_type has broken _Float128 handling:
instead of passing its type argument to the key __EXPR_FLT128 macro,
it passes x, which is not a macro argument but whatever variable
called x happens to be visible in the calling function. If that
variable has the wrong type, the wrong one of long double and
_Float128 can get chosen. In particular, this applies to some
_Complex long double functions (where x happens to have type _Complex
long double, resulting in min_of_type returning a _Float128 value when
it should return a long double value). For some reason, this only
caused test failures for me on x86_64 with GCC 6 but not GCC 7 (I
suspect it triggers known bugs in conversions from x86 long double to
_Float128 that are present in GCC 6's soft-fp).
Tested for x86_64 (in conjunction with float128 patches).
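A minimal before/after sketch of the bug (illustrative names and constants, not the actual glibc macros):
  /* Broken: dispatches on whatever variable x is visible at the call
     site, not on the requested type.  */
  #define pick_min_broken(type) \
    (__builtin_types_compatible_p (__typeof (x), _Float128) \
     ? FLT128_MIN : LDBL_MIN)
  /* Fixed: dispatches on the macro's own type argument.  */
  #define pick_min_fixed(type) \
    (__builtin_types_compatible_p (type, _Float128) \
     ? FLT128_MIN : LDBL_MIN)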
* sysdeps/generic/math_private.h (__EXPR_FLT128): Do not apply
typeof to argument passed to __builtin_types_compatible_p.
(min_of_type): Pass type argument, not x, to __EXPR_FLT128.
| |
Add the necessary bits to the private headers to support
building the _Float128 libm functions.
A local override for float.h is provided to include the
missing *FLT128 macros implied by TS 18661-3 for this
type when compiling prior to GCC 7.
* include/complex.h (__kernel_casinhf128): New declaration.
* include/float.h: New file.
* include/math.h (__finitef128): Add a hidden def.
(__isinff128): Likewise.
(__isnanf128): Likewise.
(__fpclassify): Likewise.
(__issignalling): Likewise.
(__expf128): Likewise.
(__expm1f128): Likewise.
* sysdeps/generic/fix-fp-int-convert-overflow.h:
(FIX_FLT128_LONG_CONVERT_OVERFLOW): New macro.
(FIX_FLT128_LLONG_CONVERT_OVERFLOW): Likewise.
* sysdeps/generic/math-type-macros-float128.h: New file.
* sysdeps/generic/math_private.h: Include bits/floatn.h and
math_private_calls.h for _Float128.
(__isinff128): New inline implementation used when GCC < 7.0,
since in this case __builtin_isinf_sign is broken.
(fabsf128): New inline implementation that calls the builtin.
(__EXPR_FLT128): New macro.
(min_of_type): Optionally include _Float128 types too.
* sysdeps/generic/math_private_calls.h (__kernel_sincos):
Declare for _Float128.
(__kernel_rem_pio2): Likewise.
* sysdeps/ieee754/ldbl-opt/s_sin.c:
(__DECL_SIMD_sincos_disablef128): New macro.
| |
This provides an extra macro expansion before invoking
the hidden_def macro. This is necessary to build the
ldbl-128 files as float128 correctly.
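The extra level of indirection is the usual trick for getting the argument macro-expanded before it is pasted on; roughly:
  /* name (e.g. __finitel, possibly redefined to __finitef128 by the
     float128 build) is expanded first and only then passed on.  */
  #define mathx_hidden_def(name) hidden_def (name)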
* sysdeps/generic/math_private.h:
(mathx_hidden_def): New macro.
* sysdeps/ieee754/ldbl-128/s_finitel.c: Replace hidden_def with
the above.
* sysdeps/ieee754/ldbl-128/s_isinfl.c: Likewise.
* sysdeps/ieee754/ldbl-128/s_isnanl.c: Likewise.
| |
This patch moves the declarations of many floating-point functions from
math_private.h to math_private_calls.h and macroizes them so that they
depend on the floating-point type. For each of float, double, and
long double, the new header file is included once. This reduces the
amount of repetitive boilerplate that will be required when adding
float128 versions of these functions.
Tested for powerpc64le and s390x.
* sysdeps/generic/math_private.h: Move the declaration of many
functions to sysdeps/generic/math_private_calls.h.
* sysdeps/generic/math_private_calls.h: New file with the
declarations of the functions removed from math_private.h
macroized by floating-point type.
| |
The declarations of many functions in math_private.h are not required
since __MATHDECL and __MATHDECLX, in math.h, already provide the
declarations for these functions. This patch removes the declarations
from math_private.h. It also adds the inclusion of math.h to the files
which depended on the declaration of functions in math_private.h.
Tested for powerpc64le and s390x.
* sysdeps/generic/math_private.h: Remove declarations of
many functions that are already declared in math.h.
* sysdeps/ieee754/ldbl-128/e_logl.c: Include math.h to get the
declaration for __frexpl.
* sysdeps/ieee754/ldbl-128ibm/e_logl.c: Include math.h to get
the declarations for __scalbnl and fabsl.
| |
The implementations of __ieee754_rem_pio2l in ldbl-128, ldbl-128ibm,
and ldbl-96 return the type int32_t, whereas math_private.h declares
the function as returning int. This patch changes the declaration to
match those directories, and changes the stub implementation in
math/e_rem_pio2l.c in the same way.
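With this change the internal declaration reads along the lines of:
  extern int32_t __ieee754_rem_pio2l (long double, long double *);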
* math/e_rem_pio2l.c (__ieee754_rem_pio2l): Change return type
to int32_t.
* sysdeps/generic/math_private.h: Declare __ieee754_rem_pio2l
as returning int32_t.
| |
A few 'long double'-related tests include math_private.h just for
their variant of math_ldbl.h, which contains macros for assembling and
disassembling the binary representation of 'long double'. math_ldbl.h
insists on being included from math_private.h, but if we relax this
restriction (and fix some portability sloppiness) we can use it
directly and not have to expose all of math_private.h to the testsuite.
* sysdeps/generic/math_private.h: Use __BIG_ENDIAN and
__LITTLE_ENDIAN, not BIG_ENDIAN and LITTLE_ENDIAN.
* sysdeps/generic/math_ldbl.h
* sysdeps/ia64/fpu/math_ldbl.h
* sysdeps/ieee754/ldbl-128/math_ldbl.h
* sysdeps/ieee754/ldbl-128ibm/math_ldbl.h
* sysdeps/ieee754/ldbl-96/math_ldbl.h
* sysdeps/powerpc/fpu/math_ldbl.h
* sysdeps/x86_64/fpu/math_ldbl.h:
Allow direct inclusion. Use uintNN_t instead of u_intNN_t.
Use __BIG_ENDIAN and __LITTLE_ENDIAN, not BIG_ENDIAN and
LITTLE_ENDIAN. Include endian.h and/or stdint.h if necessary.
Add copyright notices.
* sysdeps/ieee754/ldbl-128ibm/math_ldbl.h (ldbl_canonicalize_int):
Don't use EXTRACT_WORDS64.
* sysdeps/ieee754/ldbl-96/test-canonical-ldbl-96.c
* sysdeps/ieee754/ldbl-96/test-totalorderl-ldbl-96.c
* sysdeps/ieee754/ldbl-128ibm/test-canonical-ldbl-128ibm.c
* sysdeps/ieee754/ldbl-128ibm/test-totalorderl-ldbl-128ibm.c:
Include math_ldbl.h, not math_private.h.
| |
This patch refactors some type-generic libm macros, in both math.h and
math_private.h, to be based on a common __MATH_TG macro rather than
all replicating similar logic to choose a function to call based on
the type of the argument.
This should serve to illustrate what I think float128 support for such
macros should look like: common macros such as __MATH_TG may need
different definitions depending on whether float128 is supported in
glibc, so that the individual macros themselves do not need
conditionals on float128 support.
Tested for x86_64, x86, mips64 and powerpc.
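A simplified sketch of the shared dispatch (ignoring the _Float128 and -mlong-double-64 cases the real __MATH_TG has to handle):
  #define __MATH_TG_SKETCH(TG_ARG, FCT, ARGS) \
    (sizeof (TG_ARG) == sizeof (float)        \
     ? FCT ## f ARGS                          \
     : sizeof (TG_ARG) == sizeof (double)     \
     ? FCT ARGS                               \
     : FCT ## l ARGS)
  /* e.g.  #define isinf(x) __MATH_TG_SKETCH ((x), __isinf, (x))  */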
* math/math.h (__MATH_TG): New macro.
[__USE_ISOC99] (fpclassify): Define using __MATH_TG.
[__USE_ISOC99] (signbit): Likewise.
[__USE_ISOC99] (isfinite): Likewise.
[__USE_ISOC99] (isnan): Likewise.
[__USE_ISOC99] (isinf): Likewise.
[__GLIBC_USE (IEC_60559_BFP_EXT)] (issignaling): Likewise.
[__GLIBC_USE (IEC_60559_BFP_EXT)] (__MATH_EVAL_FMT2): New macro.
[__GLIBC_USE (IEC_60559_BFP_EXT)] (iseqsig): Define using
__MATH_TG and __MATH_EVAL_FMT2.
* sysdeps/generic/math_private.h (fabs_tg): Define using
__MATH_TG.
* sysdeps/ieee754/ldbl-128ibm/bits/iscanonical.h
[!__NO_LONG_DOUBLE_MATH] (__iscanonicalf): New macro.
[!__NO_LONG_DOUBLE_MATH] (__iscanonical): Likewise.
[!__NO_LONG_DOUBLE_MATH] (iscanonical): Define using __MATH_TG.
* sysdeps/ieee754/ldbl-96/bits/iscanonical.h (__iscanonicalf): New
macro.
(__iscanonical): Likewise.
(iscanonical): Define using __MATH_TG.
| |
Use the GCC builtin instead. With the exception of the
files built from a template, they are unused. This
is preparation for generating the s_nanF objects.
| |
This is only used for the float and double variants.
Instead, just add it to the type-specific list of files,
remove all stubs, and remove the declaration from
math_private.h.
I verified that x86_64, i486, ia64, m68k, and ppc64 build.
| |
For arguments with X^2 + Y^2 close to 1, clog and clog10 avoid large
errors from log(hypot) by computing X^2 + Y^2 - 1 in a way that avoids
cancellation error and then using log1p.
However, the thresholds for using that approach still result in log
being used on arguments as large as sqrt(13/16) > 0.9, leading to
significant errors, in some cases above the 9ulp maximum allowed in
glibc libm. This patch arranges for the approach using log1p to be
used in any cases where |X|, |Y| < 1 and X^2 + Y^2 >= 0.5 (with the
existing allowance for cases where one of X and Y is very small),
adjusting the __x2y2m1 functions to work with the wider range of
inputs. This way, log only gets used on arguments below sqrt(1/2) (or
substantially above 1), where the error involved is much less.
Tested for x86_64, x86, mips64 and powerpc. For the ulps regeneration
I removed the existing clog and clog10 ulps before regenerating to
allow any reduced ulps to appear. Tests added include those found by
random test generation to produce large ulps either before or after
the patch, and some found by trying inputs close to the (0.75, 0.5)
threshold where the potential errors from using log are largest.
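The underlying idea, as a rough sketch (not the glibc implementation, which uses a more careful multi-term summation): recover the rounding errors of the squares with fma so that X^2 + Y^2 - 1 survives the cancellation, then feed it to log1p:
  #include <math.h>
  static double
  x2y2m1_sketch (double x, double y)
  {
    double x2 = x * x, y2 = y * y;
    double ex = fma (x, x, -x2);  /* rounding error of x*x */
    double ey = fma (y, y, -y2);  /* rounding error of y*y */
    /* For 0.5 <= x2 + y2 < 2 the leading terms nearly cancel against
       -1, so add the error terms back afterwards.  */
    return ((x2 - 1.0) + y2) + (ex + ey);
  }
  /* Re (clog (x + I*y)) is then about 0.5 * log1p (x2y2m1_sketch (x, y)).  */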
[BZ #19016]
* sysdeps/generic/math_private.h (__x2y2m1f): Update comment to
allow more cases with X^2 + Y^2 >= 0.5.
* sysdeps/ieee754/dbl-64/x2y2m1.c (__x2y2m1): Likewise. Add -1 as
normal element in sum instead of special-casing based on values of
arguments.
* sysdeps/ieee754/dbl-64/x2y2m1f.c (__x2y2m1f): Update comment.
* sysdeps/ieee754/ldbl-128/x2y2m1l.c (__x2y2m1l): Likewise. Add
-1 as normal element in sum instead of special-casing based on
values of arguments.
* sysdeps/ieee754/ldbl-128ibm/x2y2m1l.c (__x2y2m1l): Likewise.
* sysdeps/ieee754/ldbl-96/x2y2m1.c [FLT_EVAL_METHOD != 0]
(__x2y2m1): Update comment.
* sysdeps/ieee754/ldbl-96/x2y2m1l.c (__x2y2m1l): Likewise. Add -1
as normal element in sum instead of special-casing based on values
of arguments.
* math/s_clog.c (__clog): Handle more cases using log1p without
hypot.
* math/s_clog10.c (__clog10): Likewise.
* math/s_clog10f.c (__clog10f): Likewise.
* math/s_clog10l.c (__clog10l): Likewise.
* math/s_clogf.c (__clogf): Likewise.
* math/s_clogl.c (__clogl): Likewise.
* math/auto-libm-test-in: Add more tests of clog and clog10.
* math/auto-libm-test-out: Regenerated.
* sysdeps/i386/fpu/libm-test-ulps: Update.
* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
| |
Various floating-point functions have code to force underflow
exceptions if a tiny result was computed in a way that might not have
resulted in such exceptions even though the result is inexact. This
typically uses math_force_eval to ensure that the underflowing
expression is evaluated, but sometimes uses volatile.
This patch refactors such code to use three new macros
math_check_force_underflow, math_check_force_underflow_nonneg and
math_check_force_underflow_complex (which in turn use
math_force_eval). In the limited number of cases not suited to a
simple conversion to these macros, existing uses of volatile are
changed to use math_force_eval instead. The converted code does not
always execute exactly the same sequence of operations as the original
code, but the overall effects should be the same.
Tested for x86_64, x86, mips64 and powerpc.
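A double-only sketch of the new check (the real macro is type-generic via fabs_tg and min_of_type, and relies on math_force_eval from math_private.h):
  #define math_check_force_underflow_sketch(x)               \
    do                                                       \
      {                                                      \
        double force_underflow_tmp = (x);                    \
        if (fabs (force_underflow_tmp) < DBL_MIN)            \
          {                                                  \
            double force_underflow_tmp2                      \
              = force_underflow_tmp * force_underflow_tmp;   \
            math_force_eval (force_underflow_tmp2);          \
          }                                                  \
      }                                                      \
    while (0)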
* sysdeps/generic/math_private.h (fabs_tg): New macro.
(min_of_type): Likewise.
(math_check_force_underflow): Likewise.
(math_check_force_underflow_nonneg): Likewise.
(math_check_force_underflow_complex): Likewise.
* math/e_exp2l.c (__ieee754_exp2l): Use
math_check_force_underflow_nonneg.
* math/k_casinh.c (__kernel_casinh): Likewise.
* math/k_casinhf.c (__kernel_casinhf): Likewise.
* math/k_casinhl.c (__kernel_casinhl): Likewise.
* math/s_catan.c (__catan): Use
math_check_force_underflow_complex.
* math/s_catanf.c (__catanf): Likewise.
* math/s_catanh.c (__catanh): Likewise.
* math/s_catanhf.c (__catanhf): Likewise.
* math/s_catanhl.c (__catanhl): Likewise.
* math/s_catanl.c (__catanl): Likewise.
* math/s_ccosh.c (__ccosh): Likewise.
* math/s_ccoshf.c (__ccoshf): Likewise.
* math/s_ccoshl.c (__ccoshl): Likewise.
* math/s_cexp.c (__cexp): Likewise.
* math/s_cexpf.c (__cexpf): Likewise.
* math/s_cexpl.c (__cexpl): Likewise.
* math/s_clog.c (__clog): Use math_check_force_underflow_nonneg.
* math/s_clog10.c (__clog10): Likewise.
* math/s_clog10f.c (__clog10f): Likewise.
* math/s_clog10l.c (__clog10l): Likewise.
* math/s_clogf.c (__clogf): Likewise.
* math/s_clogl.c (__clogl): Likewise.
* math/s_csin.c (__csin): Use math_check_force_underflow_complex.
* math/s_csinf.c (__csinf): Likewise.
* math/s_csinh.c (__csinh): Likewise.
* math/s_csinhf.c (__csinhf): Likewise.
* math/s_csinhl.c (__csinhl): Likewise.
* math/s_csinl.c (__csinl): Likewise.
* math/s_csqrt.c (__csqrt): Use math_check_force_underflow.
* math/s_csqrtf.c (__csqrtf): Likewise.
* math/s_csqrtl.c (__csqrtl): Likewise.
* math/s_ctan.c (__ctan): Use math_check_force_underflow_complex.
* math/s_ctanf.c (__ctanf): Likewise.
* math/s_ctanh.c (__ctanh): Likewise.
* math/s_ctanhf.c (__ctanhf): Likewise.
* math/s_ctanhl.c (__ctanhl): Likewise.
* math/s_ctanl.c (__ctanl): Likewise.
* stdlib/strtod_l.c (round_and_return): Use math_force_eval
instead of volatile.
* sysdeps/ieee754/dbl-64/e_asin.c (__ieee754_asin): Use
math_check_force_underflow.
* sysdeps/ieee754/dbl-64/e_atanh.c (__ieee754_atanh): Likewise.
* sysdeps/ieee754/dbl-64/e_exp.c (__ieee754_exp): Do not use
volatile when forcing underflow.
* sysdeps/ieee754/dbl-64/e_exp2.c (__ieee754_exp2): Use
math_check_force_underflow_nonneg.
* sysdeps/ieee754/dbl-64/e_gamma_r.c (__ieee754_gamma_r):
Likewise.
* sysdeps/ieee754/dbl-64/e_j1.c (__ieee754_j1): Use
math_check_force_underflow.
* sysdeps/ieee754/dbl-64/e_jn.c (__ieee754_jn): Likewise.
* sysdeps/ieee754/dbl-64/e_sinh.c (__ieee754_sinh): Likewise.
* sysdeps/ieee754/dbl-64/s_asinh.c (__asinh): Likewise.
* sysdeps/ieee754/dbl-64/s_atan.c (atan): Use
math_check_force_underflow_nonneg.
* sysdeps/ieee754/dbl-64/s_erf.c (__erf): Use
math_check_force_underflow.
* sysdeps/ieee754/dbl-64/s_expm1.c (__expm1): Likewise.
* sysdeps/ieee754/dbl-64/s_fma.c (__fma): Use math_force_eval
instead of volatile.
* sysdeps/ieee754/dbl-64/s_log1p.c (__log1p): Use
math_check_force_underflow.
* sysdeps/ieee754/dbl-64/s_sin.c (__sin): Likewise.
* sysdeps/ieee754/dbl-64/s_tan.c (tan): Use
math_check_force_underflow_nonneg.
* sysdeps/ieee754/dbl-64/s_tanh.c (__tanh): Use
math_check_force_underflow.
* sysdeps/ieee754/flt-32/e_asinf.c (__ieee754_asinf): Likewise.
* sysdeps/ieee754/flt-32/e_atanhf.c (__ieee754_atanhf): Likewise.
* sysdeps/ieee754/flt-32/e_exp2f.c (__ieee754_exp2f): Use
math_check_force_underflow_nonneg.
* sysdeps/ieee754/flt-32/e_gammaf_r.c (__ieee754_gammaf_r):
Likewise.
* sysdeps/ieee754/flt-32/e_j1f.c (__ieee754_j1f): Use
math_check_force_underflow.
* sysdeps/ieee754/flt-32/e_jnf.c (__ieee754_jnf): Likewise.
* sysdeps/ieee754/flt-32/e_sinhf.c (__ieee754_sinhf): Likewise.
* sysdeps/ieee754/flt-32/k_sinf.c (__kernel_sinf): Likewise.
* sysdeps/ieee754/flt-32/k_tanf.c (__kernel_tanf): Likewise.
* sysdeps/ieee754/flt-32/s_asinhf.c (__asinhf): Likewise.
* sysdeps/ieee754/flt-32/s_atanf.c (__atanf): Likewise.
* sysdeps/ieee754/flt-32/s_erff.c (__erff): Likewise.
* sysdeps/ieee754/flt-32/s_expm1f.c (__expm1f): Likewise.
* sysdeps/ieee754/flt-32/s_log1pf.c (__log1pf): Likewise.
* sysdeps/ieee754/flt-32/s_tanhf.c (__tanhf): Likewise.
* sysdeps/ieee754/ldbl-128/e_asinl.c (__ieee754_asinl): Likewise.
* sysdeps/ieee754/ldbl-128/e_atanhl.c (__ieee754_atanhl):
Likewise.
* sysdeps/ieee754/ldbl-128/e_expl.c (__ieee754_expl): Use
math_check_force_underflow_nonneg.
* sysdeps/ieee754/ldbl-128/e_gammal_r.c (__ieee754_gammal_r):
Likewise.
* sysdeps/ieee754/ldbl-128/e_j1l.c (__ieee754_j1l): Use
math_check_force_underflow.
* sysdeps/ieee754/ldbl-128/e_jnl.c (__ieee754_jnl): Likewise.
* sysdeps/ieee754/ldbl-128/e_sinhl.c (__ieee754_sinhl): Likewise.
* sysdeps/ieee754/ldbl-128/k_sincosl.c (__kernel_sincosl):
Likewise.
* sysdeps/ieee754/ldbl-128/k_sinl.c (__kernel_sinl): Likewise.
* sysdeps/ieee754/ldbl-128/k_tanl.c (__kernel_tanl): Likewise.
* sysdeps/ieee754/ldbl-128/s_asinhl.c (__asinhl): Likewise.
* sysdeps/ieee754/ldbl-128/s_atanl.c (__atanl): Likewise.
* sysdeps/ieee754/ldbl-128/s_erfl.c (__erfl): Likewise.
* sysdeps/ieee754/ldbl-128/s_expm1l.c (__expm1l): Likewise.
* sysdeps/ieee754/ldbl-128/s_fmal.c (__fmal): Use math_force_eval
instead of volatile.
* sysdeps/ieee754/ldbl-128/s_log1pl.c (__log1pl): Use
math_check_force_underflow.
* sysdeps/ieee754/ldbl-128/s_tanhl.c (__tanhl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_asinl.c (__ieee754_asinl): Use
math_check_force_underflow.
* sysdeps/ieee754/ldbl-128ibm/e_atanhl.c (__ieee754_atanhl):
Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c (__ieee754_gammal_r):
Use math_check_force_underflow_nonneg.
* sysdeps/ieee754/ldbl-128ibm/e_jnl.c (__ieee754_jnl): Use
math_check_force_underflow.
* sysdeps/ieee754/ldbl-128ibm/e_sinhl.c (__ieee754_sinhl):
Likewise.
* sysdeps/ieee754/ldbl-128ibm/k_sincosl.c (__kernel_sincosl):
Likewise.
* sysdeps/ieee754/ldbl-128ibm/k_sinl.c (__kernel_sinl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/k_tanl.c (__kernel_tanl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_asinhl.c (__asinhl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_atanl.c (__atanl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_erfl.c (__erfl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_tanhl.c (__tanhl): Likewise.
* sysdeps/ieee754/ldbl-96/e_asinl.c (__ieee754_asinl): Likewise.
* sysdeps/ieee754/ldbl-96/e_atanhl.c (__ieee754_atanhl): Likewise.
* sysdeps/ieee754/ldbl-96/e_gammal_r.c (__ieee754_gammal_r): Use
math_check_force_underflow_nonneg.
* sysdeps/ieee754/ldbl-96/e_j1l.c (__ieee754_j1l): Use
math_check_force_underflow.
* sysdeps/ieee754/ldbl-96/e_jnl.c (__ieee754_jnl): Likewise.
* sysdeps/ieee754/ldbl-96/e_sinhl.c (__ieee754_sinhl): Likewise.
* sysdeps/ieee754/ldbl-96/k_sinl.c (__kernel_sinl): Likewise.
* sysdeps/ieee754/ldbl-96/k_tanl.c (__kernel_tanl): Use
math_check_force_underflow_nonneg.
* sysdeps/ieee754/ldbl-96/s_asinhl.c (__asinhl): Use
math_check_force_underflow.
* sysdeps/ieee754/ldbl-96/s_erfl.c (__erfl): Likewise.
* sysdeps/ieee754/ldbl-96/s_fmal.c (__fmal): Use math_force_eval
instead of volatile.
* sysdeps/ieee754/ldbl-96/s_tanhl.c (__tanhl): Use
math_check_force_underflow.
| |
Various i386 libm functions return values with excess range and
precision; Wilco Dijkstra's patches to make isfinite etc. expand
inline cause this pre-existing issue to result in test failures (when
e.g. a result that overflows float but not long double gets counted as
overflowing for some purposes but not others).
This patch addresses those cases arising from functions defined in C,
adding a math_narrow_eval macro that forces values to memory to
eliminate excess precision if FLT_EVAL_METHOD indicates this is
needed, and is a no-op otherwise. I'll convert existing uses of
volatile and asm for this purpose to use the new macro later, once
i386 has clean test results again (which requires fixes for .S files
as well).
Tested for x86_64 and x86. Committed.
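A double-only sketch of the idea (the real macro is type-generic and is a no-op when FLT_EVAL_METHOD is 0):
  #include <float.h>
  #if FLT_EVAL_METHOD != 0
  # define math_narrow_eval_sketch(x)      \
    ({ volatile double __narrow_tmp = (x); \
       __narrow_tmp; })
  #else
  # define math_narrow_eval_sketch(x) (x)
  #endif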
[BZ #18980]
* sysdeps/generic/math_private.h: Include <float.h>.
(math_narrow_eval): New macro.
[FLT_EVAL_METHOD != 0] (excess_precision): Likewise.
* sysdeps/ieee754/dbl-64/e_cosh.c (__ieee754_cosh): Use
math_narrow_eval on overflowing return value.
* sysdeps/ieee754/dbl-64/e_lgamma_r.c (__ieee754_lgamma_r):
Likewise.
* sysdeps/ieee754/dbl-64/e_sinh.c (__ieee754_sinh): Likewise.
* sysdeps/ieee754/flt-32/e_coshf.c (__ieee754_coshf): Likewise.
* sysdeps/ieee754/flt-32/e_lgammaf_r.c (__ieee754_lgammaf_r):
Likewise.
* sysdeps/ieee754/flt-32/e_sinhf.c (__ieee754_sinhf): Likewise.
| |
The existing implementations of lgamma functions (except for the ia64
versions) use the reflection formula for negative arguments. This
suffers large inaccuracy from cancellation near zeros of lgamma (near
where the gamma function is +/- 1).
This patch fixes this inaccuracy. For arguments above -2, there are
no zeros and no large cancellation, while for sufficiently large
negative arguments the zeros are so close to integers that even for
integers +/- 1ulp the log(gamma(1-x)) term dominates and cancellation
is not significant. Thus, it is only necessary to take special care
about cancellation for arguments around a limited number of zeros.
Accordingly, this patch uses precomputed tables of relevant zeros,
expressed as the sum of two floating-point values. The log of the
ratio of two sines can be computed accurately using log1p in cases
where log would lose accuracy. The log of the ratio of two gamma(1-x)
values can be computed using Stirling's approximation (the difference
between two values of that approximation to lgamma being computable
without computing the two values and then subtracting), with
appropriate adjustments (which don't reduce accuracy too much) in
cases where 1-x is too small to use Stirling's approximation directly.
In the interval from -3 to -2, using the ratios of sines and of
gamma(1-x) can still produce too much cancellation between those two
parts of the computation (and that interval is also the worst interval
for computing the ratio between gamma(1-x) values, which computation
becomes more accurate, while being less critical for the final result,
for larger 1-x). Because this can result in errors slightly above
those accepted in glibc, this interval is instead dealt with by
polynomial approximations. Separate polynomial approximations to
(|gamma(x)|-1)(x-n)/(x-x0) are used for each interval of length 1/8
from -3 to -2, where n (-3 or -2) is the nearest integer to the
1/8-interval and x0 is the zero of lgamma in the relevant half-integer
interval (-3 to -2.5 or -2.5 to -2).
Together, the two approaches are intended to give sufficient accuracy
for all negative arguments in the problem range. Outside that range,
the previous implementation continues to be used.
Tested for x86_64, x86, mips64 and powerpc. The mips64 and powerpc
testing shows up pre-existing problems for ldbl-128 and ldbl-128ibm
with large negative arguments giving spurious "invalid" exceptions
(exposed by newly added tests for cases this patch doesn't affect the
logic for); I'll address those problems separately.
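For reference, the reflection formula in question is
  lgamma(x) = log(pi / |sin(pi*x)|) - lgamma(1-x)
and near a zero of lgamma the two terms on the right are nearly equal, which is exactly the cancellation the precomputed zeros and the Stirling-based ratio computation are meant to avoid.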
[BZ #2542]
[BZ #2543]
[BZ #2558]
* sysdeps/ieee754/dbl-64/e_lgamma_r.c (__ieee754_lgamma_r): Call
__lgamma_neg for arguments from -28.0 to -2.0.
* sysdeps/ieee754/flt-32/e_lgammaf_r.c (__ieee754_lgammaf_r): Call
__lgamma_negf for arguments from -15.0 to -2.0.
* sysdeps/ieee754/ldbl-128/e_lgammal_r.c (__ieee754_lgammal_r):
Call __lgamma_negl for arguments from -48.0 or -50.0 to -2.0.
* sysdeps/ieee754/ldbl-96/e_lgammal_r.c (__ieee754_lgammal_r):
Call __lgamma_negl for arguments from -33.0 to -2.0.
* sysdeps/ieee754/dbl-64/lgamma_neg.c: New file.
* sysdeps/ieee754/dbl-64/lgamma_product.c: Likewise.
* sysdeps/ieee754/flt-32/lgamma_negf.c: Likewise.
* sysdeps/ieee754/flt-32/lgamma_productf.c: Likewise.
* sysdeps/ieee754/ldbl-128/lgamma_negl.c: Likewise.
* sysdeps/ieee754/ldbl-128/lgamma_productl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/lgamma_negl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/lgamma_productl.c: Likewise.
* sysdeps/ieee754/ldbl-96/lgamma_negl.c: Likewise.
* sysdeps/ieee754/ldbl-96/lgamma_product.c: Likewise.
* sysdeps/ieee754/ldbl-96/lgamma_productl.c: Likewise.
* sysdeps/generic/math_private.h (__lgamma_negf): New prototype.
(__lgamma_neg): Likewise.
(__lgamma_negl): Likewise.
(__lgamma_product): Likewise.
(__lgamma_productl): Likewise.
* math/Makefile (libm-calls): Add lgamma_neg and lgamma_product.
* math/auto-libm-test-in: Add more tests of lgamma.
* math/auto-libm-test-out: Regenerated.
* sysdeps/i386/fpu/libm-test-ulps: Update.
* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
| |
Concluding the fixes for C90 libm functions calling C99 fe* functions,
this patch fixes the case of feupdateenv by making it a weak alias for
__feupdateenv and making the affected code call __feupdateenv.
Tested for x86_64 (testsuite, and that installed stripped shared
libraries are unchanged by the patch). Also tested for ARM
(soft-float) that the math.h linknamespace tests now pass.
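The per-file change follows the same shape as the other fe* conversions; roughly:
  int
  __feupdateenv (const fenv_t *envp)
  {
    /* ...architecture-specific body, unchanged...  */
  }
  weak_alias (__feupdateenv, feupdateenv)
  /* plus libm_hidden_def or libm_hidden_weak as appropriate, so that
     internal callers bind directly to __feupdateenv.  */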
[BZ #17748]
* include/fenv.h (__feupdateenv): Use libm_hidden_proto.
* math/feupdateenv.c (__feupdateenv): Use libm_hidden_def.
* sysdeps/aarch64/fpu/feupdateenv.c (feupdateenv): Rename to
__feupdateenv and define as weak alias of __feupdateenv. Use
libm_hidden_weak.
* sysdeps/alpha/fpu/feupdateenv.c (__feupdateenv): Use
libm_hidden_def.
* sysdeps/arm/feupdateenv.c (feupdateenv): Rename to __feupdateenv
and define as weak alias of __feupdateenv. Use libm_hidden_weak.
* sysdeps/hppa/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/i386/fpu/feupdateenv.c (__feupdateenv): Use
libm_hidden_def.
* sysdeps/ia64/fpu/feupdateenv.c (feupdateenv): Rename to
__feupdateenv and define as weak alias of __feupdateenv. Use
libm_hidden_weak.
* sysdeps/m68k/fpu/feupdateenv.c (__feupdateenv): Use
libm_hidden_def.
* sysdeps/mips/fpu/feupdateenv.c (feupdateenv): Rename to
__feupdateenv and define as weak alias of __feupdateenv. Use
libm_hidden_weak.
* sysdeps/powerpc/fpu/feupdateenv.c (__feupdateenv): Use
libm_hidden_def.
* sysdeps/powerpc/nofpu/feupdateenv.c (__feupdateenv): Likewise.
* sysdeps/powerpc/powerpc32/e500/nofpu/feupdateenv.c
(__feupdateenv): Likewise.
* sysdeps/s390/fpu/feupdateenv.c (feupdateenv): Rename to
__feupdateenv and define as weak alias of __feupdateenv. Use
libm_hidden_weak.
* sysdeps/sh/sh4/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/sparc/fpu/feupdateenv.c (__feupdateenv): Use
libm_hidden_def.
* sysdeps/tile/math_private.h (__feupdateenv): New inline
function.
* sysdeps/x86_64/fpu/feupdateenv.c (__feupdateenv): Use
libm_hidden_def.
* sysdeps/generic/math_private.h (default_libc_feupdateenv): Call
__feupdateenv instead of feupdateenv.
(default_libc_feupdateenv_test): Likewise.
(libc_feresetround_ctx): Likewise.
| |
Continuing the fixes for C90 libm functions calling C99 fe* functions,
this patch fixes the case of fesetround by making it a weak alias of
__fesetround and making the affected code call __fesetround. An
existing __fesetround function in fenv_libc.h for powerpc is renamed
to __fesetround_inline.
Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch). Also tested for ARM
(soft-float) that fesetround failures disappear from the linknamespace
test results (feupdateenv remains to be addressed to complete fixing
bug 17748).
[BZ #17748]
* include/fenv.h (__fesetround): Declare. Use libm_hidden_proto.
* math/fesetround.c (fesetround): Rename to __fesetround and
define as weak alias of __fesetround. Use libm_hidden_weak.
* sysdeps/aarch64/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/alpha/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/arm/fesetround.c (fesetround): Likewise.
* sysdeps/hppa/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/i386/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/ia64/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/m68k/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/mips/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/powerpc/fpu/fenv_libc.h (__fesetround): Rename to
__fesetround_inline.
* sysdeps/powerpc/fpu/fenv_private.h (libc_fesetround_ppc): Call
__fesetround_inline instead of __fesetround.
* sysdeps/powerpc/fpu/fesetround.c (fesetround): Rename to
__fesetround and define as weak alias of __fesetround. Use
libm_hidden_weak. Call __fesetround_inline instead of
__fesetround.
* sysdeps/powerpc/nofpu/fesetround.c (fesetround): Rename to
__fesetround and define as weak alias of __fesetround. Use
libm_hidden_weak.
* sysdeps/powerpc/powerpc32/e500/nofpu/fesetround.c (fesetround):
Likewise.
* sysdeps/s390/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/sh/sh4/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/sparc/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/tile/math_private.h (__fesetround): New inline function.
* sysdeps/x86_64/fpu/fesetround.c (fesetround): Rename to
__fesetround and define as weak alias of __fesetround. Use
libm_hidden_weak.
* sysdeps/generic/math_private.h (default_libc_fesetround): Call
__fesetround instead of fesetround.
(default_libc_feholdexcept_setround): Likewise.
(libc_feholdsetround_ctx): Likewise.
(libc_feholdsetround_noex_ctx): Likewise.
| |
Continuing the fixes for C90 libm functions calling C99 fe* functions,
this patch fixes the case of fesetenv by making it a weak alias of
__fesetenv and making the affected code (including various copies of
feupdateenv which also gets called from C90 functions) call
__fesetenv.
Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch). Also tested for ARM
(soft-float) that fesetenv failures disappear from the linknamespace
test results (fesetround and feupdateenv remain to be addressed to
complete fixing bug 17748).
[BZ #17748]
* include/fenv.h (__fesetenv): Use libm_hidden_proto.
* math/fesetenv.c (__fesetenv): Use libm_hidden_def.
* sysdeps/aarch64/fpu/fesetenv.c (fesetenv): Rename to __fesetenv
and define as weak alias of __fesetenv. Use libm_hidden_weak.
* sysdeps/alpha/fpu/fesetenv.c (__fesetenv): Use libm_hidden_def.
* sysdeps/arm/fesetenv.c (fesetenv): Rename to __fesetenv and
define as weak alias of __fesetenv. Use libm_hidden_weak.
* sysdeps/hppa/fpu/fesetenv.c (fesetenv): Likewise.
* sysdeps/i386/fpu/fesetenv.c (__fesetenv): Use libm_hidden_def.
* sysdeps/ia64/fpu/fesetenv.c (fesetenv): Rename to __fesetenv and
define as weak alias of __fesetenv. Use libm_hidden_weak.
* sysdeps/m68k/fpu/fesetenv.c (__fesetenv): Use libm_hidden_def.
* sysdeps/mips/fpu/fesetenv.c (fesetenv): Rename to __fesetenv and
define as weak alias of __fesetenv. Use libm_hidden_weak.
* sysdeps/powerpc/fpu/fesetenv.c (__fesetenv): Use
libm_hidden_def.
* sysdeps/powerpc/nofpu/fesetenv.c (__fesetenv): Likewise.
* sysdeps/powerpc/powerpc32/e500/nofpu/fesetenv.c (__fesetenv):
Likewise.
* sysdeps/s390/fpu/fesetenv.c (fesetenv): Rename to __fesetenv and
define as weak alias of __fesetenv. Use libm_hidden_weak.
* sysdeps/sh/sh4/fpu/fesetenv.c (fesetenv): Likewise.
* sysdeps/sparc/fpu/fesetenv.c (__fesetenv): Use libm_hidden_def.
* sysdeps/tile/math_private.h (__fesetenv): New inline function.
* sysdeps/x86_64/fpu/fesetenv.c (fesetenv): Rename to __fesetenv
and define as weak alias of __fesetenv. Use libm_hidden_weak.
* sysdeps/generic/math_private.h (default_libc_fesetenv): Use
__fesetenv instead of fesetenv.
(libc_feresetround_noex_ctx): Likewise.
* sysdeps/alpha/fpu/feupdateenv.c (__feupdateenv): Likewise.
* sysdeps/hppa/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/i386/fpu/feupdateenv.c (__feupdateenv): Likewise.
* sysdeps/ia64/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/m68k/fpu/feupdateenv.c (__feupdateenv): Likewise.
* sysdeps/mips/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/powerpc/nofpu/feupdateenv.c (__feupdateenv): Likewise.
* sysdeps/powerpc/powerpc32/e500/nofpu/feupdateenv.c
(__feupdateenv): Likewise.
* sysdeps/s390/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/sh/sh4/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/sparc/fpu/feupdateenv.c (__feupdateenv): Likewise.
* sysdeps/x86_64/fpu/feupdateenv.c (__feupdateenv): Likewise.
| |
Continuing the fixes for C90 libm functions calling C99 fe* functions,
this patch fixes the case of feholdexcept by making it a weak alias of
__feholdexcept and making the affected code call __feholdexcept.
Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch). Also tested for ARM
(soft-float) that feholdexcept failures disappear from the
linknamespace test results (fesetenv, fesetround and feupdateenv
remain to be addressed to complete fixing bug 17748).
[BZ #17748]
* include/fenv.h (__feholdexcept): Declare. Use
libm_hidden_proto.
* math/feholdexcpt.c (feholdexcept): Rename to __feholdexcept and
define as weak alias of __feholdexcept. Use libm_hidden_weak.
* sysdeps/aarch64/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/alpha/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/arm/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/hppa/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/i386/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/ia64/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/m68k/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/mips/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/powerpc/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/powerpc/nofpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/powerpc/powerpc32/e500/nofpu/feholdexcpt.c
(feholdexcept): Likewise.
* sysdeps/s390/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/sh/sh4/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/sparc/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/x86_64/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/generic/math_private.h (default_libc_feholdexcept): Use
__feholdexcept instead of feholdexcept.
(default_libc_feholdexcept_setround): Likewise.
| |
Some C90 libm functions call fegetenv via libc_feholdsetround*
functions in math_private.h. This patch makes them call __fegetenv
instead, making fegetenv into a weak alias for __fegetenv as needed.
Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch). Also tested for ARM
(soft-float) that fegetenv failures disappear from the linknamespace
test results (however, similar fixes will also be needed for
fegetround, feholdexcept, fesetenv, fesetround and feupdateenv before
this set of namespace issues covered by bug 17748 is fully fixed and
those linknamespace tests start passing).
[BZ #17748]
* include/fenv.h (__fegetenv): Use libm_hidden_proto.
* math/fegetenv.c (__fegetenv): Use libm_hidden_def.
* sysdeps/aarch64/fpu/fegetenv.c (fegetenv): Rename to __fegetenv
and define as weak alias of __fegetenv. Use libm_hidden_weak.
* sysdeps/alpha/fpu/fegetenv.c (__fegetenv): Use libm_hidden_def.
* sysdeps/arm/fegetenv.c (fegetenv): Rename to __fegetenv and
define as weak alias of __fegetenv. Use libm_hidden_weak.
* sysdeps/hppa/fpu/fegetenv.c (fegetenv): Likewise.
* sysdeps/i386/fpu/fegetenv.c (__fegetenv): Use libm_hidden_def.
* sysdeps/ia64/fpu/fegetenv.c (fegetenv): Rename to __fegetenv and
define as weak alias of __fegetenv. Use libm_hidden_weak.
* sysdeps/m68k/fpu/fegetenv.c (__fegetenv): Use libm_hidden_def.
* sysdeps/mips/fpu/fegetenv.c (fegetenv): Rename to __fegetenv and
define as weak alias of __fegetenv. Use libm_hidden_weak.
* sysdeps/powerpc/fpu/fegetenv.c (__fegetenv): Use
libm_hidden_def.
* sysdeps/powerpc/nofpu/fegetenv.c (__fegetenv): Likewise.
* sysdeps/powerpc/powerpc32/e500/nofpu/fegetenv.c (__fegetenv):
Likewise.
* sysdeps/s390/fpu/fegetenv.c (fegetenv): Rename to __fegetenv and
define as weak alias of __fegetenv. Use libm_hidden_weak.
* sysdeps/sh/sh4/fpu/fegetenv.c (fegetenv): Likewise.
* sysdeps/sparc/fpu/fegetenv.c (__fegetenv): Use libm_hidden_def.
* sysdeps/tile/math_private.h (__fegetenv): New inline function.
* sysdeps/x86_64/fpu/fegetenv.c (fegetenv): Rename to __fegetenv
and define as weak alias of __fegetenv. Use libm_hidden_weak.
* sysdeps/generic/math_private.h (libc_feholdsetround_ctx): Use
__fegetenv instead of fegetenv.
(libc_feholdsetround_noex_ctx): Likewise.
| |
This patch adds a generic implementation of HAVE_RM_CTX using standard
fenv calls. As a result math functions using SET_RESTORE_ROUND* macros
do not suffer from a large slowdown on targets which do not implement
optimized libc_fe*_ctx inline functions. Most of the libc_fe* inline
functions are now unused and could be removed in the future (there are
a few math functions left which use a mixture of standard fenv calls
and libc_fe* inline functions - they could be updated to use
SET_RESTORE_ROUND or improved to avoid expensive fenv manipulations
across just a few FP instructions).
libc_feholdsetround*_noex_ctx is added to enable better optimization of
SET_RESTORE_ROUND_NOEX* implementations.
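A condensed sketch of the generic context helpers (simplified; the real code also provides per-type variants and the _noex forms, which restore with fesetenv instead of feupdateenv):
  #include <fenv.h>
  #include <stdbool.h>
  struct rm_ctx_sketch
  {
    fenv_t env;
    bool updated_status;
  };
  static inline void
  holdsetround_ctx_sketch (struct rm_ctx_sketch *ctx, int round)
  {
    ctx->updated_status = false;
    if (fegetround () != round)
      {
        feholdexcept (&ctx->env);
        fesetround (round);
        ctx->updated_status = true;
      }
  }
  static inline void
  resetround_ctx_sketch (struct rm_ctx_sketch *ctx)
  {
    if (ctx->updated_status)
      feupdateenv (&ctx->env);
  }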
Performance measurements on ARM and x86 of sin() show significant gains
over the current default, fairly close to a highly optimized fenv_private:
                      ARM    x86
no fenv_private     : 100%   100%
generic HAVE_RM_CTX : 250%   350%
fenv_private (CTX)  : 250%   450%
2014-06-23 Will Newton <will.newton@linaro.org>
Wilco <wdijkstr@arm.com>
* sysdeps/generic/math_private.h: Add generic HAVE_RM_CTX
implementation. Include get-rounding-mode.h.
[!HAVE_RM_CTX]: Define HAVE_RM_CTX to zero.
[!libc_feholdsetround_noex_ctx]: Define
libc_feholdsetround_noex_ctx.
[!libc_feholdsetround_noexf_ctx]: Define
libc_feholdsetround_noexf_ctx.
[!libc_feholdsetround_noexl_ctx]: Define
libc_feholdsetround_noexl_ctx.
(libc_feholdsetround_ctx): New function.
(libc_feresetround_ctx): New function.
(libc_feholdsetround_noex_ctx): New function.
(libc_feresetround_noex_ctx): New function.
| |
This reverts commit 9290130a8de3f30970f741c79dfe8459f798e05f.
| |
ChangeLog:
2014-03-17 Will Newton <will.newton@linaro.org>
* sysdeps/generic/math_private.h: Check whether
HAVE_RM_CTX is defined with #ifdef rather
than #if.
|
| |
| |
The most common use case of math functions is with default rounding
mode, i.e. rounding to nearest. Setting and restoring rounding mode
is an unnecessary overhead for this, so I've added support for a
context, which does the set/restore only if the FP status needs a
change. The code is written such that only x86 uses these. Other
architectures should be unaffected by it, but would definitely benefit
if the set/restore has as much overhead relative to the rest of the
code, as the x86 bits do.
Here's a summary of the performance improvement due to these
improvements; I've only mentioned functions that use the set/restore
and have benchmark inputs for x86_64:
Before:
cos(): ITERS:4.69335e+08: TOTAL:28884.6Mcy, MAX:4080.28cy, MIN:57.562cy, 16248.6 calls/Mcy
exp(): ITERS:4.47604e+08: TOTAL:28796.2Mcy, MAX:207.721cy, MIN:62.385cy, 15543.9 calls/Mcy
pow(): ITERS:1.63485e+08: TOTAL:28879.9Mcy, MAX:362.255cy, MIN:172.469cy, 5660.86 calls/Mcy
sin(): ITERS:3.89578e+08: TOTAL:28900Mcy, MAX:704.859cy, MIN:47.583cy, 13480.2 calls/Mcy
tan(): ITERS:7.0971e+07: TOTAL:28902.2Mcy, MAX:1357.79cy, MIN:388.58cy, 2455.55 calls/Mcy
After:
cos(): ITERS:6.0014e+08: TOTAL:28875.9Mcy, MAX:364.283cy, MIN:45.716cy, 20783.4 calls/Mcy
exp(): ITERS:5.48578e+08: TOTAL:28764.9Mcy, MAX:191.617cy, MIN:51.011cy, 19071.1 calls/Mcy
pow(): ITERS:1.70013e+08: TOTAL:28873.6Mcy, MAX:689.522cy, MIN:163.989cy, 5888.18 calls/Mcy
sin(): ITERS:4.64079e+08: TOTAL:28891.5Mcy, MAX:6959.3cy, MIN:36.189cy, 16062.8 calls/Mcy
tan(): ITERS:7.2354e+07: TOTAL:28898.9Mcy, MAX:1295.57cy, MIN:380.698cy, 2503.7 calls/Mcy
So the improvements are:
cos: 27.9089%
exp: 22.6919%
pow: 4.01564%
sin: 19.1585%
tan: 1.96086%
The downside of the change is that it will have an adverse performance
impact on non-default rounding modes, but I think the tradeoff is
justified.
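In use, a function needing round-to-nearest wraps its computation in the math_private.h macro, e.g. (sketch, with a stand-in computation):
  double
  some_libm_function (double x)
  {
    double result;
    SET_RESTORE_ROUND (FE_TONEAREST);
    result = x * x;  /* stand-in for the actual computation */
    return result;
  }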
| |
We only need to set/restore rounding mode to ensure correct
computation for non-default rounding modes.
|