author     Joseph Myers <joseph@codesourcery.com>	2015-09-23 22:42:30 +0000
committer  Joseph Myers <joseph@codesourcery.com>	2015-09-23 22:42:30 +0000
commit     d96164c33012fccc7ba3ebb4d324c7fd0c6b5836 (patch)
tree       5094cc63822fbfda984f285bbf109f71ed14fa3d /math
parent     54142c44e963f410262fa868c159b6df858f3c53 (diff)
download   glibc-d96164c33012fccc7ba3ebb4d324c7fd0c6b5836.tar.gz
           glibc-d96164c33012fccc7ba3ebb4d324c7fd0c6b5836.tar.xz
           glibc-d96164c33012fccc7ba3ebb4d324c7fd0c6b5836.zip
Refactor code forcing underflow exceptions.
Various floating-point functions have code to force underflow
exceptions if a tiny result was computed in a way that might not have
resulted in such exceptions even though the result is inexact.  This
typically uses math_force_eval to ensure that the underflowing
expression is evaluated, but sometimes uses volatile.
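
For reference, the idiom being refactored looks like this (the snippets
are copied from the removed hunks below); the first form relies on
math_force_eval, the second on a volatile temporary:

  if (fabs (r) < DBL_MIN)
    {
      double force_underflow = r * r;
      math_force_eval (force_underflow);
    }

  if (fabs (__real__ res) < DBL_MIN)
    {
      volatile double force_underflow = __real__ res * __real__ res;
      (void) force_underflow;
    }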

This patch refactors such code to use three new macros
math_check_force_underflow, math_check_force_underflow_nonneg and
math_check_force_underflow_complex (which in turn use
math_force_eval).  In the limited number of cases not suited to a
simple conversion to these macros, existing uses of volatile are
changed to use math_force_eval instead.  The converted code does not
always execute exactly the same sequence of operations as the original
code, but the overall effects should be the same.
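
Roughly, the new macros amount to the following (an illustrative sketch
only, not the exact math_private.h definitions; fabs_tg and min_of_type
are the type-generic helpers added alongside them, here assumed to take
the value and select the matching fabs variant and FLT_MIN/DBL_MIN/
LDBL_MIN constant, and the temporary names are invented for the sketch):

  /* Sketch: if X (assumed not a NaN) is tiny, square it and force the
     otherwise dead product to be evaluated, raising underflow.  */
  #define math_check_force_underflow(x) \
    do \
      { \
        __typeof (x) check_tmp_ = (x); \
        if (fabs_tg (check_tmp_) < min_of_type (check_tmp_)) \
          { \
            __typeof (x) check_tmp2_ = check_tmp_ * check_tmp_; \
            math_force_eval (check_tmp2_); \
          } \
      } \
    while (0)

  /* Same, for arguments known to be nonnegative, so fabs_tg is not
     needed.  */
  #define math_check_force_underflow_nonneg(x) \
    do \
      { \
        __typeof (x) check_tmp_ = (x); \
        if (check_tmp_ < min_of_type (check_tmp_)) \
          { \
            __typeof (x) check_tmp2_ = check_tmp_ * check_tmp_; \
            math_force_eval (check_tmp2_); \
          } \
      } \
    while (0)

  /* Applies the check to both the real and imaginary parts of a
     complex value.  */
  #define math_check_force_underflow_complex(x) \
    do \
      { \
        __typeof (x) check_ctmp_ = (x); \
        math_check_force_underflow (__real__ check_ctmp_); \
        math_check_force_underflow (__imag__ check_ctmp_); \
      } \
    while (0)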

Tested for x86_64, x86, mips64 and powerpc.

	* sysdeps/generic/math_private.h (fabs_tg): New macro.
	(min_of_type): Likewise.
	(math_check_force_underflow): Likewise.
	(math_check_force_underflow_nonneg): Likewise.
	(math_check_force_underflow_complex): Likewise.
	* math/e_exp2l.c (__ieee754_exp2l): Use
	math_check_force_underflow_nonneg.
	* math/k_casinh.c (__kernel_casinh): Likewise.
	* math/k_casinhf.c (__kernel_casinhf): Likewise.
	* math/k_casinhl.c (__kernel_casinhl): Likewise.
	* math/s_catan.c (__catan): Use
	math_check_force_underflow_complex.
	* math/s_catanf.c (__catanf): Likewise.
	* math/s_catanh.c (__catanh): Likewise.
	* math/s_catanhf.c (__catanhf): Likewise.
	* math/s_catanhl.c (__catanhl): Likewise.
	* math/s_catanl.c (__catanl): Likewise.
	* math/s_ccosh.c (__ccosh): Likewise.
	* math/s_ccoshf.c (__ccoshf): Likewise.
	* math/s_ccoshl.c (__ccoshl): Likewise.
	* math/s_cexp.c (__cexp): Likewise.
	* math/s_cexpf.c (__cexpf): Likewise.
	* math/s_cexpl.c (__cexpl): Likewise.
	* math/s_clog.c (__clog): Use math_check_force_underflow_nonneg.
	* math/s_clog10.c (__clog10): Likewise.
	* math/s_clog10f.c (__clog10f): Likewise.
	* math/s_clog10l.c (__clog10l): Likewise.
	* math/s_clogf.c (__clogf): Likewise.
	* math/s_clogl.c (__clogl): Likewise.
	* math/s_csin.c (__csin): Use math_check_force_underflow_complex.
	* math/s_csinf.c (__csinf): Likewise.
	* math/s_csinh.c (__csinh): Likewise.
	* math/s_csinhf.c (__csinhf): Likewise.
	* math/s_csinhl.c (__csinhl): Likewise.
	* math/s_csinl.c (__csinl): Likewise.
	* math/s_csqrt.c (__csqrt): Use math_check_force_underflow.
	* math/s_csqrtf.c (__csqrtf): Likewise.
	* math/s_csqrtl.c (__csqrtl): Likewise.
	* math/s_ctan.c (__ctan): Use math_check_force_underflow_complex.
	* math/s_ctanf.c (__ctanf): Likewise.
	* math/s_ctanh.c (__ctanh): Likewise.
	* math/s_ctanhf.c (__ctanhf): Likewise.
	* math/s_ctanhl.c (__ctanhl): Likewise.
	* math/s_ctanl.c (__ctanl): Likewise.
	* stdlib/strtod_l.c (round_and_return): Use math_force_eval
	instead of volatile.
	* sysdeps/ieee754/dbl-64/e_asin.c (__ieee754_asin): Use
	math_check_force_underflow.
	* sysdeps/ieee754/dbl-64/e_atanh.c (__ieee754_atanh): Likewise.
	* sysdeps/ieee754/dbl-64/e_exp.c (__ieee754_exp): Do not use
	volatile when forcing underflow.
	* sysdeps/ieee754/dbl-64/e_exp2.c (__ieee754_exp2): Use
	math_check_force_underflow_nonneg.
	* sysdeps/ieee754/dbl-64/e_gamma_r.c (__ieee754_gamma_r):
	Likewise.
	* sysdeps/ieee754/dbl-64/e_j1.c (__ieee754_j1): Use
	math_check_force_underflow.
	* sysdeps/ieee754/dbl-64/e_jn.c (__ieee754_jn): Likewise.
	* sysdeps/ieee754/dbl-64/e_sinh.c (__ieee754_sinh): Likewise.
	* sysdeps/ieee754/dbl-64/s_asinh.c (__asinh): Likewise.
	* sysdeps/ieee754/dbl-64/s_atan.c (atan): Use
	math_check_force_underflow_nonneg.
	* sysdeps/ieee754/dbl-64/s_erf.c (__erf): Use
	math_check_force_underflow.
	* sysdeps/ieee754/dbl-64/s_expm1.c (__expm1): Likewise.
	* sysdeps/ieee754/dbl-64/s_fma.c (__fma): Use math_force_eval
	instead of volatile.
	* sysdeps/ieee754/dbl-64/s_log1p.c (__log1p): Use
	math_check_force_underflow.
	* sysdeps/ieee754/dbl-64/s_sin.c (__sin): Likewise.
	* sysdeps/ieee754/dbl-64/s_tan.c (tan): Use
	math_check_force_underflow_nonneg.
	* sysdeps/ieee754/dbl-64/s_tanh.c (__tanh): Use
	math_check_force_underflow.
	* sysdeps/ieee754/flt-32/e_asinf.c (__ieee754_asinf): Likewise.
	* sysdeps/ieee754/flt-32/e_atanhf.c (__ieee754_atanhf): Likewise.
	* sysdeps/ieee754/flt-32/e_exp2f.c (__ieee754_exp2f): Use
	math_check_force_underflow_nonneg.
	* sysdeps/ieee754/flt-32/e_gammaf_r.c (__ieee754_gammaf_r):
	Likewise.
	* sysdeps/ieee754/flt-32/e_j1f.c (__ieee754_j1f): Use
	math_check_force_underflow.
	* sysdeps/ieee754/flt-32/e_jnf.c (__ieee754_jnf): Likewise.
	* sysdeps/ieee754/flt-32/e_sinhf.c (__ieee754_sinhf): Likewise.
	* sysdeps/ieee754/flt-32/k_sinf.c (__kernel_sinf): Likewise.
	* sysdeps/ieee754/flt-32/k_tanf.c (__kernel_tanf): Likewise.
	* sysdeps/ieee754/flt-32/s_asinhf.c (__asinhf): Likewise.
	* sysdeps/ieee754/flt-32/s_atanf.c (__atanf): Likewise.
	* sysdeps/ieee754/flt-32/s_erff.c (__erff): Likewise.
	* sysdeps/ieee754/flt-32/s_expm1f.c (__expm1f): Likewise.
	* sysdeps/ieee754/flt-32/s_log1pf.c (__log1pf): Likewise.
	* sysdeps/ieee754/flt-32/s_tanhf.c (__tanhf): Likewise.
	* sysdeps/ieee754/ldbl-128/e_asinl.c (__ieee754_asinl): Likewise.
	* sysdeps/ieee754/ldbl-128/e_atanhl.c (__ieee754_atanhl):
	Likewise.
	* sysdeps/ieee754/ldbl-128/e_expl.c (__ieee754_expl): Use
	math_check_force_underflow_nonneg.
	* sysdeps/ieee754/ldbl-128/e_gammal_r.c (__ieee754_gammal_r):
	Likewise.
	* sysdeps/ieee754/ldbl-128/e_j1l.c (__ieee754_j1l): Use
	math_check_force_underflow.
	* sysdeps/ieee754/ldbl-128/e_jnl.c (__ieee754_jnl): Likewise.
	* sysdeps/ieee754/ldbl-128/e_sinhl.c (__ieee754_sinhl): Likewise.
	* sysdeps/ieee754/ldbl-128/k_sincosl.c (__kernel_sincosl):
	Likewise.
	* sysdeps/ieee754/ldbl-128/k_sinl.c (__kernel_sinl): Likewise.
	* sysdeps/ieee754/ldbl-128/k_tanl.c (__kernel_tanl): Likewise.
	* sysdeps/ieee754/ldbl-128/s_asinhl.c (__asinhl): Likewise.
	* sysdeps/ieee754/ldbl-128/s_atanl.c (__atanl): Likewise.
	* sysdeps/ieee754/ldbl-128/s_erfl.c (__erfl): Likewise.
	* sysdeps/ieee754/ldbl-128/s_expm1l.c (__expm1l): Likewise.
	* sysdeps/ieee754/ldbl-128/s_fmal.c (__fmal): Use math_force_eval
	instead of volatile.
	* sysdeps/ieee754/ldbl-128/s_log1pl.c (__log1pl): Use
	math_check_force_underflow.
	* sysdeps/ieee754/ldbl-128/s_tanhl.c (__tanhl): Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_asinl.c (__ieee754_asinl): Use
	math_check_force_underflow.
	* sysdeps/ieee754/ldbl-128ibm/e_atanhl.c (__ieee754_atanhl):
	Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c (__ieee754_gammal_r):
	Use math_check_force_underflow_nonneg.
	* sysdeps/ieee754/ldbl-128ibm/e_jnl.c (__ieee754_jnl): Use
	math_check_force_underflow.
	* sysdeps/ieee754/ldbl-128ibm/e_sinhl.c (__ieee754_sinhl):
	Likewise.
	* sysdeps/ieee754/ldbl-128ibm/k_sincosl.c (__kernel_sincosl):
	Likewise.
	* sysdeps/ieee754/ldbl-128ibm/k_sinl.c (__kernel_sinl): Likewise.
	* sysdeps/ieee754/ldbl-128ibm/k_tanl.c (__kernel_tanl): Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_asinhl.c (__asinhl): Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_atanl.c (__atanl): Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_erfl.c (__erfl): Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_tanhl.c (__tanhl): Likewise.
	* sysdeps/ieee754/ldbl-96/e_asinl.c (__ieee754_asinl): Likewise.
	* sysdeps/ieee754/ldbl-96/e_atanhl.c (__ieee754_atanhl): Likewise.
	* sysdeps/ieee754/ldbl-96/e_gammal_r.c (__ieee754_gammal_r): Use
	math_check_force_underflow_nonneg.
	* sysdeps/ieee754/ldbl-96/e_j1l.c (__ieee754_j1l): Use
	math_check_force_underflow.
	* sysdeps/ieee754/ldbl-96/e_jnl.c (__ieee754_jnl): Likewise.
	* sysdeps/ieee754/ldbl-96/e_sinhl.c (__ieee754_sinhl): Likewise.
	* sysdeps/ieee754/ldbl-96/k_sinl.c (__kernel_sinl): Likewise.
	* sysdeps/ieee754/ldbl-96/k_tanl.c (__kernel_tanl): Use
	math_check_force_underflow_nonneg.
	* sysdeps/ieee754/ldbl-96/s_asinhl.c (__asinhl): Use
	math_check_force_underflow.
	* sysdeps/ieee754/ldbl-96/s_erfl.c (__erfl): Likewise.
	* sysdeps/ieee754/ldbl-96/s_fmal.c (__fmal): Use math_force_eval
	instead of volatile.
	* sysdeps/ieee754/ldbl-96/s_tanhl.c (__tanhl): Use
	math_check_force_underflow.
Diffstat (limited to 'math')
-rw-r--r--  math/e_exp2l.c    |  6
-rw-r--r--  math/k_casinh.c   |  6
-rw-r--r--  math/k_casinhf.c  |  6
-rw-r--r--  math/k_casinhl.c  |  6
-rw-r--r--  math/s_catan.c    | 11
-rw-r--r--  math/s_catanf.c   | 11
-rw-r--r--  math/s_catanh.c   | 11
-rw-r--r--  math/s_catanhf.c  | 11
-rw-r--r--  math/s_catanhl.c  | 11
-rw-r--r--  math/s_catanl.c   | 11
-rw-r--r--  math/s_ccosh.c    | 13
-rw-r--r--  math/s_ccoshf.c   | 13
-rw-r--r--  math/s_ccoshl.c   | 13
-rw-r--r--  math/s_cexp.c     | 13
-rw-r--r--  math/s_cexpf.c    | 13
-rw-r--r--  math/s_cexpl.c    | 13
-rw-r--r--  math/s_clog.c     | 11
-rw-r--r--  math/s_clog10.c   | 11
-rw-r--r--  math/s_clog10f.c  | 11
-rw-r--r--  math/s_clog10l.c  | 11
-rw-r--r--  math/s_clogf.c    | 11
-rw-r--r--  math/s_clogl.c    | 11
-rw-r--r--  math/s_csin.c     | 13
-rw-r--r--  math/s_csinf.c    | 13
-rw-r--r--  math/s_csinh.c    | 13
-rw-r--r--  math/s_csinhf.c   | 13
-rw-r--r--  math/s_csinhl.c   | 13
-rw-r--r--  math/s_csinl.c    | 13
-rw-r--r--  math/s_csqrt.c    | 12
-rw-r--r--  math/s_csqrtf.c   | 12
-rw-r--r--  math/s_csqrtl.c   | 12
-rw-r--r--  math/s_ctan.c     | 11
-rw-r--r--  math/s_ctanf.c    | 11
-rw-r--r--  math/s_ctanh.c    | 11
-rw-r--r--  math/s_ctanhf.c   | 11
-rw-r--r--  math/s_ctanhl.c   | 11
-rw-r--r--  math/s_ctanl.c    | 11
37 files changed, 46 insertions, 368 deletions
diff --git a/math/e_exp2l.c b/math/e_exp2l.c
index b8cd158b88..19c927811d 100644
--- a/math/e_exp2l.c
+++ b/math/e_exp2l.c
@@ -43,11 +43,7 @@ __ieee754_exp2l (long double x)
 	    result = __scalbnl (1.0L + fractx, intx);
 	  else
 	    result = __scalbnl (__ieee754_expl (M_LN2l * fractx), intx);
-	  if (result < LDBL_MIN)
-	    {
-	      long double force_underflow = result * result;
-	      math_force_eval (force_underflow);
-	    }
+	  math_check_force_underflow_nonneg (result);
 	  return result;
 	}
       else
diff --git a/math/k_casinh.c b/math/k_casinh.c
index cd089ab8f2..69c03e3ae1 100644
--- a/math/k_casinh.c
+++ b/math/k_casinh.c
@@ -180,11 +180,7 @@ __kernel_casinh (__complex__ double x, int adj)
 	  else
 	    __imag__ res = __ieee754_atan2 (ix, s);
 	}
-      if (__real__ res < DBL_MIN)
-	{
-	  volatile double force_underflow = __real__ res * __real__ res;
-	  (void) force_underflow;
-	}
+      math_check_force_underflow_nonneg (__real__ res);
     }
   else
     {
diff --git a/math/k_casinhf.c b/math/k_casinhf.c
index 04c1a21f25..088b61fe0a 100644
--- a/math/k_casinhf.c
+++ b/math/k_casinhf.c
@@ -182,11 +182,7 @@ __kernel_casinhf (__complex__ float x, int adj)
 	  else
 	    __imag__ res = __ieee754_atan2f (ix, s);
 	}
-      if (__real__ res < FLT_MIN)
-	{
-	  volatile float force_underflow = __real__ res * __real__ res;
-	  (void) force_underflow;
-	}
+      math_check_force_underflow_nonneg (__real__ res);
     }
   else
     {
diff --git a/math/k_casinhl.c b/math/k_casinhl.c
index 496c0641cd..8963716003 100644
--- a/math/k_casinhl.c
+++ b/math/k_casinhl.c
@@ -189,11 +189,7 @@ __kernel_casinhl (__complex__ long double x, int adj)
 	  else
 	    __imag__ res = __ieee754_atan2l (ix, s);
 	}
-      if (__real__ res < LDBL_MIN)
-	{
-	  volatile long double force_underflow = __real__ res * __real__ res;
-	  (void) force_underflow;
-	}
+      math_check_force_underflow_nonneg (__real__ res);
     }
   else
     {
diff --git a/math/s_catan.c b/math/s_catan.c
index 880473ba44..d9dc149088 100644
--- a/math/s_catan.c
+++ b/math/s_catan.c
@@ -131,16 +131,7 @@ __catan (__complex__ double x)
 	    }
 	}
 
-      if (fabs (__real__ res) < DBL_MIN)
-	{
-	  volatile double force_underflow = __real__ res * __real__ res;
-	  (void) force_underflow;
-	}
-      if (fabs (__imag__ res) < DBL_MIN)
-	{
-	  volatile double force_underflow = __imag__ res * __imag__ res;
-	  (void) force_underflow;
-	}
+      math_check_force_underflow_complex (res);
     }
 
   return res;
diff --git a/math/s_catanf.c b/math/s_catanf.c
index 64f11528ce..37ae4bfd15 100644
--- a/math/s_catanf.c
+++ b/math/s_catanf.c
@@ -133,16 +133,7 @@ __catanf (__complex__ float x)
 	    }
 	}
 
-      if (fabsf (__real__ res) < FLT_MIN)
-	{
-	  volatile float force_underflow = __real__ res * __real__ res;
-	  (void) force_underflow;
-	}
-      if (fabsf (__imag__ res) < FLT_MIN)
-	{
-	  volatile float force_underflow = __imag__ res * __imag__ res;
-	  (void) force_underflow;
-	}
+      math_check_force_underflow_complex (res);
     }
 
   return res;
diff --git a/math/s_catanh.c b/math/s_catanh.c
index 281183efdd..a9ad614a63 100644
--- a/math/s_catanh.c
+++ b/math/s_catanh.c
@@ -125,16 +125,7 @@ __catanh (__complex__ double x)
 	  __imag__ res = 0.5 * __ieee754_atan2 (2.0 * __imag__ x, den);
 	}
 
-      if (fabs (__real__ res) < DBL_MIN)
-	{
-	  volatile double force_underflow = __real__ res * __real__ res;
-	  (void) force_underflow;
-	}
-      if (fabs (__imag__ res) < DBL_MIN)
-	{
-	  volatile double force_underflow = __imag__ res * __imag__ res;
-	  (void) force_underflow;
-	}
+      math_check_force_underflow_complex (res);
     }
 
   return res;
diff --git a/math/s_catanhf.c b/math/s_catanhf.c
index ea606c5ceb..68296bb060 100644
--- a/math/s_catanhf.c
+++ b/math/s_catanhf.c
@@ -127,16 +127,7 @@ __catanhf (__complex__ float x)
 	  __imag__ res = 0.5f * __ieee754_atan2f (2.0f * __imag__ x, den);
 	}
 
-      if (fabsf (__real__ res) < FLT_MIN)
-	{
-	  volatile float force_underflow = __real__ res * __real__ res;
-	  (void) force_underflow;
-	}
-      if (fabsf (__imag__ res) < FLT_MIN)
-	{
-	  volatile float force_underflow = __imag__ res * __imag__ res;
-	  (void) force_underflow;
-	}
+      math_check_force_underflow_complex (res);
     }
 
   return res;
diff --git a/math/s_catanhl.c b/math/s_catanhl.c
index a0b760082e..c97a39b280 100644
--- a/math/s_catanhl.c
+++ b/math/s_catanhl.c
@@ -133,16 +133,7 @@ __catanhl (__complex__ long double x)
 	  __imag__ res = 0.5L * __ieee754_atan2l (2.0L * __imag__ x, den);
 	}
 
-      if (fabsl (__real__ res) < LDBL_MIN)
-	{
-	  volatile long double force_underflow = __real__ res * __real__ res;
-	  (void) force_underflow;
-	}
-      if (fabsl (__imag__ res) < LDBL_MIN)
-	{
-	  volatile long double force_underflow = __imag__ res * __imag__ res;
-	  (void) force_underflow;
-	}
+      math_check_force_underflow_complex (res);
     }
 
   return res;
diff --git a/math/s_catanl.c b/math/s_catanl.c
index 1c8d9768ef..ea75642deb 100644
--- a/math/s_catanl.c
+++ b/math/s_catanl.c
@@ -139,16 +139,7 @@ __catanl (__complex__ long double x)
 	    }
 	}
 
-      if (fabsl (__real__ res) < LDBL_MIN)
-	{
-	  volatile long double force_underflow = __real__ res * __real__ res;
-	  (void) force_underflow;
-	}
-      if (fabsl (__imag__ res) < LDBL_MIN)
-	{
-	  volatile long double force_underflow = __imag__ res * __imag__ res;
-	  (void) force_underflow;
-	}
+      math_check_force_underflow_complex (res);
     }
 
   return res;
diff --git a/math/s_ccosh.c b/math/s_ccosh.c
index 664731afeb..6192ecfb95 100644
--- a/math/s_ccosh.c
+++ b/math/s_ccosh.c
@@ -83,18 +83,7 @@ __ccosh (__complex__ double x)
 	      __imag__ retval = __ieee754_sinh (__real__ x) * sinix;
 	    }
 
-	  if (fabs (__real__ retval) < DBL_MIN)
-	    {
-	      volatile double force_underflow
-		= __real__ retval * __real__ retval;
-	      (void) force_underflow;
-	    }
-	  if (fabs (__imag__ retval) < DBL_MIN)
-	    {
-	      volatile double force_underflow
-		= __imag__ retval * __imag__ retval;
-	      (void) force_underflow;
-	    }
+	  math_check_force_underflow_complex (retval);
 	}
       else
 	{
diff --git a/math/s_ccoshf.c b/math/s_ccoshf.c
index 6f61efc8c6..8acbbbbcd0 100644
--- a/math/s_ccoshf.c
+++ b/math/s_ccoshf.c
@@ -83,18 +83,7 @@ __ccoshf (__complex__ float x)
 	      __imag__ retval = __ieee754_sinhf (__real__ x) * sinix;
 	    }
 
-	  if (fabsf (__real__ retval) < FLT_MIN)
-	    {
-	      volatile float force_underflow
-		= __real__ retval * __real__ retval;
-	      (void) force_underflow;
-	    }
-	  if (fabsf (__imag__ retval) < FLT_MIN)
-	    {
-	      volatile float force_underflow
-		= __imag__ retval * __imag__ retval;
-	      (void) force_underflow;
-	    }
+	  math_check_force_underflow_complex (retval);
 	}
       else
 	{
diff --git a/math/s_ccoshl.c b/math/s_ccoshl.c
index 874786d17a..afc304d800 100644
--- a/math/s_ccoshl.c
+++ b/math/s_ccoshl.c
@@ -83,18 +83,7 @@ __ccoshl (__complex__ long double x)
 	      __imag__ retval = __ieee754_sinhl (__real__ x) * sinix;
 	    }
 
-	  if (fabsl (__real__ retval) < LDBL_MIN)
-	    {
-	      volatile long double force_underflow
-		= __real__ retval * __real__ retval;
-	      (void) force_underflow;
-	    }
-	  if (fabsl (__imag__ retval) < LDBL_MIN)
-	    {
-	      volatile long double force_underflow
-		= __imag__ retval * __imag__ retval;
-	      (void) force_underflow;
-	    }
+	  math_check_force_underflow_complex (retval);
 	}
       else
 	{
diff --git a/math/s_cexp.c b/math/s_cexp.c
index 06c5b63c7a..5cb42d1ec0 100644
--- a/math/s_cexp.c
+++ b/math/s_cexp.c
@@ -74,18 +74,7 @@ __cexp (__complex__ double x)
 	      __real__ retval = exp_val * cosix;
 	      __imag__ retval = exp_val * sinix;
 	    }
-	  if (fabs (__real__ retval) < DBL_MIN)
-	    {
-	      volatile double force_underflow
-		= __real__ retval * __real__ retval;
-	      (void) force_underflow;
-	    }
-	  if (fabs (__imag__ retval) < DBL_MIN)
-	    {
-	      volatile double force_underflow
-		= __imag__ retval * __imag__ retval;
-	      (void) force_underflow;
-	    }
+	  math_check_force_underflow_complex (retval);
 	}
       else
 	{
diff --git a/math/s_cexpf.c b/math/s_cexpf.c
index 35ff36ceca..7df203829d 100644
--- a/math/s_cexpf.c
+++ b/math/s_cexpf.c
@@ -74,18 +74,7 @@ __cexpf (__complex__ float x)
 	      __real__ retval = exp_val * cosix;
 	      __imag__ retval = exp_val * sinix;
 	    }
-	  if (fabsf (__real__ retval) < FLT_MIN)
-	    {
-	      volatile float force_underflow
-		= __real__ retval * __real__ retval;
-	      (void) force_underflow;
-	    }
-	  if (fabsf (__imag__ retval) < FLT_MIN)
-	    {
-	      volatile float force_underflow
-		= __imag__ retval * __imag__ retval;
-	      (void) force_underflow;
-	    }
+	  math_check_force_underflow_complex (retval);
 	}
       else
 	{
diff --git a/math/s_cexpl.c b/math/s_cexpl.c
index e2e703f8fd..264a5742bc 100644
--- a/math/s_cexpl.c
+++ b/math/s_cexpl.c
@@ -74,18 +74,7 @@ __cexpl (__complex__ long double x)
 	      __real__ retval = exp_val * cosix;
 	      __imag__ retval = exp_val * sinix;
 	    }
-	  if (fabsl (__real__ retval) < LDBL_MIN)
-	    {
-	      volatile long double force_underflow
-		= __real__ retval * __real__ retval;
-	      (void) force_underflow;
-	    }
-	  if (fabsl (__imag__ retval) < LDBL_MIN)
-	    {
-	      volatile long double force_underflow
-		= __imag__ retval * __imag__ retval;
-	      (void) force_underflow;
-	    }
+	  math_check_force_underflow_complex (retval);
 	}
       else
 	{
diff --git a/math/s_clog.c b/math/s_clog.c
index 15f04594e9..b010e89e67 100644
--- a/math/s_clog.c
+++ b/math/s_clog.c
@@ -65,15 +65,8 @@ __clog (__complex__ double x)
 
       if (absx == 1.0 && scale == 0)
 	{
-	  double absy2 = absy * absy;
-	  if (absy2 <= DBL_MIN * 2.0)
-	    {
-	      double force_underflow = absy2 * absy2;
-	      __real__ result = absy2 / 2.0;
-	      math_force_eval (force_underflow);
-	    }
-	  else
-	    __real__ result = __log1p (absy2) / 2.0;
+	  __real__ result = __log1p (absy * absy) / 2.0;
+	  math_check_force_underflow_nonneg (__real__ result);
 	}
       else if (absx > 1.0 && absx < 2.0 && absy < 1.0 && scale == 0)
 	{
diff --git a/math/s_clog10.c b/math/s_clog10.c
index 79909383a0..b6a434225a 100644
--- a/math/s_clog10.c
+++ b/math/s_clog10.c
@@ -71,15 +71,8 @@ __clog10 (__complex__ double x)
 
       if (absx == 1.0 && scale == 0)
 	{
-	  double absy2 = absy * absy;
-	  if (absy2 <= DBL_MIN * 2.0 * M_LN10)
-	    {
-	      double force_underflow = absy2 * absy2;
-	      __real__ result = absy2 * (M_LOG10E / 2.0);
-	      math_force_eval (force_underflow);
-	    }
-	  else
-	    __real__ result = __log1p (absy2) * (M_LOG10E / 2.0);
+	  __real__ result = __log1p (absy * absy) * (M_LOG10E / 2.0);
+	  math_check_force_underflow_nonneg (__real__ result);
 	}
       else if (absx > 1.0 && absx < 2.0 && absy < 1.0 && scale == 0)
 	{
diff --git a/math/s_clog10f.c b/math/s_clog10f.c
index b30ad3a2e7..b77a849d4e 100644
--- a/math/s_clog10f.c
+++ b/math/s_clog10f.c
@@ -71,15 +71,8 @@ __clog10f (__complex__ float x)
 
       if (absx == 1.0f && scale == 0)
 	{
-	  float absy2 = absy * absy;
-	  if (absy2 <= FLT_MIN * 2.0f * (float) M_LN10)
-	    {
-	      float force_underflow = absy2 * absy2;
-	      __real__ result = absy2 * ((float) M_LOG10E / 2.0f);
-	      math_force_eval (force_underflow);
-	    }
-	  else
-	    __real__ result = __log1pf (absy2) * ((float) M_LOG10E / 2.0f);
+	  __real__ result = __log1pf (absy * absy) * ((float) M_LOG10E / 2.0f);
+	  math_check_force_underflow_nonneg (__real__ result);
 	}
       else if (absx > 1.0f && absx < 2.0f && absy < 1.0f && scale == 0)
 	{
diff --git a/math/s_clog10l.c b/math/s_clog10l.c
index 8481e45d4e..86ec512663 100644
--- a/math/s_clog10l.c
+++ b/math/s_clog10l.c
@@ -78,15 +78,8 @@ __clog10l (__complex__ long double x)
 
       if (absx == 1.0L && scale == 0)
 	{
-	  long double absy2 = absy * absy;
-	  if (absy2 <= LDBL_MIN * 2.0L * M_LN10l)
-	    {
-	      long double force_underflow = absy2 * absy2;
-	      __real__ result = absy2 * (M_LOG10El / 2.0);
-	      math_force_eval (force_underflow);
-	    }
-	  else
-	    __real__ result = __log1pl (absy2) * (M_LOG10El / 2.0L);
+	  __real__ result = __log1pl (absy * absy) * (M_LOG10El / 2.0L);
+	  math_check_force_underflow_nonneg (__real__ result);
 	}
       else if (absx > 1.0L && absx < 2.0L && absy < 1.0L && scale == 0)
 	{
diff --git a/math/s_clogf.c b/math/s_clogf.c
index bae0fe60ac..ffec7ce0a4 100644
--- a/math/s_clogf.c
+++ b/math/s_clogf.c
@@ -65,15 +65,8 @@ __clogf (__complex__ float x)
 
       if (absx == 1.0f && scale == 0)
 	{
-	  float absy2 = absy * absy;
-	  if (absy2 <= FLT_MIN * 2.0f)
-	    {
-	      float force_underflow = absy2 * absy2;
-	      __real__ result = absy2 / 2.0f;
-	      math_force_eval (force_underflow);
-	    }
-	  else
-	    __real__ result = __log1pf (absy2) / 2.0f;
+	  __real__ result = __log1pf (absy * absy) / 2.0f;
+	  math_check_force_underflow_nonneg (__real__ result);
 	}
       else if (absx > 1.0f && absx < 2.0f && absy < 1.0f && scale == 0)
 	{
diff --git a/math/s_clogl.c b/math/s_clogl.c
index aebff2adc2..6325df4662 100644
--- a/math/s_clogl.c
+++ b/math/s_clogl.c
@@ -72,15 +72,8 @@ __clogl (__complex__ long double x)
 
       if (absx == 1.0L && scale == 0)
 	{
-	  long double absy2 = absy * absy;
-	  if (absy2 <= LDBL_MIN * 2.0L)
-	    {
-	      long double force_underflow = absy2 * absy2;
-	      __real__ result = absy2 / 2.0L;
-	      math_force_eval (force_underflow);
-	    }
-	  else
-	    __real__ result = __log1pl (absy2) / 2.0L;
+	  __real__ result = __log1pl (absy * absy) / 2.0L;
+	  math_check_force_underflow_nonneg (__real__ result);
 	}
       else if (absx > 1.0L && absx < 2.0L && absy < 1.0L && scale == 0)
 	{
diff --git a/math/s_csin.c b/math/s_csin.c
index e926d7e185..e6583bc81b 100644
--- a/math/s_csin.c
+++ b/math/s_csin.c
@@ -89,18 +89,7 @@ __csin (__complex__ double x)
 	      __imag__ retval = __ieee754_sinh (__imag__ x) * cosix;
 	    }
 
-	  if (fabs (__real__ retval) < DBL_MIN)
-	    {
-	      volatile double force_underflow
-		= __real__ retval * __real__ retval;
-	      (void) force_underflow;
-	    }
-	  if (fabs (__imag__ retval) < DBL_MIN)
-	    {
-	      volatile double force_underflow
-		= __imag__ retval * __imag__ retval;
-	      (void) force_underflow;
-	    }
+	  math_check_force_underflow_complex (retval);
 	}
       else
 	{
diff --git a/math/s_csinf.c b/math/s_csinf.c
index 52cce4b46b..c20fd0bd5f 100644
--- a/math/s_csinf.c
+++ b/math/s_csinf.c
@@ -89,18 +89,7 @@ __csinf (__complex__ float x)
 	      __imag__ retval = __ieee754_sinhf (__imag__ x) * cosix;
 	    }
 
-	  if (fabsf (__real__ retval) < FLT_MIN)
-	    {
-	      volatile float force_underflow
-		= __real__ retval * __real__ retval;
-	      (void) force_underflow;
-	    }
-	  if (fabsf (__imag__ retval) < FLT_MIN)
-	    {
-	      volatile float force_underflow
-		= __imag__ retval * __imag__ retval;
-	      (void) force_underflow;
-	    }
+	  math_check_force_underflow_complex (retval);
 	}
       else
 	{
diff --git a/math/s_csinh.c b/math/s_csinh.c
index 7aa69e7a61..c944d80b63 100644
--- a/math/s_csinh.c
+++ b/math/s_csinh.c
@@ -89,18 +89,7 @@ __csinh (__complex__ double x)
 	      __imag__ retval = __ieee754_cosh (__real__ x) * sinix;
 	    }
 
-	  if (fabs (__real__ retval) < DBL_MIN)
-	    {
-	      volatile double force_underflow
-		= __real__ retval * __real__ retval;
-	      (void) force_underflow;
-	    }
-	  if (fabs (__imag__ retval) < DBL_MIN)
-	    {
-	      volatile double force_underflow
-		= __imag__ retval * __imag__ retval;
-	      (void) force_underflow;
-	    }
+	  math_check_force_underflow_complex (retval);
 	}
       else
 	{
diff --git a/math/s_csinhf.c b/math/s_csinhf.c
index 72e4800cb7..16c2a0a961 100644
--- a/math/s_csinhf.c
+++ b/math/s_csinhf.c
@@ -89,18 +89,7 @@ __csinhf (__complex__ float x)
 	      __imag__ retval = __ieee754_coshf (__real__ x) * sinix;
 	    }
 
-	  if (fabsf (__real__ retval) < FLT_MIN)
-	    {
-	      volatile float force_underflow
-		= __real__ retval * __real__ retval;
-	      (void) force_underflow;
-	    }
-	  if (fabsf (__imag__ retval) < FLT_MIN)
-	    {
-	      volatile float force_underflow
-		= __imag__ retval * __imag__ retval;
-	      (void) force_underflow;
-	    }
+	  math_check_force_underflow_complex (retval);
 	}
       else
 	{
diff --git a/math/s_csinhl.c b/math/s_csinhl.c
index 62243dab62..e7e3dcd1dd 100644
--- a/math/s_csinhl.c
+++ b/math/s_csinhl.c
@@ -89,18 +89,7 @@ __csinhl (__complex__ long double x)
 	      __imag__ retval = __ieee754_coshl (__real__ x) * sinix;
 	    }
 
-	  if (fabsl (__real__ retval) < LDBL_MIN)
-	    {
-	      volatile long double force_underflow
-		= __real__ retval * __real__ retval;
-	      (void) force_underflow;
-	    }
-	  if (fabsl (__imag__ retval) < LDBL_MIN)
-	    {
-	      volatile long double force_underflow
-		= __imag__ retval * __imag__ retval;
-	      (void) force_underflow;
-	    }
+	  math_check_force_underflow_complex (retval);
 	}
       else
 	{
diff --git a/math/s_csinl.c b/math/s_csinl.c
index 7908aee97c..7391f2c557 100644
--- a/math/s_csinl.c
+++ b/math/s_csinl.c
@@ -89,18 +89,7 @@ __csinl (__complex__ long double x)
 	      __imag__ retval = __ieee754_sinhl (__imag__ x) * cosix;
 	    }
 
-	  if (fabsl (__real__ retval) < LDBL_MIN)
-	    {
-	      volatile long double force_underflow
-		= __real__ retval * __real__ retval;
-	      (void) force_underflow;
-	    }
-	  if (fabsl (__imag__ retval) < LDBL_MIN)
-	    {
-	      volatile long double force_underflow
-		= __imag__ retval * __imag__ retval;
-	      (void) force_underflow;
-	    }
+	  math_check_force_underflow_complex (retval);
 	}
       else
 	{
diff --git a/math/s_csqrt.c b/math/s_csqrt.c
index b86f53322e..9a3d5d6dd2 100644
--- a/math/s_csqrt.c
+++ b/math/s_csqrt.c
@@ -148,16 +148,8 @@ __csqrt (__complex__ double x)
 	      s = __scalbn (s, scale);
 	    }
 
-	  if (fabs (r) < DBL_MIN)
-	    {
-	      double force_underflow = r * r;
-	      math_force_eval (force_underflow);
-	    }
-	  if (fabs (s) < DBL_MIN)
-	    {
-	      double force_underflow = s * s;
-	      math_force_eval (force_underflow);
-	    }
+	  math_check_force_underflow (r);
+	  math_check_force_underflow (s);
 
 	  __real__ res = r;
 	  __imag__ res = __copysign (s, __imag__ x);
diff --git a/math/s_csqrtf.c b/math/s_csqrtf.c
index e433f476c2..597f0a224f 100644
--- a/math/s_csqrtf.c
+++ b/math/s_csqrtf.c
@@ -148,16 +148,8 @@ __csqrtf (__complex__ float x)
 	      s = __scalbnf (s, scale);
 	    }
 
-	  if (fabsf (r) < FLT_MIN)
-	    {
-	      float force_underflow = r * r;
-	      math_force_eval (force_underflow);
-	    }
-	  if (fabsf (s) < FLT_MIN)
-	    {
-	      float force_underflow = s * s;
-	      math_force_eval (force_underflow);
-	    }
+	  math_check_force_underflow (r);
+	  math_check_force_underflow (s);
 
 	  __real__ res = r;
 	  __imag__ res = __copysignf (s, __imag__ x);
diff --git a/math/s_csqrtl.c b/math/s_csqrtl.c
index 003d614f60..f9f31b28fc 100644
--- a/math/s_csqrtl.c
+++ b/math/s_csqrtl.c
@@ -148,16 +148,8 @@ __csqrtl (__complex__ long double x)
 	      s = __scalbnl (s, scale);
 	    }
 
-	  if (fabsl (r) < LDBL_MIN)
-	    {
-	      long double force_underflow = r * r;
-	      math_force_eval (force_underflow);
-	    }
-	  if (fabsl (s) < LDBL_MIN)
-	    {
-	      long double force_underflow = s * s;
-	      math_force_eval (force_underflow);
-	    }
+	  math_check_force_underflow (r);
+	  math_check_force_underflow (s);
 
 	  __real__ res = r;
 	  __imag__ res = __copysignl (s, __imag__ x);
diff --git a/math/s_ctan.c b/math/s_ctan.c
index 674c3b63b4..2ab1630a94 100644
--- a/math/s_ctan.c
+++ b/math/s_ctan.c
@@ -117,16 +117,7 @@ __ctan (__complex__ double x)
 	  __real__ res = sinrx * cosrx / den;
 	  __imag__ res = sinhix * coshix / den;
 	}
-      if (fabs (__real__ res) < DBL_MIN)
-	{
-	  double force_underflow = __real__ res * __real__ res;
-	  math_force_eval (force_underflow);
-	}
-      if (fabs (__imag__ res) < DBL_MIN)
-	{
-	  double force_underflow = __imag__ res * __imag__ res;
-	  math_force_eval (force_underflow);
-	}
+      math_check_force_underflow_complex (res);
     }
 
   return res;
diff --git a/math/s_ctanf.c b/math/s_ctanf.c
index e0ebe43d31..1606b058ea 100644
--- a/math/s_ctanf.c
+++ b/math/s_ctanf.c
@@ -117,16 +117,7 @@ __ctanf (__complex__ float x)
 	  __real__ res = sinrx * cosrx / den;
 	  __imag__ res = sinhix * coshix / den;
 	}
-      if (fabsf (__real__ res) < FLT_MIN)
-	{
-	  float force_underflow = __real__ res * __real__ res;
-	  math_force_eval (force_underflow);
-	}
-      if (fabsf (__imag__ res) < FLT_MIN)
-	{
-	  float force_underflow = __imag__ res * __imag__ res;
-	  math_force_eval (force_underflow);
-	}
+      math_check_force_underflow_complex (res);
     }
 
   return res;
diff --git a/math/s_ctanh.c b/math/s_ctanh.c
index 58607b1367..486545db27 100644
--- a/math/s_ctanh.c
+++ b/math/s_ctanh.c
@@ -117,16 +117,7 @@ __ctanh (__complex__ double x)
 	  __real__ res = sinhrx * coshrx / den;
 	  __imag__ res = sinix * cosix / den;
 	}
-      if (fabs (__real__ res) < DBL_MIN)
-	{
-	  double force_underflow = __real__ res * __real__ res;
-	  math_force_eval (force_underflow);
-	}
-      if (fabs (__imag__ res) < DBL_MIN)
-	{
-	  double force_underflow = __imag__ res * __imag__ res;
-	  math_force_eval (force_underflow);
-	}
+      math_check_force_underflow_complex (res);
     }
 
   return res;
diff --git a/math/s_ctanhf.c b/math/s_ctanhf.c
index a4fd2301cd..000820a6d9 100644
--- a/math/s_ctanhf.c
+++ b/math/s_ctanhf.c
@@ -117,16 +117,7 @@ __ctanhf (__complex__ float x)
 	  __real__ res = sinhrx * coshrx / den;
 	  __imag__ res = sinix * cosix / den;
 	}
-      if (fabsf (__real__ res) < FLT_MIN)
-	{
-	  float force_underflow = __real__ res * __real__ res;
-	  math_force_eval (force_underflow);
-	}
-      if (fabsf (__imag__ res) < FLT_MIN)
-	{
-	  float force_underflow = __imag__ res * __imag__ res;
-	  math_force_eval (force_underflow);
-	}
+      math_check_force_underflow_complex (res);
     }
 
   return res;
diff --git a/math/s_ctanhl.c b/math/s_ctanhl.c
index fb67b2bcc8..cc80cae516 100644
--- a/math/s_ctanhl.c
+++ b/math/s_ctanhl.c
@@ -124,16 +124,7 @@ __ctanhl (__complex__ long double x)
 	  __real__ res = sinhrx * coshrx / den;
 	  __imag__ res = sinix * cosix / den;
 	}
-      if (fabsl (__real__ res) < LDBL_MIN)
-	{
-	  long double force_underflow = __real__ res * __real__ res;
-	  math_force_eval (force_underflow);
-	}
-      if (fabsl (__imag__ res) < LDBL_MIN)
-	{
-	  long double force_underflow = __imag__ res * __imag__ res;
-	  math_force_eval (force_underflow);
-	}
+      math_check_force_underflow_complex (res);
     }
 
   return res;
diff --git a/math/s_ctanl.c b/math/s_ctanl.c
index 4783dcbeb9..8b04910846 100644
--- a/math/s_ctanl.c
+++ b/math/s_ctanl.c
@@ -124,16 +124,7 @@ __ctanl (__complex__ long double x)
 	  __real__ res = sinrx * cosrx / den;
 	  __imag__ res = sinhix * coshix / den;
 	}
-      if (fabsl (__real__ res) < LDBL_MIN)
-	{
-	  long double force_underflow = __real__ res * __real__ res;
-	  math_force_eval (force_underflow);
-	}
-      if (fabsl (__imag__ res) < LDBL_MIN)
-	{
-	  long double force_underflow = __imag__ res * __imag__ res;
-	  math_force_eval (force_underflow);
-	}
+      math_check_force_underflow_complex (res);
     }
 
   return res;