author | Adhemerval Zanella Netto <adhemerval.zanella@linaro.org> | 2023-03-20 13:01:17 -0300 |
---|---|---|
committer | Adhemerval Zanella <adhemerval.zanella@linaro.org> | 2023-04-03 16:45:18 -0300 |
commit | cf9cf33199fdd6550920ad43f19ad8b2435fc0c6 (patch) | |
tree | e03b3fb7d42424e4b7095a01b334e8db8296c1e8 /sysdeps/ieee754/flt-32/math_config.h | |
parent | 34b9f8bc170810c44184ad57ecf1800587e752a6 (diff) | |
math: Improve fmodf
This uses a new algorithm similar to the one proposed earlier [1]. With x = mx * 2^ex and y = my * 2^ey (mx, my, ex, ey being integers), the simplest implementation is:

```c
/* mx * 2^ex == 2 * mx * 2^(ex - 1) */
while (ex > ey)
  {
    mx *= 2;
    --ex;
    mx %= my;
  }
```

With mx/my being the mantissas of single-precision floats, the argument reduction can instead advance 8 bits per step (the width of uint32_t minus MANTISSA_WIDTH minus the sign bit):

```c
while (ex > ey)
  {
    mx <<= 8;
    ex -= 8;
    mx %= my;
  }
```

The implementation uses the clz and ctz builtins, along with shifts, to convert hx/hy back to floats. Unlike the original patch, this patch assumes the modulo/divide operation is slow, so it uses multiplication by the inverted value instead.

I see the following performance improvements with the fmod benchtests (only the 'mean' result is shown):

Architecture | Input | master | patch
---|---|---|---
x86_64 (Ryzen 9) | subnormals | 17.2549 | 12.0318
x86_64 (Ryzen 9) | normal | 85.4096 | 49.9641
x86_64 (Ryzen 9) | close-exponents | 19.1072 | 15.8224
aarch64 (N1) | subnormal | 10.2182 | 6.81778
aarch64 (N1) | normal | 60.0616 | 20.3667
aarch64 (N1) | close-exponents | 11.5256 | 8.39685

I also see similar improvements on arm-linux-gnueabihf when running on the same N1 aarch64 chip, where much of the implementation falls back to soft-fp (for the modulo and multiplication):

Architecture | Input | master | patch
---|---|---|---
armhf (N1) | subnormal | 11.6662 | 10.8955
armhf (N1) | normal | 69.2759 | 34.1524
armhf (N1) | close-exponents | 13.6472 | 18.2131

Instead of the math_private.h definitions, this uses math_config.h, which the newer math implementations use.

Co-authored-by: kirill <kirill.okhotnikov@gmail.com>

[1] https://sourceware.org/pipermail/libc-alpha/2020-November/119794.html

Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Diffstat (limited to 'sysdeps/ieee754/flt-32/math_config.h')
-rw-r--r-- | sysdeps/ieee754/flt-32/math_config.h | 41 |
1 file changed, 41 insertions(+), 0 deletions(-)
```diff
diff --git a/sysdeps/ieee754/flt-32/math_config.h b/sysdeps/ieee754/flt-32/math_config.h
index 23045f59d6..829430ea28 100644
--- a/sysdeps/ieee754/flt-32/math_config.h
+++ b/sysdeps/ieee754/flt-32/math_config.h
@@ -110,6 +110,47 @@ issignalingf_inline (float x)
   return 2 * (ix ^ 0x00400000) > 2 * 0x7fc00000UL;
 }
 
+#define BIT_WIDTH 32
+#define MANTISSA_WIDTH 23
+#define EXPONENT_WIDTH 8
+#define MANTISSA_MASK 0x007fffff
+#define EXPONENT_MASK 0x7f800000
+#define EXP_MANT_MASK 0x7fffffff
+#define QUIET_NAN_MASK 0x00400000
+#define SIGN_MASK 0x80000000
+
+static inline bool
+is_nan (uint32_t x)
+{
+  return (x & EXP_MANT_MASK) > EXPONENT_MASK;
+}
+
+static inline uint32_t
+get_mantissa (uint32_t x)
+{
+  return x & MANTISSA_MASK;
+}
+
+/* Convert integer number X, unbiased exponent EP, and sign S to double:
+
+   result = X * 2^(EP+1 - exponent_bias)
+
+   NB: zero is not supported.  */
+static inline double
+make_float (uint32_t x, int ep, uint32_t s)
+{
+  int lz = __builtin_clz (x) - EXPONENT_WIDTH;
+  x <<= lz;
+  ep -= lz;
+
+  if (__glibc_unlikely (ep < 0 || x == 0))
+    {
+      x >>= -ep;
+      ep = 0;
+    }
+  return asfloat (s + x + (ep << MANTISSA_WIDTH));
+}
+
 #define NOINLINE __attribute__ ((noinline))
 
 attribute_hidden float __math_oflowf (uint32_t);
```