path: root/manual/probes.texi
* ieee754: Remove unused __sin32 and __cos32 (Anssi Hannula, 2020-12-18; 1 file, -14/+0)
  The __sin32 and __cos32 functions were only used in the now-removed slow
  path of asin and acos.
* malloc: tcache double free check (DJ Delorie, 2018-11-20; 1 file, -0/+12)
  * malloc/malloc.c (tcache_entry): Add key field.
  (tcache_put): Set it.
  (tcache_get): Likewise.
  (_int_free): Check for double free in tcache.
  * malloc/tst-tcfree1.c: New.
  * malloc/tst-tcfree2.c: New.
  * malloc/Makefile: Run the new tests.
  * manual/probes.texi: Document memory_tcache_double_free probe.
  * dlfcn/dlerror.c (check_free): Prevent double frees.
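  The ChangeLog above only names the touched functions. As a rough,
  deliberately simplified sketch (not glibc's actual code), the check works
  by stamping each cached chunk with a per-thread key when it enters the
  cache and testing for that stamp on free. The names tcache_entry,
  tcache_put and tcache_get follow the ChangeLog; cache_free, tcache_bin
  and tcache_key are illustrative stand-ins.

      #include <stdlib.h>
      #include <stdio.h>

      /* Simplified sketch of the tcache double-free check described above.
         Field and function names follow the ChangeLog; the logic is an
         illustration, not the real glibc implementation.  */

      struct tcache_entry {
          struct tcache_entry *next;
          void *key;                 /* set while the chunk sits in the cache */
      };

      static struct tcache_entry *tcache_bin;   /* one bin instead of many */
      static void *tcache_key = &tcache_bin;    /* stand-in per-thread marker */

      static void tcache_put(struct tcache_entry *e)
      {
          e->key = tcache_key;       /* mark the chunk as "already cached" */
          e->next = tcache_bin;
          tcache_bin = e;
      }

      static struct tcache_entry *tcache_get(void)
      {
          struct tcache_entry *e = tcache_bin;
          if (e) {
              tcache_bin = e->next;
              e->key = NULL;         /* clear the mark on the way out */
          }
          return e;
      }

      static void cache_free(struct tcache_entry *e)
      {
          if (e->key == tcache_key) {  /* already cached: likely a double free */
              fprintf(stderr, "double free detected\n");
              abort();
          }
          tcache_put(e);
      }

      int main(void)
      {
          struct tcache_entry *e = calloc(1, sizeof *e);
          cache_free(e);             /* first free: goes into the cache */
          cache_free(e);             /* second free of the same chunk: aborts */
          return 0;
      }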
* Add manual documentation for threads.h (Rical Jasan, 2018-07-24; 1 file, -1/+1)
  This patch adds a new chapter to the manual, explaining the types,
  macros, constants, and functions defined by the ISO C11 threads.h
  standard.
  [BZ #14092]
  * manual/debug.texi: Update adjacent chapter name.
  * manual/probes.texi: Likewise.
  * manual/threads.texi (ISO C Threads): New section.
  (POSIX Threads): Convert to a section.
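  For readers unfamiliar with the interfaces that chapter covers, here is a
  minimal ISO C11 <threads.h> example. It uses only standard functions
  (thrd_create, thrd_join) and is not taken from the manual itself.

      #include <threads.h>
      #include <stdio.h>

      /* Minimal ISO C11 threads example, not from the glibc manual; just a
         plain use of the <threads.h> interfaces the new chapter covers.  */

      static int worker(void *arg)
      {
          printf("hello from thread, arg = %d\n", *(int *) arg);
          return 0;
      }

      int main(void)
      {
          thrd_t t;
          int arg = 42;

          if (thrd_create(&t, worker, &arg) != thrd_success)
              return 1;
          thrd_join(t, NULL);
          return 0;
      }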
* Remove slow paths from exp (Szabolcs Nagy, 2018-02-12; 1 file, -14/+0)
  Remove the __slowexp code, so exp is no longer correctly rounded.  The
  result is computed to about 70 bits of precision, so the worst-case ulp
  error is about 0.500007 in nearest rounding mode.

  * manual/probes.texi: Remove slowexp probes.
  * math/Makefile: Remove slowexp.
  * sysdeps/generic/math_private.h (__slowexp): Remove.
  * sysdeps/ieee754/dbl-64/e_exp.c (__ieee754_exp): Remove __slowexp
  and document error bounds.
  * sysdeps/i386/fpu/slowexp.c: Remove.
  * sysdeps/ia64/fpu/slowexp.c: Remove.
  * sysdeps/ieee754/dbl-64/slowexp.c: Remove.
  * sysdeps/ieee754/dbl-64/uexp.h (err_0): Remove.
  * sysdeps/m68k/m680x0/fpu/slowexp.c: Remove.
  * sysdeps/powerpc/power4/fpu/Makefile (CPPFLAGS-slowexp.c): Remove.
  * sysdeps/x86_64/fpu/multiarch/Makefile: Remove slowexp-fma.
  * sysdeps/x86_64/fpu/multiarch/e_exp-avx.c (__slowexp): Remove.
  * sysdeps/x86_64/fpu/multiarch/e_exp-fma.c (__slowexp): Remove.
  * sysdeps/x86_64/fpu/multiarch/e_exp-fma4.c (__slowexp): Remove.
  * sysdeps/x86_64/fpu/multiarch/slowexp-avx.c: Remove.
  * sysdeps/x86_64/fpu/multiarch/slowexp-fma.c: Remove.
  * sysdeps/x86_64/fpu/multiarch/slowexp-fma4.c: Remove.
* Remove slow paths from pow (Wilco Dijkstra, 2018-02-12; 1 file, -16/+0)
  Remove the slow paths from pow.  Like several other double-precision
  math functions, pow is exactly rounded.  This is not required of math
  functions and causes major overheads, as it requires multiple fallbacks
  using higher-precision arithmetic if a result is close to 0.5 ULP.
  Ridiculous slowdowns of up to 100000x have been reported when the
  highest-precision path triggers.

  All GLIBC math tests pass on AArch64 and x64 (with ULP of pow set to 1).
  The worst-case error is ~0.506 ULP.  A simple test over a few hundred
  million values shows pow is 10% faster on average.  This fixes BZ #13932.

  [BZ #13932]
  * sysdeps/ieee754/dbl-64/uexp.h (err_1): Remove.
  * benchtests/pow-inputs: Update comment for slow path cases.
  * manual/probes.texi (slowpow_p10): Delete removed probe.
  (slowpow_p10): Likewise.
  * math/Makefile: Remove halfulp.c and slowpow.c.
  * sysdeps/aarch64/libm-test-ulps: Set ULP of pow to 1.
  * sysdeps/generic/math_private.h (__exp1): Remove error argument.
  (__halfulp): Remove.
  (__slowpow): Remove.
  * sysdeps/i386/fpu/halfulp.c: Delete file.
  * sysdeps/i386/fpu/slowpow.c: Likewise.
  * sysdeps/ia64/fpu/halfulp.c: Likewise.
  * sysdeps/ia64/fpu/slowpow.c: Likewise.
  * sysdeps/ieee754/dbl-64/e_exp.c (__exp1): Remove error argument,
  improve comments and add error analysis.
  * sysdeps/ieee754/dbl-64/e_pow.c (__ieee754_pow): Add error analysis.
  (power1): Remove function.
  (log1): Remove error argument, add error analysis.
  (my_log2): Remove function.
  * sysdeps/ieee754/dbl-64/halfulp.c: Delete file.
  * sysdeps/ieee754/dbl-64/slowpow.c: Likewise.
  * sysdeps/m68k/m680x0/fpu/halfulp.c: Likewise.
  * sysdeps/m68k/m680x0/fpu/slowpow.c: Likewise.
  * sysdeps/powerpc/power4/fpu/Makefile: Remove CPPFLAGS-slowpow.c.
  * sysdeps/x86_64/fpu/libm-test-ulps: Set ULP of pow to 1.
  * sysdeps/x86_64/fpu/multiarch/Makefile: Remove slowpow-fma.c,
  slowpow-fma4.c, halfulp-fma.c, halfulp-fma4.c.
  * sysdeps/x86_64/fpu/multiarch/e_pow-fma.c (__slowpow): Remove define.
  * sysdeps/x86_64/fpu/multiarch/e_pow-fma4.c (__slowpow): Likewise.
  * sysdeps/x86_64/fpu/multiarch/halfulp-fma.c: Delete file.
  * sysdeps/x86_64/fpu/multiarch/halfulp-fma4.c: Likewise.
  * sysdeps/x86_64/fpu/multiarch/slowpow-fma.c: Likewise.
  * sysdeps/x86_64/fpu/multiarch/slowpow-fma4.c: Likewise.
* Remove slow paths from log (Wilco Dijkstra, 2018-02-07; 1 file, -17/+0)
  Remove the slow paths from log.  Like several other double-precision
  math functions, log is exactly rounded.  This is not required of math
  functions and causes major overheads, as it requires multiple fallbacks
  using higher-precision arithmetic if a result is close to 0.5 ULP.
  Ridiculous slowdowns of up to 100000x have been reported when the
  highest-precision path triggers.

  Interestingly, removing the slow paths makes hardly any difference in
  practice: the worst-case error is still ~0.502 ULP, and exp(log(x))
  shows identical results before/after on many millions of random cases.

  All GLIBC math tests pass on AArch64 and x64 with no change in ULP
  error.  A simple test over a few hundred million values shows log is
  now 18% faster on average.

  * manual/probes.texi (slowlog): Delete documentation of removed probe.
  (slowlog_inexact): Likewise.
  * sysdeps/ieee754/dbl-64/e_log.c (__ieee754_log): Remove slow paths.
  * sysdeps/ieee754/dbl-64/ulog.h: Remove unused declarations.
* Revert exp reimplementation (causes test failures). (Joseph Myers, 2017-12-19; 1 file, -0/+14)
  Revert:

  2017-12-19  Joseph Myers  <joseph@codesourcery.com>

  * sysdeps/x86_64/fpu/libm-test-ulps: Update.

  2017-12-19  Patrick McGehearty  <patrick.mcgehearty@oracle.com>

  * sysdeps/ieee754/dbl-64/e_exp.c: Include <math-svid-compat.h> and
  <errno.h>.  Include "eexp.tbl".
  (half): New constant.
  (one): Likewise.
  (__ieee754_exp): Rewrite.
  (__slowexp): Remove prototype.
  * sysdeps/ieee754/dbl-64/eexp.tbl: New file.
  * sysdeps/ieee754/dbl-64/slowexp.c: Remove file.
  * sysdeps/i386/fpu/slowexp.c: Likewise.
  * sysdeps/ia64/fpu/slowexp.c: Likewise.
  * sysdeps/m68k/m680x0/fpu/slowexp.c: Likewise.
  * sysdeps/x86_64/fpu/multiarch/slowexp-avx.c: Likewise.
  * sysdeps/x86_64/fpu/multiarch/slowexp-fma.c: Likewise.
  * sysdeps/x86_64/fpu/multiarch/slowexp-fma4.c: Likewise.
  * sysdeps/generic/math_private.h (__slowexp): Remove prototype.
  * sysdeps/ieee754/dbl-64/e_pow.c: Remove mention of slowexp.c in
  comment.
  * sysdeps/powerpc/power4/fpu/Makefile [$(subdir) = math]
  (CPPFLAGS-slowexp.c): Remove variable.
  * sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines):
  Remove slowexp-fma, slowexp-fma4 and slowexp-avx.
  (CFLAGS-slowexp-fma.c): Remove variable.
  (CFLAGS-slowexp-fma4.c): Likewise.
  (CFLAGS-slowexp-avx.c): Likewise.
  * sysdeps/x86_64/fpu/multiarch/e_exp-avx.c (__slowexp): Do not
  define as macro.
  * sysdeps/x86_64/fpu/multiarch/e_exp-fma.c (__slowexp): Likewise.
  * sysdeps/x86_64/fpu/multiarch/e_exp-fma4.c (__slowexp): Likewise.
  * math/Makefile (type-double-routines): Remove slowexp.
  * manual/probes.texi (slowexp_p6): Remove.
  (slowexp_p32): Likewise.
* Improve __ieee754_exp() performance by greater than 5x on sparc/x86. (Patrick McGehearty, 2017-12-19; 1 file, -14/+0)
  These changes will be active for all platforms that don't provide their
  own exp() routines.  They will also be active for ieee754 versions of
  ccos, ccosh, cosh, csin, csinh, sinh, exp10, gamma, and erf.

  The typical performance gain is around 5x when measured on Sparc s7 for
  common values between exp(1) and exp(40).

  Using the glibc perf tests on sparc:

               sparc (nsec)        x86 (nsec)
               old      new        old     new
      max    17629      395       5173     144
      min      399       54         15      13
      mean    5317      200       1349      23

  The extreme max times for the old (ieee754) exp are due to the
  multiprecision computation in the old algorithm when the true value is
  very near 0.5 ulp away from a value representable in double precision.
  The new algorithm does not take special measures for those cases.  The
  current glibc exp perf tests overrepresent those values.  Informal
  testing suggests approximately one in 200 cases might invoke the
  high-cost computation.  The performance advantage of the new algorithm
  for other values is still large, but not as large as indicated by the
  chart above.

  Glibc correctness tests for exp() and expf() were run.  Within the test
  suite, 3 input values were found to cause 1-bit differences (ulp) when
  "FE_TONEAREST" rounding mode is set.  No differences in exp() were seen
  for the tested values in the other rounding modes.

  Typical example:

      exp(-0x1.760cd2p+0)  (-1.46113312244415283203125)
      new code: 2.31973271630014299393707e-01  0x1.db14cd799387ap-3
      old code: 2.31973271630014271638132e-01  0x1.db14cd7993879p-3
      exp     = 2.31973271630014285508337      (high precision)
      Old delta: off by 0.49 ulp
      New delta: off by 0.51 ulp

  In addition, because ieee754_exp() is used by other routines, cexp()
  showed test results with very small imaginary input values where the
  imaginary portion of the result was off by 3 ulp in upward rounding
  mode, but not in the other rounding modes.  For x86, tgamma showed a
  few values where the ulp increased to 6 (max ulp for tgamma is 5).
  Sparc tgamma did not show these failures.  I presume the tgamma
  differences are due to compiler optimization differences within the
  gamma function.  The gamma function is known to be difficult to compute
  accurately.

  * sysdeps/ieee754/dbl-64/e_exp.c: Include <math-svid-compat.h> and
  <errno.h>.  Include "eexp.tbl".
  (half): New constant.
  (one): Likewise.
  (__ieee754_exp): Rewrite.
  (__slowexp): Remove prototype.
  * sysdeps/ieee754/dbl-64/eexp.tbl: New file.
  * sysdeps/ieee754/dbl-64/slowexp.c: Remove file.
  * sysdeps/i386/fpu/slowexp.c: Likewise.
  * sysdeps/ia64/fpu/slowexp.c: Likewise.
  * sysdeps/m68k/m680x0/fpu/slowexp.c: Likewise.
  * sysdeps/x86_64/fpu/multiarch/slowexp-avx.c: Likewise.
  * sysdeps/x86_64/fpu/multiarch/slowexp-fma.c: Likewise.
  * sysdeps/x86_64/fpu/multiarch/slowexp-fma4.c: Likewise.
  * sysdeps/generic/math_private.h (__slowexp): Remove prototype.
  * sysdeps/ieee754/dbl-64/e_pow.c: Remove mention of slowexp.c in
  comment.
  * sysdeps/powerpc/power4/fpu/Makefile [$(subdir) = math]
  (CPPFLAGS-slowexp.c): Remove variable.
  * sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines):
  Remove slowexp-fma, slowexp-fma4 and slowexp-avx.
  (CFLAGS-slowexp-fma.c): Remove variable.
  (CFLAGS-slowexp-fma4.c): Likewise.
  (CFLAGS-slowexp-avx.c): Likewise.
  * sysdeps/x86_64/fpu/multiarch/e_exp-avx.c (__slowexp): Do not
  define as macro.
  * sysdeps/x86_64/fpu/multiarch/e_exp-fma.c (__slowexp): Likewise.
  * sysdeps/x86_64/fpu/multiarch/e_exp-fma4.c (__slowexp): Likewise.
  * math/Makefile (type-double-routines): Remove slowexp.
  * manual/probes.texi (slowexp_p6): Remove.
  (slowexp_p32): Likewise.
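  The "typical example" quoted above can be reproduced with a few lines of
  C.  The program below simply evaluates the quoted input and prints the
  result in hex-float form so it can be compared against the old and new
  values listed; which one you see depends on the libm it is linked
  against.

      #include <math.h>
      #include <stdio.h>

      /* Evaluate the example input from the commit message and print the
         result as a hex float, for comparison against the old value
         (0x1.db14cd7993879p-3) and the new value (0x1.db14cd799387ap-3).  */

      int main(void)
      {
          double x = -0x1.760cd2p+0;    /* -1.46113312244415283203125 */
          double y = exp(x);

          printf("exp(%a) = %a (%.23e)\n", x, y, y);
          return 0;
      }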
* malloc: Remove check_action variable [BZ #21754] (Florian Weimer, 2017-08-30; 1 file, -7/+0)
  Clean up calls to malloc_printerr and trim its argument list.  This also
  removes a few bits of work done before calling malloc_printerr (such as
  unlocking operations).

  The tunable/environment variable still enables the lightweight
  additional malloc checking, but mallopt (M_CHECK_ACTION) no longer has
  any effect.
* Add per-thread cache to malloc (DJ Delorie, 2017-07-06; 1 file, -0/+19)
  * config.make.in: Enable experimental malloc option.
  * configure.ac: Likewise.
  * configure: Regenerate.
  * manual/install.texi: Document it.
  * INSTALL: Regenerate.
  * malloc/Makefile: Likewise.
  * malloc/malloc.c: Add per-thread cache (tcache).
  (tcache_put): New.
  (tcache_get): New.
  (tcache_thread_freeres): New.
  (tcache_init): New.
  (__libc_malloc): Use cached chunks if available.
  (__libc_free): Initialize tcache if needed.
  (__libc_realloc): Likewise.
  (__libc_calloc): Likewise.
  (_int_malloc): Prefill tcache when appropriate.
  (_int_free): Likewise.
  (do_set_tcache_max): New.
  (do_set_tcache_count): New.
  (do_set_tcache_unsorted_limit): New.
  * manual/probes.texi: Document new probes.
  * malloc/arena.c: Add new tcache tunables.
  * elf/dl-tunables.list: Likewise.
  * manual/tunables.texi: Document them.
  * NEWS: Mention the per-thread cache.
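  The ChangeLog lists the new functions but not the shape of the cache.
  As a rough, self-contained sketch (an assumed structure, not glibc's
  actual code), the per-thread cache is essentially an array of small
  singly linked free lists indexed by size class, each capped at a fixed
  count.  The bin count, size classes, and tc_malloc/tc_free wrappers
  below are illustrative; only the put/get idea follows the ChangeLog.

      #include <stdlib.h>
      #include <string.h>

      /* Rough sketch of a per-thread allocation cache in the spirit of the
         tcache described above.  Sizes, names and layout are illustrative.  */

      #define TCACHE_BINS   64
      #define TCACHE_COUNT   7      /* per-bin cap, like the tcache_count tunable */
      #define CHUNK_ALIGN   16

      struct tc_entry { struct tc_entry *next; };

      static __thread struct tc_entry *bins[TCACHE_BINS];
      static __thread unsigned char counts[TCACHE_BINS];

      static size_t size_to_bin(size_t size)
      {
          return (size + CHUNK_ALIGN - 1) / CHUNK_ALIGN;   /* crude size class */
      }

      /* Return a chunk to the calling thread's cache; else really free it.  */
      void tc_free(void *p, size_t size)
      {
          size_t bin = size_to_bin(size);
          if (bin < TCACHE_BINS && counts[bin] < TCACHE_COUNT) {
              struct tc_entry *e = p;
              e->next = bins[bin];         /* "tcache_put" step */
              bins[bin] = e;
              counts[bin]++;
              return;
          }
          free(p);
      }

      /* Serve a request from the cache when possible; else fall back.  */
      void *tc_malloc(size_t size)
      {
          size_t bin = size_to_bin(size);
          if (bin < TCACHE_BINS && bins[bin] != NULL) {
              struct tc_entry *e = bins[bin];   /* "tcache_get" step */
              bins[bin] = e->next;
              counts[bin]--;
              return e;
          }
          return malloc(size);
      }

      int main(void)
      {
          char *p = tc_malloc(100);
          memset(p, 0, 100);
          tc_free(p, 100);              /* cached for this thread */
          char *q = tc_malloc(100);     /* served from the cache */
          tc_free(q, 100);
          free(bins[size_to_bin(100)]); /* hand the cached chunk back to libc */
          return 0;
      }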
* User manual documentation for tunables (Siddhesh Poyarekar, 2016-12-31; 1 file, -1/+1)
  Create a new node for tunables documentation and add notes for the
  malloc tunables.
  * manual/tunables.texi: New chapter.
  * manual/Makefile (chapters): Add it.
  * manual/probes.texi (@node): Point to the Tunables chapter.
* Manual typos: Internal probes (Rical Jasan, 2016-10-06; 1 file, -2/+2)
  2016-05-06  Rical Jasan  <ricaljasan@pacific.net>

  * manual/probes.texi: Fix typos in the manual.
* Fix two spaces after sentence. (Ondřej Bílka, 2014-02-26; 1 file, -5/+5)
  Minor formatting fix, carried out by running
  sed -e"s/\. \([A-Z]\)/. \1/"
  and then editing the result.
* manual/probes.texi: Use "triggered" instead of "hit" (Will Newton, 2014-02-11; 1 file, -73/+79)
  Use the term "triggered" instead of "hit" when talking about probe
  points.

  ChangeLog:

  2014-02-11  Will Newton  <will.newton@linaro.org>

  * manual/probes.texi (Mathematical Function Probes): Use
  "triggered" instead of "hit".
* manual/probes.texi: Add documentation of setjmp/longjmp probes (Will Newton, 2014-02-11; 1 file, -0/+40)
  Add some documentation of the setjmp, longjmp and longjmp_target
  SystemTap probe points.

  ChangeLog:

  2014-02-11  Will Newton  <will.newton@linaro.org>

  * manual/probes.texi (Internal Probes): Add documentation
  of setjmp, longjmp and longjmp_target probes.
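  As a quick way to see these probes in context, the toy program below
  exercises setjmp and longjmp; running it under a SystemTap script
  attached to libc's setjmp, longjmp and longjmp_target markers (the probe
  names come from the entry above) should show each marker firing.  The
  program itself is ordinary C and is not part of the manual.

      #include <setjmp.h>
      #include <stdio.h>

      /* Toy setjmp/longjmp exercise.  The setjmp and longjmp calls go
         through libc, which is where the markers named above live.  */

      static jmp_buf env;

      static void fail(int code)
      {
          longjmp(env, code);        /* corresponds to the longjmp probe */
      }

      int main(void)
      {
          int rc = setjmp(env);      /* corresponds to the setjmp probe */
          if (rc != 0) {
              /* Arriving here corresponds to the longjmp_target probe.  */
              printf("recovered from error %d\n", rc);
              return 0;
          }
          fail(42);
          return 1;                  /* not reached */
      }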
* Add missing deftp to fix commit 4d84e6addd62bdc256627af. (Ondřej Bílka, 2013-12-18; 1 file, -0/+2)
* Update documentation after dropping PER_THREAD conditional. (Ondřej Bílka, 2013-12-18; 1 file, -34/+6)
  The probes documentation described what happens when PER_THREAD is
  disabled, which is no longer relevant.
* Consolidate valloc/pvalloc code. (Ondřej Bílka, 2013-11-20; 1 file, -3/+2)
  To make the malloc code more maintainable, we make valloc and pvalloc
  share logic with memalign.
* Add systemtap probe markers for sin, cos, asin and acos (Siddhesh Poyarekar, 2013-11-20; 1 file, -0/+42)
* Add systemtap markers to math function slow paths (Siddhesh Poyarekar, 2013-10-11; 1 file, -0/+98)
  Add systemtap probes to various slow paths in libm so that application
  developers may use systemtap to find out if their applications are
  hitting these slow paths.  We have added probes for pow, exp, log, tan,
  atan and atan2.
* Add malloc probes for sbrk and heap resizing. (Alexandre Oliva, 2013-09-20; 1 file, -0/+41)
  for ChangeLog
  * malloc/arena.c (new_heap): New memory_heap_new probe.
  (grow_heap): New memory_heap_more probe.
  (shrink_heap): New memory_heap_less probe.
  (heap_trim): New memory_heap_free probe.
  * malloc/malloc.c (sysmalloc): New memory_sbrk_more probe.
  (systrim): New memory_sbrk_less probe.
  * manual/probes.texi: Document them.
* Add catch-all alloc retry probe. (Alexandre Oliva, 2013-09-20; 1 file, -0/+12)
  for ChangeLog
  * malloc/arena.c (arena_get_retry): Add memory_arena_retry probe.
  * manual/probes.texi: Document it.
* Add probes for malloc retries. (Alexandre Oliva, 2013-09-20; 1 file, -0/+22)
  for ChangeLog
  * malloc/malloc.c (__libc_malloc): Add memory_malloc_retry probe.
  (__libc_realloc): Add memory_realloc_retry probe.
  (__libc_memalign): Add memory_memalign_retry probe.
  (__libc_valloc): Add memory_valloc_retry probe.
  (__libc_pvalloc): Add memory_pvalloc_retry probe.
  (__libc_calloc): Add memory_calloc_retry probe.
  * manual/probes.texi: Document them.
* Add probes for malloc arena changes. (Alexandre Oliva, 2013-09-20; 1 file, -0/+60)
  for ChangeLog
  * malloc/arena.c (get_free_list): Add probe
  memory_arena_reuse_free_list.
  (reused_arena) [PER_THREAD]: Add probes memory_arena_reuse_wait
  and memory_arena_reuse.
  (arena_get2) [!PER_THREAD]: Likewise.
  * malloc/malloc.c (__libc_realloc) [!PER_THREAD]: Add probe
  memory_arena_reuse_realloc.
  * manual/probes.texi: Document them.
* Add probes for all changes to malloc options. (Alexandre Oliva, 2013-09-20; 1 file, -1/+82)
  for ChangeLog
  * malloc/malloc.c (__libc_free): Add
  memory_mallopt_free_dyn_thresholds probe.
  (__libc_mallopt): Add multiple memory_mallopt probes.
  * manual/probes.texi: Document them.
* Add first set of memory probes. (Alexandre Oliva, 2013-09-20; 1 file, -0/+41)
  for ChangeLog
  * malloc/malloc.c: Include stap-probe.h.
  (__libc_mallopt): Add memory_mallopt probe.
  * malloc/arena.c (_int_new_arena): Add memory_arena_new probe.
  * manual/probes.texi: New.
  * manual/Makefile (chapters): Add probes.
  * manual/threads.texi: Set next node.
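  The entry above shows where the markers were added but not what adding
  one looks like.  The sketch below uses the generic <sys/sdt.h>
  STAP_PROBE2 macro (from the SystemTap SDT development headers) to embed
  a static probe in ordinary user code; glibc itself wraps this mechanism
  in its internal stap-probe.h machinery, so treat the provider name,
  probe name, and arguments here as illustrative assumptions rather than
  glibc's definitions.

      /* Sketch of embedding a static SystemTap/SDT probe in user code, in
         the spirit of the memory_mallopt marker named above.  Requires
         <sys/sdt.h> from the SystemTap SDT headers; the provider ("demo")
         and probe name ("buffer_resize") are made up for illustration.  */
      #include <sys/sdt.h>
      #include <stdlib.h>

      static size_t buffer_size;

      void resize_buffer(size_t new_size)
      {
          /* Fires an SDT marker with the old and new sizes as arguments,
             the same general idea as glibc's memory_* probes reporting
             allocator state.  */
          STAP_PROBE2(demo, buffer_resize, buffer_size, new_size);
          buffer_size = new_size;
      }

      int main(void)
      {
          resize_buffer(4096);
          resize_buffer(8192);
          return 0;
      }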