path: root/benchtests/scripts
Commit message  (Author, Date, Files changed, Lines changed)
* Update copyright dates with scripts/update-copyrights  (Paul Eggert, 2024-01-01, 6 files, -6/+6)

* Fix all the remaining misspellings -- BZ 25337  (Paul Pluzhnikov, 2023-06-02, 1 file, -1/+1)

* compare_strings.py: Add --gmean flag  (Nisha Menon, 2023-04-04, 1 file, -2/+18)
    To calculate geometric mean for string benchmark results.

    Signed-off-by: Nisha Poyarekar <nisha.s.menon@gmail.com>

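The flag's arithmetic is the standard geometric mean; a minimal sketch of how such a mean could be computed over a list of timings follows (the flat-list input is an assumption, not the script's actual data layout):

    import math

    def geometric_mean(timings):
        """Geometric mean of a list of positive timing values."""
        # Summing logs avoids the overflow a plain product could hit.
        return math.exp(sum(math.log(t) for t in timings) / len(timings))

    # geometric_mean([10.0, 20.0, 40.0]) == 20.0
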
* Update copyright dates with scripts/update-copyrights  (Joseph Myers, 2023-01-06, 6 files, -6/+6)

* benchtests: make compare_strings.py accept string as attribute value  (Su Lifan, 2022-03-08, 1 file, -1/+6)
    Commit ac759b1fbf28a82d99afde9046f8b72c7cba5dae added the attribute
    "overlap" to bench-memmove-walk, whose value is a string.  This change
    made compare_strings.py fail, since benchout_strings.schema.json
    required attribute values to be numbers.  This patch relaxes that
    constraint.

    Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>

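The relaxation can be pictured as a schema fragment that accepts either a number or a string for an attribute value; this is only a sketch, and the real property names and nesting in benchout_strings.schema.json differ:

    import jsonschema

    # Hypothetical fragment: attribute values may be numbers or strings.
    attribute_value = {"oneOf": [{"type": "number"}, {"type": "string"}]}

    jsonschema.validate(1.5, attribute_value)        # accepted
    jsonschema.validate("overlap", attribute_value)  # accepted after the relaxation
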
* Update copyright dates with scripts/update-copyrights  (Paul Eggert, 2022-01-01, 6 files, -6/+6)
    I used these shell commands:

        ../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
        (cd ../glibc && git commit -am"[this commit message]")

    and then ignored the output, which consisted of lines saying "FOO:
    warning: copyright statement not found" for each of 7061 files FOO.

    I then removed trailing white space from math/tgmath.h,
    support/tst-support-open-dev-null-range.c, and
    sysdeps/x86_64/multiarch/strlen-vec.S, to work around the following
    obscure pre-commit check failure diagnostics from Savannah.  I don't
    know why I run into these diagnostics whereas others evidently do not.

        remote: *** 912-#endif
        remote: *** 913:
        remote: *** 914-
        remote: *** error: lines with trailing whitespace found
        ...
        remote: *** error: sysdeps/unix/sysv/linux/statx_cp.c: trailing lines

* benchtests: Fix validate_benchout.py exceptions  (Naohiro Tamura, 2021-09-16, 3 files, -3/+9)
    This patch fixes two exceptions in validate_benchout.py: 1) an
    AttributeError if benchout_strings.schema.json is specified, and 2) a
    json.decoder.JSONDecodeError if the benchout file is not JSON.

        $ ~/glibc/benchtests/scripts/validate_benchout.py bench-memset.out \
            ~/glibc/benchtests/scripts/benchout_strings.schema.json
        Traceback (most recent call last):
          File "/home/naohirot/glibc/benchtests/scripts/validate_benchout.py", line 86, in <module>
            sys.exit(main(sys.argv[1:]))
          File "/home/naohirot/glibc/benchtests/scripts/validate_benchout.py", line 69, in main
            bench.parse_bench(args[0], args[1])
          File "/home/naohirot/glibc/benchtests/scripts/import_bench.py", line 139, in parse_bench
            do_for_all_timings(bench, lambda b, f, v:
          File "/home/naohirot/glibc/benchtests/scripts/import_bench.py", line 107, in do_for_all_timings
            if 'timings' not in bench['functions'][func][k].keys():
        AttributeError: 'str' object has no attribute 'keys'

        $ ~/glibc/benchtests/scripts/validate_benchout.py bench-math-inlines.out \
            ~/glibc/benchtests/scripts/benchout_strings.schema.json
        Traceback (most recent call last):
          File "/home/naohirot/glibc/benchtests/scripts/validate_benchout.py", line 86, in <module>
            sys.exit(main(sys.argv[1:]))
          File "/home/naohirot/glibc/benchtests/scripts/validate_benchout.py", line 69, in main
            bench.parse_bench(args[0], args[1])
          File "/home/naohirot/glibc/benchtests/scripts/import_bench.py", line 137, in parse_bench
            bench = json.load(benchfile)
          File "/usr/lib/python3.6/json/__init__.py", line 299, in load
            parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
          File "/usr/lib/python3.6/json/__init__.py", line 354, in loads
            return _default_decoder.decode(s)
          File "/usr/lib/python3.6/json/decoder.py", line 342, in decode
            raise JSONDecodeError("Extra data", s, end)
        json.decoder.JSONDecodeError: Extra data: line 1 column 17 (char 16)

    Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>

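One way to turn tracebacks like these into readable errors is to wrap the parse step and catch the two failure modes; the function below is only a sketch, not the actual change to validate_benchout.py:

    import json
    import sys

    def parse_or_exit(parse_bench, benchout, schema):
        """Run the parser, converting both failure modes into short messages."""
        try:
            parse_bench(benchout, schema)
        except json.decoder.JSONDecodeError as e:
            sys.exit('%s is not valid JSON: %s' % (benchout, e))
        except AttributeError:
            sys.exit('%s does not match the schema %s' % (benchout, schema))
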
* benchtests: Enable scripts/plot_strings.py to read stdin  (Naohiro Tamura, 2021-09-13, 1 file, -3/+8)
    This patch enables scripts/plot_strings.py to read a benchmark result
    file from stdin.  To keep backward compatibility, that is, to keep
    accepting multiple benchmark result files as arguments, an empty
    argument list does not mean stdin, but '-' does.  Therefore the nargs
    parameter of ArgumentParser.add_argument() is kept as '+' rather than
    changed to '?'.

    Examples:

        $ jq '.' bench-memset.out | plot_strings.py -
        $ jq '.' bench-memset.out | plot_strings.py - bench-memset-large.out
        $ plot_strings.py bench-memset.out bench-memset-large.out

    Error example:

        $ jq '.' bench-memset.out | plot_strings.py

    Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>

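The '-' convention can be implemented roughly as follows (a sketch that keeps nargs='+' as described; names are illustrative):

    import argparse
    import json
    import sys

    parser = argparse.ArgumentParser()
    parser.add_argument('bench', nargs='+',
                        help="benchmark result files, or '-' for stdin")
    args = parser.parse_args()

    results = []
    for name in args.bench:
        # '-' means standard input, anything else is a file on disk.
        if name == '-':
            results.append(json.load(sys.stdin))
        else:
            with open(name) as f:
                results.append(json.load(f))
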
* benchtests: Fix pthread-locks test to produce valid json  (Siddhesh Poyarekar, 2021-04-18, 1 file, -0/+4)
    The benchtests json allows {function {variant}} categorization of
    results, whereas the pthread-locks tests had
    {function {variant {subvariant}}}, which broke validation.  Fix that by
    serializing the subvariants as variant-subvariant.  Also update the
    schema to recognize the new benchmark attributes after fixing the
    naming conventions.

* Update copyright dates with scripts/update-copyrights  (Paul Eggert, 2021-01-02, 6 files, -6/+6)
    I used these shell commands:

        ../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
        (cd ../glibc && git commit -am"[this commit message]")

    and then ignored the output, which consisted of lines saying "FOO:
    warning: copyright statement not found" for each of 6694 files FOO.

    I then removed trailing white space from benchtests/bench-pthread-locks.c
    and iconvdata/tst-iconv-big5-hkscs-to-2ucs4.c, to work around this
    diagnostic from Savannah:

        remote: *** pre-commit check failed ...
        remote: *** error: lines with trailing whitespace found
        remote: error: hook declined to update refs/heads/master

* Convert Python scripts to Python 3  (Alistair Francis, 2020-03-03, 2 files, -2/+2)
    Change all of the #! lines in Python scripts that are called from
    Makefiles to reference /usr/bin/python3.  All of the scripts called
    from Makefiles are already run with Python 3, so let's make sure they
    are explicitly using Python 3 if called manually.

* Update copyright dates with scripts/update-copyrights.  (Joseph Myers, 2020-01-01, 6 files, -6/+6)

* Add new script for plotting string benchmark JSON output  (Krzysztof Koch, 2019-11-13, 1 file, -0/+395)
    Add a script for visualizing the JSON output generated by existing
    glibc string microbenchmarks.

    Overview:

    plot_strings.py is capable of plotting benchmark results in the
    following formats, which are controlled with the -p or --plot argument:

    1. absolute timings (-p time): plot the timings as they are in the
       input benchmark results file.
    2. relative timings (-p rel): plot the relative timing difference with
       respect to a chosen ifunc (controlled with the -b argument).
    3. performance relative to max (-p max): for each varied parameter
       value, plot 1/timing as a percentage of the maximum value among the
       plotted ifuncs.
    4. throughput (-p thru): plot the varied parameter value over timing.

    For all types of graphs, there is an option to explicitly specify the
    subset of ifuncs to plot using the --ifuncs parameter.  For plot types
    1 and 4, one can hide or expose exact benchmark figures using the
    --values flag.

    When plotting relative timing differences between ifuncs, the first
    ifunc listed in the input JSON file is the baseline, unless the
    baseline implementation is explicitly chosen with the --baseline
    parameter.  For ease of reading, the script marks the statistically
    insignificant range on the graphs.  The default is +-5%, but this value
    can be controlled with the --threshold parameter.

    To accommodate the heterogeneity of benchmark results files, one can
    control, for example, the x-axis scale, the resolution (dpi) of the
    generated figures, or the key used to access the varied parameter value
    in the JSON file.  The corresponding options are --logarithmic,
    --resolution and --key.  The --key parameter ensures that
    plot_strings.py works with all files which pass JSON schema validation.
    The schema can be chosen with the --schema parameter.

    If a window manager is available, one can enable interactive figure
    display using the --display flag.  Finally, one can use the --grid flag
    to enable grid lines in the generated figures.

    Implementation:

    plot_strings.py traverses the JSON tree until a 'results' array is
    found and generates a separate figure for each such array.  The figure
    is then saved to a file in one of the available formats (controlled
    with the --extension parameter).  As the tree is traversed, the
    recursive function tracks the metadata about the test being run, so
    that each figure has a unique and meaningful title and filename.

    While plot_strings.py works with existing benchmarks, provisions have
    been made to allow adding more structure and metadata to these
    benchmarks.  Currently, many benchmarks produce multiple timing values
    for the same value of the varied parameter (typically 'length').
    Multiple data points for the same parameter usually mean that some
    other parameter was varied as well, for example whether memmove's src
    and dst buffers overlap or not (see bench-memmove-walk.c and
    bench-memmove-walk.out).  Unfortunately, this information is not
    exposed in the benchmark output file, so plot_strings.py has to resort
    to computing the geometric mean of these multiple values.  In the
    process, useful information about the benchmark configuration is lost.
    Also, averaging the timings for different alignments can hide useful
    characteristics of the benchmarked ifuncs.

    Testing:

    plot_strings.py has been tested on all existing string microbenchmarks
    which produce results in JSON format.  The script was tested on both
    Windows 10 and Ubuntu 16.04.2 LTS.  It runs on both python 2 and 3
    (2.7.12 and 3.5.12 tested).

    Useful commands:

    1. Plot timings for all ifuncs in bench-strlen.out:

        $ ./plot_strings.py bench-strlen.out

    2. Display help:

        $ ./plot_strings.py -h

    3. Plot throughput for __memset_avx512_unaligned_erms and
       __memset_avx512_unaligned.  Save the generated figure in pdf format
       to 'results/'.  Use logarithmic x-axis scale, show grid lines and
       expose the performance numbers:

        $ ./plot_strings.py bench.out -o results/ -lgv -e pdf -p thru \
            -i __memset_avx512_unaligned_erms __memset_avx512_unaligned

    4. Plot relative timings for all ifuncs in bench.out with
       __generic_memset as baseline.  Display a percentage difference
       threshold of +-10%:

        $ ./plot_strings.py bench.out -p rel -b __generic_memset -t 10

    Discussion:

    I would like to propose relaxing benchout_strings.schema.json to allow
    specifying either a 'results' array with 'timings' (as before) or a
    'variants' array.  See the example below:

        {
          "timing_type": "hp_timing",
          "functions": {
            "memcpy": {
              "bench-variant": "default",
              "ifuncs": ["generic_memcpy", "__memcpy_thunderx"],
              "variants": [
                {
                  "name": "powers of 2",
                  "variants": [
                    {
                      "name": "both aligned",
                      "results": [
                        { "length": 1, "align1": 0, "align2": 0, "timings": [x, y] },
                        { "length": 2, "align1": 0, "align2": 0, "timings": [x, y] },
                        ...
                        { "length": 65536, "align1": 0, "align2": 0, "timings": [x, y] }]
                    },
                    {
                      "name": "dst misaligned",
                      "results": [
                        { "length": 1, "align1": 0, "align2": 0, "timings": [x, y] },
                        { "length": 2, "align1": 0, "align2": 1, "timings": [x, y] },
                        ...

    The 'variants' array consists of objects such that each object has a
    'name' attribute to describe the configuration of a particular test in
    the benchmark.  This can be a description, for example, of how the
    parameter was varied or what buffer alignment was tested.  The 'name'
    attribute is then followed by another 'variants' array or a 'results'
    array.  The nesting of variants allows arbitrary grouping of benchmark
    timings, while allowing description of these groups.  Using recursion,
    it is possible to procedurally create titles and filenames for the
    figures being generated.

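The traversal described under "Implementation" can be sketched as a recursive walk that invokes a plotting callback for every 'results' array it finds, carrying the names seen so far as figure metadata (illustrative only; the real function names in plot_strings.py differ):

    def walk(node, context, plot_results):
        """Recursively search a benchmark JSON tree for 'results' arrays."""
        if isinstance(node, dict):
            if 'results' in node:
                # The accumulated context yields a unique title and filename.
                plot_results(' '.join(context), node['results'])
            for key, value in node.items():
                if key != 'results':
                    walk(value, context + [str(key)], plot_results)
        elif isinstance(node, list):
            for item in node:
                walk(item, context, plot_results)
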
* Prefer https to http for gnu.org and fsf.org URLs  (Paul Eggert, 2019-09-07, 5 files, -5/+5)
    Also, change sources.redhat.com to sourceware.org.  This patch was
    automatically generated by running the following shell script, which
    uses GNU sed, and which avoids modifying files imported from upstream:

        sed -ri '
          s,(http|ftp)(://(.*\.)?(gnu|fsf|sourceware)\.org($|[^.]|\.[^a-z])),https\2,g
          s,(http|ftp)(://(.*\.)?)sources\.redhat\.com($|[^.]|\.[^a-z]),https\2sourceware.org\4,g
        ' \
          $(find $(git ls-files) -prune -type f \
              ! -name '*.po' \
              ! -name 'ChangeLog*' \
              ! -path COPYING ! -path COPYING.LIB \
              ! -path manual/fdl-1.3.texi ! -path manual/lgpl-2.1.texi \
              ! -path manual/texinfo.tex ! -path scripts/config.guess \
              ! -path scripts/config.sub ! -path scripts/install-sh \
              ! -path scripts/mkinstalldirs ! -path scripts/move-if-change \
              ! -path INSTALL ! -path locale/programs/charmap-kw.h \
              ! -path po/libc.pot ! -path sysdeps/gnu/errlist.c \
              ! '(' -name configure \
                    -execdir test -f configure.ac -o -f configure.in ';' ')' \
              ! '(' -name preconfigure \
                    -execdir test -f preconfigure.ac ';' ')' \
              -print)

    and then by running 'make dist-prepare' to regenerate files built from
    the altered files, and then executing the following to cleanup:

        chmod a+x sysdeps/unix/sysv/linux/riscv/configure
        # Omit irrelevant whitespace and comment-only changes,
        # perhaps from a slightly-different Autoconf version.
        git checkout -f \
          sysdeps/csky/configure \
          sysdeps/hppa/configure \
          sysdeps/riscv/configure \
          sysdeps/unix/sysv/linux/csky/configure
        # Omit changes that caused a pre-commit check to fail like this:
        # remote: *** error: sysdeps/powerpc/powerpc64/ppc-mcount.S: trailing lines
        git checkout -f \
          sysdeps/powerpc/powerpc64/ppc-mcount.S \
          sysdeps/unix/sysv/linux/s390/s390-64/syscall.S
        # Omit change that caused a pre-commit check to fail like this:
        # remote: *** error: sysdeps/sparc/sparc64/multiarch/memcpy-ultra3.S: last line does not end in newline
        git checkout -f sysdeps/sparc/sparc64/multiarch/memcpy-ultra3.S

* Update copyright dates with scripts/update-copyrights.  (Joseph Myers, 2019-01-01, 5 files, -5/+5)
    * All files with FSF copyright notices: Update copyright dates using
      scripts/update-copyrights.
    * locale/programs/charmap-kw.h: Regenerated.
    * locale/programs/locfile-kw.h: Likewise.

* benchtests: send non-consumable data to stderr  (Leonardo Sandoval, 2018-12-12, 1 file, -3/+4)
    Non-consumable data, i.e. data not related to benchmarks, should be
    sent to standard error so that pipelines can work as expected.

    * benchtests/scripts/compare_bench.py (do_compare): Write to stderr in
      case stat is not present.
    * benchtests/scripts/compare_bench.py (plot_graphs): Write to stderr in
      case the timings field is not present.  Also the string showing the
      output filename goes to stderr.

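In Python this amounts to directing diagnostics at sys.stderr, for example:

    import sys

    # Benchmark numbers go to stdout; everything else goes to stderr so
    # that a pipeline consuming the output is not polluted.
    print('Skipping foo: no "timings" field', file=sys.stderr)
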
* benchtests: include --stats parameter  (Leonardo Sandoval, 2018-12-12, 1 file, -9/+18)
    Allow the user to pick a statistic, defaulting to min and mean, from
    the command line.  At the same time, if a stat does not exist, catch
    the run-time exception and keep comparing the rest of the benchmarked
    functions.  Finally, take care of division-by-zero exceptions and,
    likewise, keep comparing the rest of the functions, making the script
    a bit more fault tolerant and thus more useful.

    * benchtests/scripts/compare_bench.py (do_compare): Catch KeyError and
      ZeroDivisionError exceptions.
    * benchtests/scripts/compare_bench.py (compare_runs): Use the stats
      argument to loop through user-provided statistics.
    * benchtests/scripts/compare_bench.py (main): Include the --stats
      argument.

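A sketch of the shape of that change; the benchmark dictionary layout and message wording below are assumptions, not the script's actual code:

    import argparse
    import sys

    parser = argparse.ArgumentParser()
    parser.add_argument('--stats', default='min,mean',
                        help='comma-separated list of statistics to compare')
    args = parser.parse_args()

    def compare_runs(bench1, bench2, stats):
        for func in bench1['functions']:
            for stat in stats.split(','):
                try:
                    d = (bench2['functions'][func][stat] /
                         bench1['functions'][func][stat] - 1.0)
                except KeyError:
                    print('%s: no %s statistic, skipping' % (func, stat),
                          file=sys.stderr)
                    continue
                except ZeroDivisionError:
                    print('%s: zero baseline for %s, skipping' % (func, stat),
                          file=sys.stderr)
                    continue
                print('%s %s: %+.2f%%' % (func, stat, d * 100))
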
* benchtests: keep comparing even if function timings do not match  (Leonardo Sandoval, 2018-12-12, 1 file, -1/+1)
    Allows other functions to be processed, making the script a bit more
    fault tolerant and thus more useful.

    * benchtests/scripts/compare_bench.py (compare_runs): Continue instead
      of return.

* benchtests: Set float type on --threshold argument  (Leonardo Sandoval, 2018-10-08, 1 file, -1/+1)
    Otherwise, we see the following runtime error when using the parameter:

        File "./glibc/benchtests/scripts/compare_bench.py", line 46, in do_compare
          if d > threshold:
        TypeError: '>' not supported between instances of 'float' and 'str'

    * benchtests/scripts/compare_bench.py (main): Set float type on the
      threshold argument.

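The fix boils down to telling argparse to convert the value, along these lines (default value shown is illustrative):

    import argparse

    parser = argparse.ArgumentParser()
    # Without type=float the value arrives as a str, and comparing a float
    # against a str raises the TypeError shown above.
    parser.add_argument('--threshold', default=10.0, type=float,
                        help='percentage difference treated as significant')
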
* [benchtests] Add workload test properties to schema  (Siddhesh Poyarekar, 2018-08-11, 1 file, -0/+4)
    Add the workload test properties (max-throughput, latency, etc.) to the
    schema to prevent benchmark output validation from failing.

    * benchtests/scripts/benchout.schema.json (properties): Add new
      properties.

* [benchtests] Fix compare_strings.py for python2  (Siddhesh Poyarekar, 2018-08-03, 1 file, -2/+3)
    Python 2 does not have a FileNotFoundError, so drop it in favour of
    simply printing out the last (and most informative) line of the
    exception.

    * benchtests/scripts/compare_strings.py: Import traceback.
      (parse_file): Pretty-print error.

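Since FileNotFoundError does not exist on Python 2, a portable pattern is to catch a broader error and print only the traceback's last line; the function body here is a sketch rather than the real parse_file:

    import sys
    import traceback

    def parse_file(filename):
        try:
            with open(filename) as f:
                return f.read()
        except IOError:
            # The last line of the traceback is the most informative part.
            sys.exit(traceback.format_exc().strip().split('\n')[-1])
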
* benchtests: improve argument parsing through argparse library  (Leonardo Sandoval, 2018-07-19, 1 file, -21/+19)
    The argparse library is used in the compare_bench script to improve
    command line argument parsing.  The schema validation file is now
    optional, reducing the number of required parameters by one.

    * benchtests/scripts/compare_bench.py (__main__): Use the argparse
      library to improve command line parsing.
      (__main__): Make the schema file an optional parameter (--schema),
      defaulting to benchtests/scripts/benchout.schema.json.
      (main): Move the parsing stuff out to __main__ and leave main only as
      a caller of the main comparison functions.

* benchtests: Add -f/--functions argument  (H.J. Lu, 2018-06-12, 1 file, -10/+42)
    On x86-64, there may be multiple IFUNC implementations for a given
    function, but we may be interested in only a subset of them.  This
    patch adds the -f/--functions argument to compare a subset of IFUNC
    implementations.

    * benchtests/scripts/compare_strings.py (process_results): Add funcs
      argument.  Compare only functions which are selected.
      (main): Check if the base function is among selected functions.
      Pass selected functions to process_results.
      (__main__): Add -f/--functions argument.

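The selection logic could look roughly like this (a sketch; option handling and helper names are illustrative):

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('-f', '--functions', nargs='*',
                        help='IFUNC implementations to compare (default: all)')
    args = parser.parse_args()

    def select_ifuncs(all_ifuncs, wanted):
        """Keep only the requested IFUNCs; an empty selection means all."""
        if not wanted:
            return list(all_ifuncs)
        return [name for name in all_ifuncs if name in wanted]
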
* benchtests: Catch exceptions in input arguments  (Leonardo Sandoval, 2018-06-01, 1 file, -10/+24)
    Catch runtime exceptions in case the user provided a wrong base
    function, attribute(s) or input file.  In any of these cases, quit
    immediately with a non-zero return code.

    * benchtests/scripts/compare_string.py (process_results): Catch the
      exception for a non-existent base_func and the exception for a
      non-existent attribute.
      (parse_file): Catch the exception for a non-existent input file.

* benchtests: Add --no-diff and --no-header options  (Leonardo Sandoval, 2018-06-01, 1 file, -10/+18)
    Having a string comparison report with neither diff numbers nor a
    header yields output that is easier to consume by other tools.

    * benchtests/scripts/compare_string.py: Add --no-diff and --no-header
      options to skip the diff calculation and omit the header,
      respectively.
      (main): Process --no-diff and --no-header.

* Update copyright dates with scripts/update-copyrights.  (Joseph Myers, 2018-01-01, 5 files, -5/+5)
    * All files with FSF copyright notices: Update copyright dates using
      scripts/update-copyrights.
    * locale/programs/charmap-kw.h: Regenerated.
    * locale/programs/locfile-kw.h: Likewise.

* benchtests: Expand range of test names in schema.json  (Victor Rodriguez, 2017-11-28, 1 file, -1/+1)
    When executing bench-math, the benchmark output is invalid with this
    error message:

        Invalid benchmark output: 'workload-spec2006.wrf' does not match
        any of the regexes: '^[_a-zA-Z0-9]*$'

    or

        Invalid benchmark output: Additional properties are not allowed
        ('workload-spec2006.wrf' was unexpected)

    The error was seen when running the tests workload-spec2006.wrf,
    'stack=1024,guard=1' and 'stack=1024,guard=2'.  The problem is that the
    current regexes do not accept the hyphen, dot, equals sign or comma in
    the output.  This patch changes the regex in benchout.schema.json to
    accept these symbols in benchmark test names.

    ChangeLog:
    * benchtests/scripts/benchout.schema.json: Fix regex to accept a wider
      range of test names.

    Signed-off-by: Victor Rodriguez <victor.rodriguez.bahena@intel.com>
    Reviewed-By: Siddhesh Poyarekar <siddhesh@sourceware.org>

* benchtests: Adjust valid and accepted properties  (Victor Rodriguez, 2017-11-28, 1 file, -1/+2)
    Benchmark workload-spec2006.wrf does not produce max, min or mean
    results but instead produces throughput.  This is represented in
    benchtests/bench-skeleton.c.  This patch adjusts benchout.schema.json
    to consider bench.out from bench-math benchmarks as valid.

    ChangeLog:
    * benchtests/scripts/benchout.schema.json: Add throughput as an
      accepted result property and remove "max", "min" and "mean" from the
      required properties, based on benchtests/bench-skeleton.c.

    Signed-off-by: Victor Rodriguez <victor.rodriguez.bahena@intel.com>
    Reviewed-By: Siddhesh Poyarekar <siddhesh@sourceware.org>

* benchtests: New -g option to generate graphs in compare_strings.py  (Siddhesh Poyarekar, 2017-09-16, 1 file, -3/+8)
    The compare_strings.py script unconditionally generates a graph PNG
    image of the input data, which can be unnecessary and slow.  Put this
    behind an optional flag -g.

    * benchtests/scripts/compare_strings.py: New option -g.
      (draw_graph): Print a message that a graph is being generated.
      (process_results): Generate graph only if -g is passed.
      (main): Process option -g.

* benchtests: Make compare_strings.py output a bit prettier  (Siddhesh Poyarekar, 2017-09-16, 1 file, -9/+11)
    Make the column widths for the outputs fixed so that they look a little
    less messy.  They will still look bad with lots of IFUNCs (like on x86)
    but it's still a step forward.

    * benchtests/scripts/compare_strings.py (process_results): Better
      spacing for output.

* benchtests: Use argparse to parse arguments  (Siddhesh Poyarekar, 2017-09-16, 1 file, -13/+24)
    Make the script more usable by adding proper command line options along
    with a way to query the options.  The script is capable of doing a
    bunch of things right now, like choosing a base for comparison,
    choosing to generate graphs, etc., and they should be accessible via
    command line switches.

    * benchtests/scripts/compare_strings.py: Use argparse.
    * benchtests/README: Document existence of compare_strings.py.

* Add math benchmark latency test  (Wilco Dijkstra, 2017-08-17, 1 file, -6/+14)
    This patch further improves math function benchmarking by adding a
    latency test in addition to throughput.  This enables more accurate
    comparisons of the math functions.  The latency test works by creating
    a dependency on the previous iteration:

        func_res = F (func_res * zero + input[i]);

    The multiply by zero avoids changing the input.

    It reports reciprocal throughput and latency in nanoseconds (depending
    on the timing header used) and max/min throughput in iterations per
    second:

        "workload-spec2006.wrf": {
          "reciprocal-throughput": 100,
          "latency": 200,
          "max-throughput": 1.0e+07,
          "min-throughput": 5.0e+06
        }

    * benchtests/bench-skeleton.c (main): Add support for latency
      benchmarking.
    * benchtests/scripts/bench.py: Add support for latency benchmarking.

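The dependency chain is easiest to see in a small sketch; the real loop lives in bench-skeleton.c, so this Python rendering is purely illustrative:

    def measure_latency(func, inputs):
        """Each call depends on the previous result, so calls cannot overlap."""
        zero = 0.0
        res = 0.0
        for x in inputs:
            # Multiplying by zero leaves the input unchanged while still
            # forcing a data dependency on the previous iteration's result.
            res = func(res * zero + x)
        return res
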
* benchtests: Avoid a display error when running in text terminal  (Siddhesh Poyarekar, 2017-08-08, 1 file, -0/+2)
    The compare_strings.py script generates a graph for the benchmarks it
    performs a comparison on, and that fails if X is not available.  Avoid
    the error and ensure that only the graph is generated and saved as a
    PNG file.

    * benchtests/scripts/compare_strings.py: Avoid display error when
      generating graph.

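The usual way to make matplotlib render without an X display is to select a non-interactive backend before pyplot is imported; presumably the fix is along these lines (file names are examples):

    import matplotlib
    matplotlib.use('Agg')               # must run before importing pyplot
    import matplotlib.pyplot as plt

    plt.plot([1, 2, 3], [10, 20, 15])
    plt.savefig('bench-memcpy.png')     # written to disk, no window needed
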
* benchtests: Allow selecting baseline for compare_string.py  (Siddhesh Poyarekar, 2017-08-08, 1 file, -10/+18)
    This patch allows one to provide, via an optional -base option, the
    name of a function against which all other functions are compared.
    This is useful when pitching one implementation of a string function
    against alternatives.  In the absence of this option, comparisons are
    done against the first ifunc in the list.

    * benchtests/scripts/compare_strings.py (main): Add an optional -base
      option.
      (process_results): New argument base_func.

* benchtests: New script to parse memcpy results  (Siddhesh Poyarekar, 2017-06-22, 2 files, -0/+173)
    Read the memcpy results in json and print out the results in tabular
    form, in addition to generating a graph of the results to compare all
    of the implementations.  The format of the output is extensible enough
    to allow this kind of analysis to be done on other string functions as
    well.

    * benchtests/scripts/benchout_strings.schema.json: New file.
    * benchtests/scripts/compare_strings.py: New file.

* Update copyright dates with scripts/update-copyrights.  (Joseph Myers, 2017-01-01, 4 files, -4/+4)

* Update copyright dates with scripts/update-copyrights.  (Joseph Myers, 2016-01-04, 4 files, -4/+4)

* benchtests: Mark output variables as used  (Siddhesh Poyarekar, 2015-11-17, 1 file, -1/+1)
    Prevent function calls that don't return anything from being optimized
    out by the compiler by marking their input variables as used.  This
    prevents the sincos function call from being optimized out in the
    benchmark.

* benchtest: script to compare two benchmarks  (Siddhesh Poyarekar, 2015-06-01, 2 files, -0/+280)
    This script is a sample implementation that uses import_bench to
    construct two benchmark objects and compare them.  If detailed timing
    information is available (when one does `make DETAILED=1 bench`), it
    writes out graphs for all functions it benchmarks and prints
    significant differences in timings of the two benchmark runs.  If
    detailed timing information is not available, it points out significant
    differences in aggregate times.

    Call this script as follows:

        compare_bench.py schema_file.json bench1.out bench2.out

    Alternatively, if one wants to set a different threshold for warnings
    (default is a 10% difference):

        compare_bench.py schema_file.json bench1.out bench2.out 25

    The threshold in the example above is 25%.  schema_file.json is the
    JSON schema (which is $srcdir/benchtests/scripts/benchout.schema.json
    for the benchmark output file) and bench1.out and bench2.out are the
    two benchmark output files to compare.

    The key functionality here is the compress_timings function, which
    groups together points that are close together into a single point
    that is the mean of all its representative points.  Any point in such
    a group is at most 1.5x the smallest point in that group.  The
    detailed derivation is a comment in the function.

    * benchtests/scripts/compare_bench.py: New file.
    * benchtests/scripts/import_bench.py (mean): New function.
      (split_list): Likewise.
      (do_for_all_timings): Likewise.
      (compress_timings): Likewise.

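The grouping rule (every member of a group stays within 1.5x of the group's smallest member, and the group is replaced by its mean) can be sketched on a plain list; the real compress_timings operates on the parsed benchmark object rather than a bare list:

    def compress_timings_sketch(points):
        """Collapse sorted timings into group means; a group never spans
        more than 1.5x its smallest member."""
        points = sorted(points)
        groups, current = [], [points[0]]
        for p in points[1:]:
            if p <= 1.5 * current[0]:
                current.append(p)
            else:
                groups.append(sum(current) / len(current))
                current = [p]
        groups.append(sum(current) / len(current))
        return groups

    # compress_timings_sketch([100, 110, 140, 300, 320]) -> [116.66..., 310.0]
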
* New module to import and process benchmark output  (Siddhesh Poyarekar, 2015-06-01, 2 files, -25/+71)
    This is the beginning of a module to import and process benchmark
    outputs.  The module currently supports importing a bench.out and
    validating it against a schema file.  In future this could grow a set
    of routines that benchmark consumers may find useful for building their
    own analysis tools.  I have altered validate_benchout.py to use this
    module too.

    * benchtests/scripts/import_bench.py: New file.
    * benchtests/scripts/validate_benchout.py: Import import_bench instead
      of jsonschema.
      (validate_bench): Remove function.
      (main): Use import_bench.

* Update copyright dates with scripts/update-copyrights.  (Joseph Myers, 2015-01-02, 2 files, -2/+2)

* Validate bench.out against a JSON schema  (Siddhesh Poyarekar, 2014-06-11, 2 files, -0/+127)
    This patch adds a JSON schema for the benchmark output file and also
    adds a script that validates the generated output against the schema.

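Validating a benchmark result against a JSON schema takes only a few lines with the jsonschema package; a minimal sketch of the idea (file names are examples):

    import json
    import jsonschema

    with open('benchout.schema.json') as schemafile:
        schema = json.load(schemafile)
    with open('bench.out') as benchfile:
        bench = json.load(benchfile)

    # Raises jsonschema.exceptions.ValidationError if bench.out does not conform.
    jsonschema.validate(bench, schema)
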
* benchtests: Add new directive for benchmark initialization hook  (Siddhesh Poyarekar, 2014-05-26, 1 file, -1/+6)
    Add a new 'init' directive that specifies the name of the function to
    call to do function-specific initialization.  This is useful for
    benchmarks that need to do a one-time initialization before the
    functions are executed.

* Detailed benchmark outputs for functions  (Siddhesh Poyarekar, 2014-03-29, 1 file, -1/+5)
    This patch adds an option to get detailed benchmark output for
    functions.  Invoking the benchmark with 'make DETAILED=1 bench' causes
    each benchmark program to store a mean execution time for each input it
    works on.  This is useful to give a more comprehensive picture of the
    performance of functions compared to just the single mean figure.

* Make bench.out in json format  (Siddhesh Poyarekar, 2014-03-29, 1 file, -1/+1)
    This patch changes the output format of the main benchmark output file
    (bench.out) to an extensible format.  I chose JSON over XML because, in
    addition to being extensible, it is also not too verbose.  Additionally
    it has good support in python.

    The significant change I have made in terms of functionality is to put
    timing information as an attribute in JSON instead of a string, and to
    do that there is a separate program that prints out a JSON snippet
    mentioning the type of timing (hp_timing or clock_gettime).  The mean
    timing has now changed from iterations per unit to actual timing per
    iteration.

* benchtests: Move bench.py to benchtests/scripts/  (Siddhesh Poyarekar, 2014-03-24, 1 file, -0/+299)
    It makes much more sense to have all benchmarking-related scripts in a
    single place away from everything else.