path: root/benchtests/scripts
Commit message | Author | Age | Files | Lines
* Validate bench.out against a JSON schema | Siddhesh Poyarekar | 2014-06-11 | 2 | -0/+127
This patch adds a JSON schema for the benchmark output file and also adds a script that validates the generated output against the schema.
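In outline, such a validator can be very small. The following is a minimal sketch in Python, assuming the third-party jsonschema package and illustrative file names (bench.out for the output, benchout.schema.json for the schema); the actual script and schema added by this commit may differ in detail:

    #!/usr/bin/env python
    # Minimal sketch: validate a benchmark output file against a JSON schema.
    # The file names and the use of the jsonschema package are assumptions
    # for illustration, not a description of the committed script.
    import json
    import sys

    import jsonschema


    def validate(bench_file, schema_file):
        with open(schema_file) as f:
            schema = json.load(f)
        with open(bench_file) as f:
            output = json.load(f)
        # Raises jsonschema.exceptions.ValidationError on mismatch.
        jsonschema.validate(output, schema)


    if __name__ == '__main__':
        validate(sys.argv[1], sys.argv[2])
        print('Validation passed')

Run as, for example, 'python validate.py bench.out benchout.schema.json'.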
* benchtests: Add new directive for benchmark initialization hook | Siddhesh Poyarekar | 2014-05-26 | 1 | -1/+6
Add a new 'init' directive that specifies the name of the function to call for function-specific initialization. This is useful for benchmarks that need to do a one-time initialization before the benchmarked functions are executed.
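For illustration, an inputs file for a hypothetical benchmark might request the hook like this; the '## init:' line names the function to call, my_bench_init is a made-up name, and the other '##' lines simply mirror the existing directive syntax:

    ## includes: math.h
    ## init: my_bench_init
    ## args: double
    ## ret: double
    0.5
    0.9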
* Detailed benchmark outputs for functions | Siddhesh Poyarekar | 2014-03-29 | 1 | -1/+5
This patch adds an option to get detailed benchmark output for functions. Invoking the benchmark with 'make DETAILED=1 bench' causes each benchmark program to store the mean execution time for each input it works on. This gives a more comprehensive picture of a function's performance than the single overall mean figure.
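In terms of the JSON output described in the next entry, the per-input means could plausibly appear as a list inside each function's record. The fragment below only sketches that idea; the field names ("mean", "timings") and the numbers are placeholders, not the exact bench.out layout:

    "acos": {
      "": {
        "mean": 300.0,
        "timings": [298.1, 305.4, 296.5]
      }
    }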
* Make bench.out in json format | Siddhesh Poyarekar | 2014-03-29 | 1 | -1/+1
This patch changes the output format of the main benchmark output file (bench.out) to an extensible format. I chose JSON over XML because, in addition to being extensible, it is not too verbose and has good support in Python.

The significant functional change is that timing information is now stored as a JSON attribute rather than as a string; to support this, a separate program prints out a JSON snippet identifying the type of timing (hp_timing or clock_gettime). The mean timing has also changed from iterations per unit time to actual timing per iteration.
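A consumer of the new format can read both pieces of information directly from the parsed JSON. A minimal sketch in Python, assuming top-level keys named 'timing_type' and 'functions' and a per-variant 'mean' field (the real key names in bench.out may differ):

    #!/usr/bin/env python
    # Minimal sketch: print the timing type and the mean time per iteration
    # for each benchmarked function.  The key names used here are assumptions
    # about the bench.out format, not a definitive reference.
    import json
    import sys

    with open(sys.argv[1]) as f:
        bench = json.load(f)

    # The timing type (hp_timing or clock_gettime) is a JSON attribute,
    # not free-form text.
    print('Timing type:', bench['timing_type'])

    for func, variants in bench['functions'].items():
        for variant, props in variants.items():
            print(func, variant or '(default)', 'mean:', props['mean'])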
* benchtests: Move bench.py to benchtests/scripts/ | Siddhesh Poyarekar | 2014-03-24 | 1 | -0/+299
It makes much more sense to have all benchmarking-related scripts in a single place away from everything else.