This patch adds a JSON schema for the benchmark output file, along with
a script that validates the generated output against that schema.
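
A minimal sketch of such a validation step in Python, assuming the
third-party jsonschema package; the file names (bench.out for the
output, benchout.schema.json for the schema) and the function name are
illustrative, not necessarily those used by the actual script:

    import json
    import sys

    import jsonschema  # third-party: pip install jsonschema

    def validate_bench(bench_path, schema_path):
        """Validate a benchmark output file against a JSON schema.

        Raises jsonschema.ValidationError if the output does not
        conform to the schema.
        """
        with open(schema_path) as f:
            schema = json.load(f)
        with open(bench_path) as f:
            output = json.load(f)
        jsonschema.validate(instance=output, schema=schema)
        print('%s validates against %s' % (bench_path, schema_path))

    if __name__ == '__main__':
        validate_bench(sys.argv[1], sys.argv[2])

Invoked as, for example: python validate_bench.py bench.out
benchout.schema.json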
|
Add a new 'init' directive that specifies the name of a function to
call to do function-specific initialization. This is useful for
benchmarks that need one-time initialization before the functions are
executed.
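
For illustration, a benchmark input file using the new directive might
look like this (a sketch: 'bench_init' and the input values are
hypothetical, and the '##' prefix assumes the directive convention
already used by the benchmark input files):

    ## args: double
    ## ret: double
    ## init: bench_init
    0.9
    2.3

Here bench_init would be called once before any of the inputs below it
are timed.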
|
This patch adds an option to get detailed benchmark output for
functions. Invoking the benchmark with 'make DETAILED=1 bench' causes
each benchmark program to store a mean execution time for every input
it works on. This gives a more comprehensive picture of a function's
performance than the single overall mean figure.
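
With DETAILED=1, the per-input timings could then appear alongside the
aggregate statistics, along these lines (an illustrative sketch only;
the function name, field names, and values are assumptions, not the
exact format produced by the patch):

    "acos": {
      "": {
        "duration": 1.8e9,
        "iterations": 1500000,
        "mean": 1200.0,
        "timings": [1100.0, 1190.0, 1350.0, 1160.0]
      }
    }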
|
This patch changes the output format of the main benchmark output file
(bench.out) to an extensible format. I chose JSON over XML because, in
addition to being extensible, it is not too verbose and has good
support in Python.
The significant functional change is that timing information is now a
JSON attribute rather than a string; to support this, a separate
program prints out a JSON snippet recording the type of timing in use
(hp_timing or clock_gettime). The mean timing has also changed from
iterations per unit of time to actual time per iteration.
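
The resulting bench.out could look something like this (a sketch; the
function name, values, and exact field names are illustrative, with
the authoritative structure given by the JSON schema added in a
separate patch):

    {
      "timing_type": "hp_timing",
      "functions": {
        "acos": {
          "": {
            "duration": 1.8e9,
            "iterations": 1500000,
            "max": 2400.0,
            "min": 1020.0,
            "mean": 1200.0
          }
        }
      }
    }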
|
It makes much more sense to have all benchmarking-related scripts in a
single place away from everything else.