author    Siddhesh Poyarekar <siddhesh@redhat.com>  2015-06-01 23:14:11 +0530
committer Siddhesh Poyarekar <siddhesh@redhat.com>  2015-06-01 23:14:11 +0530
commit    0cd2828695cc328aa1b48379436d15c39d433076 (patch)
tree      c7f6fc5aab240ffdb8c7f321056ea9185c3ec154 /benchtests/scripts/import_bench.py
parent    0994b9b6f685d460ee72170824a5393b592dc3c5 (diff)
benchtest: script to compare two benchmarks
This script is a sample implementation that uses import_bench to
construct two benchmark objects and compare them.  If detailed timing
information is available (when one does `make DETAILED=1 bench`), it
writes out graphs for all functions it benchmarks and prints
significant differences in timings of the two benchmark runs.  If
detailed timing information is not available, it points out
significant differences in aggregate times.
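
As a rough illustration of the aggregate comparison described above (a
minimal sketch, not the script's actual logic; compare_aggregates is a
hypothetical helper, and the object layout is the one import_bench
produces in the diff below):

  def compare_aggregates(bench1, bench2, threshold=10.0):
      """Print variants whose mean timings differ by more than threshold%."""
      for func in bench1['functions']:
          for var in bench1['functions'][func]:
              t1 = bench1['functions'][func][var].get('timings')
              t2 = bench2['functions'][func][var].get('timings')
              if not t1 or not t2:
                  continue
              # Relative difference of the two means, in percent.
              m1 = sum(t1) / len(t1)
              m2 = sum(t2) / len(t2)
              diff = (m2 - m1) / m1 * 100
              if abs(diff) > threshold:
                  print('%s(%s): %.2f%% difference' % (func, var, diff))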

Call this script as follows:

  compare_bench.py schema_file.json bench1.out bench2.out

Alternatively, if one wants to set a different threshold for warnings
(default is a 10% difference):

  compare_bench.py schema_file.json bench1.out bench2.out 25

The threshold in the example above is 25%.  schema_file.json is the
JSON schema (which is $srcdir/benchtests/scripts/benchout.schema.json
for the benchmark output file) and bench1.out and bench2.out are the
two benchmark output files to compare.

The key functionality here is the compress_timings function, which
groups together points that are close to each other into a single point
that is the mean of all its representative points.  The mean of such a
group is at most 1.5x the smallest point in that group and, for evenly
distributed points, the largest point is under 2x the smallest.  The
detailed derivation is in a comment in the function.
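
A quick usage sketch of the grouping (the 'malloc' numbers below are
made up, and importing import_bench assumes the jsonschema module is
available):

  import import_bench as bench

  # Minimal hand-built benchmark object with only the keys that
  # compress_timings looks at.
  b = {'functions': {'malloc': {'': {'timings': [1.0, 1.1, 1.2, 5.0, 5.2]}}}}
  bench.compress_timings(b)
  # The three close points collapse into their mean and so do the two
  # larger ones, leaving approximately [1.1, 5.1].
  print(b['functions']['malloc']['']['timings'])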

	* benchtests/scripts/compare_bench.py: New file.
	* benchtests/scripts/import_bench.py (mean): New function.
	(split_list): Likewise.
	(do_for_all_timings): Likewise.
	(compress_timings): Likewise.
Diffstat (limited to 'benchtests/scripts/import_bench.py')
-rw-r--r--  benchtests/scripts/import_bench.py  96
1 file changed, 96 insertions, 0 deletions
diff --git a/benchtests/scripts/import_bench.py b/benchtests/scripts/import_bench.py
index 81248c2adf..d37ff62383 100644
--- a/benchtests/scripts/import_bench.py
+++ b/benchtests/scripts/import_bench.py
@@ -25,6 +25,102 @@ except ImportError:
     raise
 
 
+def mean(lst):
+    """Compute and return mean of numbers in a list
+
+    The numpy average function has horrible performance, so implement our
+    own mean function.
+
+    Args:
+        lst: The list of numbers to average.
+    Return:
+        The mean of members in the list.
+    """
+    return sum(lst) / len(lst)
+
+
+def split_list(bench, func, var):
+    """ Split the list into a smaller set of more distinct points
+
+    Group together points such that the difference between the smallest
+    point and the mean is less than 1/3rd of the mean.  This means that
+    the mean is at most 1.5x the smallest member of that group.
+
+    mean - xmin < mean / 3
+    i.e. 2 * mean / 3 < xmin
+    i.e. mean < 3 * xmin / 2
+
+    For an evenly distributed group, the largest member will be less than
+    twice the smallest member of the group.
+    Derivation:
+
+    An evenly distributed series would be xmin, xmin + d, xmin + 2d...
+
+    mean = (2 * n * xmin + n * (n - 1) * d) / (2 * n)
+    and the max element is xmin + (n - 1) * d.
+
+    Now, mean < 3 * xmin / 2
+
+    3 * xmin > 2 * mean
+    3 * xmin > (2 * n * xmin + n * (n - 1) * d) / n
+    3 * n * xmin > 2 * n * xmin + n * (n - 1) * d
+    n * xmin > n * (n - 1) * d
+    xmin > (n - 1) * d
+    2 * xmin > xmin + (n-1) * d
+    2 * xmin > xmax
+
+    Hence, proved.
+
+    Similarly, it is straightforward to show that when the aggregation
+    is done using the maximum element instead, the largest element of
+    the group is at most 4/3 times the mean.
+
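+    For a concrete check (illustrative numbers, not from any benchmark
+    run): the evenly spaced list [10, 12, 14] has n = 3, xmin = 10 and
+    d = 2; its mean 12 is below 3 * xmin / 2 = 15 and its largest
+    element 14 is below 2 * xmin = 20.
+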
+    Args:
+        bench: The benchmark object
+        func: The function name
+        var: The function variant name
+    """
+    means = []
+    lst = bench['functions'][func][var]['timings']
+    last = len(lst) - 1
+    while lst:
+        for i in range(last + 1):
+            avg = mean(lst[i:])
+            # A group is complete when its largest member (lst[last],
+            # assuming the timings list is sorted in ascending order)
+            # is less than 4/3 times the group mean, i.e. when
+            # mean > 0.75 * max.
+            if avg > 0.75 * lst[last]:
+                means.insert(0, avg)
+                lst = lst[:i]
+                last = i - 1
+                break
+    bench['functions'][func][var]['timings'] = means
+
+
+def do_for_all_timings(bench, callback):
+    """Call a function for all timing objects for each function and its
+    variants.
+
+    Args:
+        bench: The benchmark object
+        callback: The callback function
+    """
+    for func in bench['functions']:
+        for k in bench['functions'][func]:
+            if 'timings' not in bench['functions'][func][k]:
+                continue
+
+            callback(bench, func, k)
+
+
+def compress_timings(points):
+    """Club points with close enough values into a single mean value
+
+    See split_list for details on how the clubbing is done.
+
+    Args:
+        points: The set of points.
+    """
+    do_for_all_timings(points, split_list)
+
+
 def parse_bench(filename, schema_filename):
     """Parse the input file