| Commit message | Author | Age | Files | Lines |
| |
Building benchmarks as static executables:
=========================================
To build benchmarks as static executables, on the build system, run:
$ make STATIC-BENCHTESTS=yes bench-build
You can copy the benchmark executables to another machine and run them
without copying the source or build directories.
| |
No bug. Remove the reallocation of bufs between implementation tests,
move initialization outside of the for-each-implementation test loop,
and increase the iteration count.
Before this commit there was a great deal of variability between runs.
The goal of this commit is to make the results more reliable.
The benchtests build and bench-memcmp succeeds.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
| |
This patch fixes two exceptions in validate_benchout.py:
1) AttributeError
if benchout_strings.schema.json is specified, and
2) json.decoder.JSONDecodeError
if the benchout file is not valid JSON.
$ ~/glibc/benchtests/scripts/validate_benchout.py bench-memset.out \
~/glibc/benchtests/scripts/benchout_strings.schema.json
Traceback (most recent call last):
File "/home/naohirot/glibc/benchtests/scripts/validate_benchout.py", line 86, in <module>
sys.exit(main(sys.argv[1:]))
File "/home/naohirot/glibc/benchtests/scripts/validate_benchout.py", line 69, in main
bench.parse_bench(args[0], args[1])
File "/home/naohirot/glibc/benchtests/scripts/import_bench.py", line 139, in parse_bench
do_for_all_timings(bench, lambda b, f, v:
File "/home/naohirot/glibc/benchtests/scripts/import_bench.py", line 107, in do_for_all_timings
if 'timings' not in bench['functions'][func][k].keys():
AttributeError: 'str' object has no attribute 'keys'
$ ~/glibc/benchtests/scripts/validate_benchout.py bench-math-inlines.out \
~/glibc/benchtests/scripts/benchout_strings.schema.json
Traceback (most recent call last):
File "/home/naohirot/glibc/benchtests/scripts/validate_benchout.py", line 86, in <module>
sys.exit(main(sys.argv[1:]))
File "/home/naohirot/glibc/benchtests/scripts/validate_benchout.py", line 69, in main
bench.parse_bench(args[0], args[1])
File "/home/naohirot/glibc/benchtests/scripts/import_bench.py", line 137, in parse_bench
bench = json.load(benchfile)
File "/usr/lib/python3.6/json/__init__.py", line 299, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "/usr/lib/python3.6/json/__init__.py", line 354, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.6/json/decoder.py", line 342, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 17 (char 16)
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
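A minimal sketch of the kind of guards involved (illustrative only, not
the actual patch; function names mirror the traceback above):

import json
import sys

def load_bench(filename):
    """Fail with a readable message instead of a JSONDecodeError traceback."""
    try:
        with open(filename) as benchfile:
            return json.load(benchfile)
    except json.JSONDecodeError as err:
        sys.exit('%s: not valid JSON: %s' % (filename, err))

def do_for_all_timings(bench, callback):
    """Apply CALLBACK only to dict-valued variants that carry 'timings',
    skipping the plain strings allowed by benchout_strings.schema.json."""
    for func in bench['functions']:
        for k, variant in bench['functions'][func].items():
            if isinstance(variant, dict) and 'timings' in variant:
                callback(bench, func, k)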
| |
This patch removes a redundant "#include <assert.h>" from
bench-memset-large.c and bench-memset-walk.c.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
| |
This patch enables scripts/plot_strings.py to read a benchmark result
file from stdin.
To keep backward compatibility, that is, to keep accepting multiple
benchmark result files as arguments, an empty argument list does not
mean stdin; only '-' does.
Therefore the nargs parameter of the ArgumentParser.add_argument()
method is kept as '+' rather than changed to '?'.
ex:
$ jq '.' bench-memset.out | plot_strings.py -
$ jq '.' bench-memset.out | plot_strings.py - bench-memset-large.out
$ plot_strings.py bench-memset.out bench-memset-large.out
error ex:
$ jq '.' bench-memset.out | plot_strings.py
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
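An illustrative argparse sketch of that behaviour (not the exact
plot_strings.py code; the argument name is made up):

import argparse
import json
import sys

parser = argparse.ArgumentParser()
# nargs stays '+': at least one argument is still required, so running
# the script with no files at all remains an error.
parser.add_argument('bench_results', nargs='+',
                    help="benchmark result files, or '-' to read stdin")
args = parser.parse_args()

for path in args.bench_results:
    if path == '-':
        data = json.load(sys.stdin)     # '-' selects stdin
    else:
        with open(path) as f:           # anything else is a regular file
            data = json.load(f)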
| |
They provide TLS_GD/TLS_LD/TLS_IE/TLS_LE macros for TLS testing. Now
that we have migrated to __thread and tls_model attributes, these macros
are unused and the tls-macros.h files can be retired.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
| |
These comments refer to slow paths that were removed in
glibc 2.34 or earlier. The corresponding "names" that yield
separate workload traces for "make bench" are thus obsolete.
We are however keeping the corresponding inputs.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
| |
| |
The benchmark and tests must fail in case of allocation failure in the
implementation array. Also annotate the x* allocators in support.h so
that the compiler has more information about them.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
| |
This patch fixes an mprotect system call failure on AArch64.
The failure happened not only on A64FX but also on ThunderX2.
This patch also updates the JSON key "max-size" to "length" so that
'plot_strings.py' can process 'bench-memcpy-random.out'.
| |
This patch adds support for testing memcpy with both dst > src and
dst < src. Since memcpy is implemented as memmove, which has separate
control flows for certain sizes depending on whether dst > src, this
seems like 1) information that should be provided in the benchtest
output and 2) a variable that can be controlled for the benchmarks.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
| |
No bug. This commit adds some additional performance test cases to
bench-memcmp.c and test-memcmp.c. The new benchtests include some
medium range sizes, as well as small sizes near a page cross. The new
correctness tests correspond to the new benchtests but add some
additional cases for checking the page-cross logic.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
| |
Convert the output of benchtests/bench-rawmemchr to JSON like other string
benchmarks. This makes the output more parseable and allows usage of
compare_strings.py, for example.
Reviewed-by: Lucas A. M. Magalhaes <lamm@linux.ibm.com>
| |
These workload traces cover the whole "long double" range.
This patch was prepared with the help of Adhemerval Zanella.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
| |
No bug. This commit adds some additional cases for bench-memchr.c
including testing medium sizes and testing short lengths with both an
in-bounds match and an out-of-bounds match.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
| |
Collect data on memcpy from 2KB to 4KB in 64-byte increments.
| |
No bug. This commit adds test cases and benchmarks for page cross and
for memset to the end of the page without crossing. In test-memset.c
this commit also adds sentinels at the start and end of tstbuf to test
for overwrites.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
| |
Variant names don't accept brackets.
| |
The benchtests JSON allows {function {variant}} categorization of
results, whereas the pthread-locks tests had {function {variant
{subvariant}}}, which broke validation. Fix that by serializing the
subvariants as variant-subvariant. Also update the schema to
recognize the new benchmark attributes after fixing the naming
conventions.
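A rough illustration of the flattening (hypothetical data and names,
not the actual bench-pthread-locks output):

# Collapse {function: {variant: {subvariant: results}}} into
# {function: {"variant-subvariant": results}} so validation passes.
nested = {"locks": {"mutex": {"empty": [1.2e9], "filler": [2.4e9]}}}
flat = {}
for function, variants in nested.items():
    flat[function] = {}
    for variant, subvariants in variants.items():
        for subvariant, results in subvariants.items():
            flat[function]["%s-%s" % (variant, subvariant)] = results
# flat == {"locks": {"mutex-empty": [1.2e9], "mutex-filler": [2.4e9]}}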
| |
No bug. This commit expands the range of tests / benchmarks for
memmove and memcpy. The test expansion mostly consists of
increasing the maximum size, increasing the number of unique
alignments tested, and testing both source < destination and vice
versa. The benchmark expansion just increases the number of
unique alignments. test-memcpy, test-memccpy, test-mempcpy,
test-memmove, and tst-memmove-overflow all pass.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
| |
This patch adds workload traces for all double format functions where such
files are missing. For each function, a set of 1000 random values is
generated using SageMath, such that the output values are
meaningful (for example avoiding too large inputs for exp10 where the
output would be +Inf). More details about the generated values are
given at the beginning of each file.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
| |
Add a benchtest for ilogb, ilogbf and ilogbf128 based on the logb* benchtests.
| |
This patch updates the JSON "bench-variant" attribute of "bench-memset.c"
to "default" so that the script "benchtests/scripts/plot_strings.py"
can generate a file "memset_time_default_linear.png".
Without this patch, the script "benchtests/scripts/plot_strings.py"
generates a file "memset_time__linear.png", whose name is inconsistent
with "memcpy_time_default_linear.png" and
"memmove_time_default_linear.png".
| |
This patch adds additional benchmarks and tests for a string size of
4096 and several benchmarks for string size 256 with different
alignments.
| |
Since commit 2682695e5c7a, `make bench-build' with `--enable-static-pie'
fails due to bench-timing-type being incorrectly built with MODULE_NAME
set to `libc'. This commit sets MODULE_NAME to nonlib, thus fixing the
build failure.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
| |
GNU ld and gold have supported --print-output-format since 2011. glibc
requires binutils>=2.25 (2015), so if LD is GNU ld or gold, we can
assume the option is supported.
lld is by default a cross linker supporting multiple targets. It
auto-detects the file format and does not need OUTPUT_FORMAT. It does
not support --print-output-format.
By parsing objdump -f, we can support all three linkers.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
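A rough illustration of the objdump -f approach (sketched in Python
rather than the configure-time shell; the helper name is made up):

import re
import subprocess

def bfd_output_format(objfile, objdump='objdump'):
    """Return the BFD name objdump reports for OBJFILE, e.g. 'elf64-x86-64'
    from a line such as 'conftest.o:     file format elf64-x86-64'."""
    out = subprocess.run([objdump, '-f', objfile],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r'file format (\S+)', out)
    return match.group(1) if match else None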
| |
I used these shell commands:
../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")
and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 6694 files FOO.
I then removed trailing white space from benchtests/bench-pthread-locks.c
and iconvdata/tst-iconv-big5-hkscs-to-2ucs4.c, to work around this
diagnostic from Savannah:
remote: *** pre-commit check failed ...
remote: *** error: lines with trailing whitespace found
remote: error: hook declined to update refs/heads/master
| |
Performance benchmarks for various POSIX locks: mutex, rwlock,
spinlock, condvar, and semaphore. Each test is performed with
an empty loop body or with a computationally "interesting" filler
(i.e. difficult to optimize away, used just to allow the lock code
to be "hidden" in the filler's CPU cycles).
| |
Add strcmp workloads on page boundary.
| |
Add strncmp workloads on page boundary.
| |
__float128 is a non-standard name and is not available on some architectures
(like aarch64 or s390x) even though they may support the standard _Float128
type. Other architectures (like armv7) don't support quad-precision
floating-point operations at all.
This commit replaces benchtests references to __float128 with _Float128 and
runs the corresponding tests only on architectures that support it.
| |
This patch adds workload traces for sinf128 in binary128. The trace is
made of 1000 random numbers, generated with SageMath.
| |
This patch adds workload traces for sinf in binary32. The trace is
made of 1000 random numbers, generated with SageMath.
| |
This patch adds workload traces for sin in binary64. The trace is
made of 1000 random numbers, generated with SageMath.
| |
This patch adds workload traces for pow in binary128. The trace is
made of 1000 random numbers, generated with SageMath.
| |
This patch adds workload traces for pow in binary64. The trace is
made of 1000 random numbers, generated with SageMath.
| |
This patch adds workload traces for exp in binary128. The trace is
made of 1000 random numbers, generated with SageMath.
| |
This patch adds workload traces for exp in binary64. The trace is
made of 1000 random numbers, generated with SageMath.
| |
Improve documentation of the 'name' directive and the 'workload' mechanism.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
| |
Sun RPC was removed from glibc. This includes the rpcgen program,
librpcsvc, and the Sun RPC headers. The test for bug #20790 (a test
for rpcgen) was also removed.
Backward compatibility for old programs is kept only for architectures
and ABIs that have been added in or before version 2.28.
libtirpc is mature enough; librpcsvc and rpcgen are provided in the
rpcsvc-proto project.
NOTE: libnsl code depends on Sun RPC (the installed libnsl headers use
installed Sun RPC headers), thus --enable-obsolete-rpc was a dependency
for --enable-obsolete-nsl (removed in a previous commit).
The arc ABI list file has to be updated because the port was added
with the sunrpc symbols.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
| |
It is based on the expf trace, converting each line with the formula:
new_val = (float) log10 (exp ((double) old_val))
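A minimal sketch of that conversion in Python (file handling and the
hex-float formatting used by the actual *-inputs files are omitted):

import math
import struct

def convert(old_val):
    # new_val = (float) log10 (exp ((double) old_val))
    new_val = math.log10(math.exp(old_val))              # computed in double
    # Mimic the final (float) cast by rounding to single precision.
    return struct.unpack('f', struct.pack('f', new_val))[0]

print(convert(1.0))   # an expf input of 1.0 becomes a log10f input of ~0.434294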
| |
commit 7621e38bf3c58b2d0359545f1f2898017fd89d05
Author: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Date: Tue Jan 29 17:43:45 2019 +0000
Add generic hp-timing support
removed the clock_gettime option. Restore the clock_gettime option for
some x86 CPUs on which the value from RDTSC may not be incremented at a
fixed rate.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
| |
commit e9698175b0b60407db1e89bcf29437ab224bca0b
Author: Lukasz Majewski <lukma@denx.de>
Date: Mon Mar 16 08:31:41 2020 +0100
y2038: Replace __clock_gettime with __clock_gettime64
breaks benchtests with sysdeps/generic/hp-timing.h:
In file included from ./bench-timing.h:23,
from ./bench-skeleton.c:25,
from
/export/build/gnu/tools-build/glibc-gitlab/build-x86_64-linux/benchtests/bench-rint.c:45:
./bench-skeleton.c: In function ‘main’:
../sysdeps/generic/hp-timing.h:37:23: error: storage size of ‘tv’ isn’t known
37 | struct __timespec64 tv; \
| ^~
Define HP_TIMING_NOW with clock_gettime in sysdeps/generic/hp-timing.h
if _ISOMAC is defined. Don't define __clock_gettime in bench-timing.h
since it is no longer needed.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
| |
The internal __clock_gettime function does not support 64-bit time on
architectures with __WORDSIZE == 32 and __TIMESIZE != 64 (e.g. 32-bit
ARM).
The __clock_gettime64 function should be used instead within glibc
itself, as it supports 64-bit time on those systems.
This patch does not bring any changes to systems with __WORDSIZE == 64,
as for them __clock_gettime64 is aliased to __clock_gettime (in
./include/time.h).
| |
This patch adds benchtests for the roundeven and roundevenf functions.
The inputs are copied from trunc-inputs.
| |
Change all of the #! lines in Python scripts that are called from
Makefiles to reference /usr/bin/python3.
All of the scripts called from Makefiles are already run with Python 3,
so let's make sure they are explicitly using Python 3 if called
manually.
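For example, the first line of each affected script becomes:

#!/usr/bin/python3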
| |
Improve the random memcpy benchmark. Double the number of copies and
increase the memory sizes tested to 512KB. Add a more detailed
distribution of memcpy alignment and sizes up to 4096 based on SPEC2017
traces.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
| |
benchtests/timing-type is built with the newly built libc, so it
should be run with it, like the actual tests and benchmarks.
| |
| |
Add a script for visualizing the JSON output generated by existing
glibc string microbenchmarks.
Overview:
plot_strings.py is capable of plotting benchmark results in the
following formats, which are controlled with the -p or --plot argument:
1. absolute timings (-p time): plot the timings as they are in the
input benchmark results file.
2. relative timings (-p rel): plot relative timing difference with
respect to a chosen ifunc (controlled with -b argument).
3. performance relative to max (-p max): for each varied parameter
value, plot 1/timing as the percentage of the maximum value out of
the plotted ifuncs.
4. throughput (-p thru): plot varied parameter value over timing
For all types of graphs, there is an option to explicitly specify
the subset of ifuncs to plot using the --ifuncs parameter.
For plot types 1. and 4. one can hide/expose exact benchmark figures
using the --values flag.
When plotting relative timing differences between ifuncs, the first
ifunc listed in the input JSON file is the baseline, unless the
baseline implementation is explicitly chosen with the --baseline
parameter. For ease of reading, the script marks the statistically
insignificant range on the graphs. The default is +-5% but this
value can be controlled with the --threshold parameter.
To accommodate the heterogeneity in benchmark results files,
one can control e.g. the x-axis scale, the resolution (dpi) of the
generated figures, or the key used to access the varied parameter value
in the JSON file. The corresponding options are --logarithmic,
--resolution and --key. The --key parameter ensures that plot_strings.py
works with all files which pass JSON schema validation. The schema
can be chosen with the --schema parameter.
If a window manager is available, one can enable interactive
figure display using the --display flag.
Finally, one can use the --grid flag to enable grid lines in the
generated figures.
Implementation:
plot_strings.py traverses the JSON tree until a 'results' array
is found and generates a separate figure for each such array.
The figure is then saved to a file in one of the available formats
(controlled with the --extension parameter).
As the tree is traversed, the recursive function tracks the metadata
about the test being run, so that each figure has a unique and
meaningful title and filename.
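A rough sketch of that traversal (hypothetical helper names, not the
actual plot_strings.py code):

import json

def walk(node, metadata, plot_results):
    """Recurse through the benchout JSON until a 'results' array is found,
    carrying the keys seen on the way down as figure metadata."""
    if isinstance(node, dict):
        if 'results' in node:
            plot_results(node['results'], metadata)
            return
        for key, child in node.items():
            walk(child, metadata + [str(key)], plot_results)
    elif isinstance(node, list):
        for child in node:
            walk(child, metadata, plot_results)

def demo_plot(results, metadata):
    # A real implementation would draw and save a figure; the accumulated
    # metadata gives it a unique title and filename.
    print('_'.join(metadata), '->', len(results), 'data points')

bench = json.loads('{"functions": {"memcpy": {"bench-variant": "default", '
                   '"results": [{"length": 1, "timings": [3.1, 3.3]}]}}}')
walk(bench, [], demo_plot)      # prints: functions_memcpy -> 1 data points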
While plot_strings.py works with existing benchmarks, provisions
have been made to allow adding more structure and metadata to these
benchmarks. Currently, many benchmarks produce multiple timing values
for the same value of the varied parameter (typically 'length').
Multiple data points for the same parameter usually mean that some other
parameter was varied as well, for example, whether memmove's src and dst
buffers overlap or not (see bench-memmove-walk.c and
bench-memmove-walk.out).
Unfortunately, this information is not exposed in the benchmark output
file, so plot_strings.py has to resort to computing the geometric mean
of these multiple values. In the process, useful information about the
benchmark configuration is lost. Also, averaging the timings for
different alignments can hide useful characteristics of the benchmarked
ifuncs.
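For instance, the collapsing step amounts to something like this
(illustrative values):

import math

# Several timings reported for the same length (e.g. different alignments)
# are reduced to a single value via the geometric mean.
timings = [13.8, 14.2, 29.5]
geo_mean = math.exp(sum(math.log(t) for t in timings) / len(timings))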
Testing:
plot_strings.py has been tested on all existing string microbenchmarks
which produce results in JSON format. The script was tested on both
Windows 10 and Ubuntu 16.04.2 LTS. It runs on both python 2 and 3
(2.7.12 and 3.5.12 tested).
Useful commands:
1. Plot timings for all ifuncs in bench-strlen.out:
$ ./plot_strings.py bench-strlen.out
2. Display help:
$ ./plot_strings.py -h
3. Plot throughput for __memset_avx512_unaligned_erms and
__memset_avx512_unaligned. Save the generated figure in pdf format to
'results/'. Use logarithmic x-axis scale, show grid lines and expose
the performance numbers:
$ ./plot_strings.py bench.out -o results/ -lgv -e pdf -p thru \
-i __memset_avx512_unaligned_erms __memset_avx512_unaligned
4. Plot relative timings for all ifuncs in bench.out with __generic_memset
as baseline. Display percentage difference threshold of +-10%:
$ ./plot_strings.py bench.out -p rel -b __generic_memset -t 10
Discussion:
1. I would like to propose relaxing the benchout_strings.schema.json
to allow specifying either a 'results' array with 'timings' (as before)
or a 'variants' array. See the example below:
{
  "timing_type": "hp_timing",
  "functions": {
    "memcpy": {
      "bench-variant": "default",
      "ifuncs": ["generic_memcpy", "__memcpy_thunderx"],
      "variants": [
        {
          "name": "powers of 2",
          "variants": [
            {
              "name": "both aligned",
              "results": [
                {
                  "length": 1,
                  "align1": 0,
                  "align2": 0,
                  "timings": [x, y]
                },
                {
                  "length": 2,
                  "align1": 0,
                  "align2": 0,
                  "timings": [x, y]
                },
                ...
                {
                  "length": 65536,
                  "align1": 0,
                  "align2": 0,
                  "timings": [x, y]
                }]
            },
            {
              "name": "dst misaligned",
              "results": [
                {
                  "length": 1,
                  "align1": 0,
                  "align2": 0,
                  "timings": [x, y]
                },
                {
                  "length": 2,
                  "align1": 0,
                  "align2": 1,
                  "timings": [x, y]
                },
                ...
The 'variants' array consists of objects such that each object has a 'name'
attribute to describe the configuration of a particular test in the
benchmark. This can be a description, for example, of how the parameter
was varied or which buffer alignment was tested. The 'name' attribute
is then followed by another 'variants' array or a 'results' array.
The nesting of variants allows arbitrary grouping of benchmark timings,
while allowing description of these groups. Using recursion, it is
possible to procedurally create titles and filenames for the figures
being generated.