High system load can skew benchmark results. By including system load averages
in the library's output, we help users identify a potential issue in the
quality of their measurements, and thus assist them in producing better (more
reproducible) results.
I got the idea for this from Brendan Gregg's checklist for benchmark accuracy
(http://www.brendangregg.com/blog/2018-06-30/benchmarking-checklist.html).
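For illustration, a minimal sketch of how load averages can be obtained on POSIX-like systems via getloadavg(); this just shows the idea and is not necessarily how the library itself reads them:
```
#include <cstdio>
#include <cstdlib>  // getloadavg() is a BSD extension available on Linux, BSD, and macOS

int main() {
  double loadavg[3] = {0.0, 0.0, 0.0};
  // Ask for the 1-, 5-, and 15-minute load averages.
  if (getloadavg(loadavg, 3) == 3) {
    std::printf("Load Average: %.2f, %.2f, %.2f\n",
                loadavg[0], loadavg[1], loadavg[2]);
  } else {
    std::printf("Load average unavailable on this platform\n");
  }
  return 0;
}
```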
* Set -Wno-deprecated-declarations for Intel
The Intel compiler silently ignores -Wno-deprecated-declarations,
so warning #1786 must be suppressed explicitly.
* Make std::int64_t → double casts explicit
While std::int64_t → double is a perfectly conformant
implicit conversion, the Intel compiler warns about it.
Make these conversions explicit via static_cast<double>.
* Make std::int64_t → int casts explicit
The Intel compiler warns about emplacing a std::int64_t
into a container of int. Just make the conversion explicit
via static_cast<int>, as in the sketch below.
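Roughly, the kind of change both items amount to (the function and variable names below are made up for illustration):
```
#include <cstdint>
#include <vector>

void Example(std::int64_t iterations, std::vector<int>& counts) {
  // Previously an implicit std::int64_t -> double conversion:
  double seconds = static_cast<double>(iterations) * 1e-9;
  // Previously an implicit std::int64_t -> int conversion on emplace:
  counts.emplace_back(static_cast<int>(iterations));
  (void)seconds;  // silence unused-variable warnings in this sketch
}
```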
* Cleanup Intel -Wno-deprecated-declarations workaround logic
Inspired by these [two](a1ebe07bea) [bugs](0891555be5) in my code, which I have since found and fixed, and which were caused by the lack of these flags:
* `kIsIterationInvariant` - `* state.iterations()`
The value is constant for every iteration, and needs to be **multiplied** by the iteration count.
* `kAvgIterations` - `/ state.iterations()`
The value is global over all iterations, and needs to be **divided** by the iteration count.
They play nice with `kIsRate`:
* `kIsIterationInvariantRate`
* `kAvgIterationsRate`.
I'm not sure how meaningful they are when combined with `kAvgThreads`.
I guess `kIsThreadInvariant` could be added, too, for symmetry with `kAvgThreads`.
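For illustration, a rough sketch of how these flags might be used (the counter names and values here are made up):
```
#include <benchmark/benchmark.h>

static void BM_Example(benchmark::State& state) {
  for (auto _ : state) {
    // ... code being measured ...
  }
  // Constant for every iteration: reported as value * state.iterations().
  state.counters["BytesPerIter"] =
      benchmark::Counter(64, benchmark::Counter::kIsIterationInvariant);
  // Global over all iterations: reported as value / state.iterations().
  state.counters["SetupCost"] =
      benchmark::Counter(1000, benchmark::Counter::kAvgIterations);
  // Combined with kIsRate via the convenience flag:
  state.counters["BytesPerSecond"] =
      benchmark::Counter(64, benchmark::Counter::kIsIterationInvariantRate);
}
BENCHMARK(BM_Example);
BENCHMARK_MAIN();
```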
As previously discussed, let's flip the switch ^^.
This exposes the problem that the U test will now be run
for everyone, even for those who did not read the help
about the recommended repetition count.
This is not good. So I think we can do the smart thing:
```
$ ./compare.py benchmarks gbench/Inputs/test3_run{0,1}.json
Comparing gbench/Inputs/test3_run0.json to gbench/Inputs/test3_run1.json
Benchmark                     Time           CPU       Time Old       Time New        CPU Old        CPU New
--------------------------------------------------------------------------------------------------------
BM_One                     -0.1000       +0.1000             10              9            100            110
BM_Two                     +0.1111       -0.0111              9             10             90             89
BM_Two                     +0.2500       +0.1125              8             10             80             89
BM_Two_pvalue               0.2207        0.6831      U Test, Repetitions: 2. WARNING: Results unreliable! 9+ repetitions recommended.
BM_Two_stat                +0.0000       +0.0000              8              8             80             80
```
(old screenshot)
![image](https://user-images.githubusercontent.com/88600/41502182-ea25d872-71bc-11e8-9842-8aa049509b14.png)
Or, in the good case (noise omitted):
```
$ ./compare.py benchmarks /tmp/run{0,1}.json
Comparing /tmp/run0.json to /tmp/run1.json
Benchmark                                                  Time           CPU       Time Old       Time New        CPU Old        CPU New
---------------------------------------------------------------------------------------------------------------------------------
<99 more rows like this>
./_T012014.RW2/threads:8/real_time                      +0.0160       +0.0596             46             47             10             10
./_T012014.RW2/threads:8/real_time_pvalue                0.0000        0.0000      U Test, Repetitions: 100
./_T012014.RW2/threads:8/real_time_mean                 +0.0094       +0.0609             46             47             10             10
./_T012014.RW2/threads:8/real_time_median               +0.0104       +0.0613             46             46             10             10
./_T012014.RW2/threads:8/real_time_stddev               -0.1160       -0.1807              1              1              0              0
```
(old screenshot)
![image](https://user-images.githubusercontent.com/88600/41502185-fb8193f4-71bc-11e8-85fa-cbba83e39db4.png)
* Fix compilation on Android with GNU STL
GNU STL in the Android NDK lacks the C++11 string conversion functions, including std::stoul, std::stoi, and std::stod.
This patch reimplements these functions in the benchmark:: namespace using C-style equivalents from C++03.
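A possible shape of such a replacement, using std::strtoul from <cstdlib> (error handling is simplified; the real implementation in string_util may differ):
```
#include <cstdlib>
#include <stdexcept>
#include <string>

namespace benchmark {
// C++03-compatible stand-in for std::stoul.
unsigned long stoul(const std::string& str, size_t* pos = NULL, int base = 10) {
  const char* begin = str.c_str();
  char* end = NULL;
  unsigned long result = std::strtoul(begin, &end, base);
  if (end == begin) throw std::invalid_argument("stoul: no conversion");
  if (pos != NULL) *pos = static_cast<size_t>(end - begin);
  return result;
}
}  // namespace benchmark
```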
* Avoid use of log2 which doesn't exist in Android GNU STL
GNU STL in the Android NDK lacks the log2 function from C99/C++11.
This patch replaces its use in the code with the double log(double) function.
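The identity being relied on is log2(n) == log(n) / log(2), e.g. (the function name here is illustrative):
```
#include <cmath>

inline double Log2(double n) {
  return std::log(n) / std::log(2.0);
}
```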
* format all documents according to contributor guidelines and specifications
use clang-format on/off to stop formatting when it makes excessively poor decisions
* format all tests as well, and mark blocks which change too much
As @dominichamon and I have discussed, the current reporter interface
is poor at best. And something should be done to fix it.
I strongly suspect such a fix will require an entire reimagining
of the API, and therefore breaking backwards compatibility fully.
For that reason we should start deprecating and removing parts
that we don't intend to replace. One of these parts, I argue,
is the CSVReporter. I propose that the new reporter interface
should choose a single output format (JSON) and traffic entirely
in that. If somebody really wanted to replace the functionality
of the CSVReporter they would do so as an external tool which
transforms the JSON.
For these reasons I propose deprecating the CSVReporter.
The first problem you have to solve yourself. The second one can be aided.
The benchmark library can compute some statistics over the repetitions,
which helps with grasping the results somewhat.
But that is only for the one set of results. It does not really help to compare
the two benchmark results, which is the interesting bit. Thankfully, there are
these bundled `tools/compare.py` and `tools/compare_bench.py` scripts.
They can provide a diff between two benchmarking results. Yay!
Except not really: it's just a diff. While it is very informative and better than
nothing, it does not really help answer The Question - am I just looking at noise?
It's like not having these per-benchmark statistics...
Roughly, we can formulate the question as:
> Are these two benchmarks the same?
> Did my change actually change anything, or is the difference below the noise level?
Well, this really sounds like a [null hypothesis](https://en.wikipedia.org/wiki/Null_hypothesis), does it not?
So maybe we can use statistics here, and solve all our problems?
lol, no, it won't solve all the problems. But maybe it will act as a tool,
to better understand the output, just like the usual statistics on the repetitions...
I'm making an assumption here that most people care about the change
of the average value, not the standard deviation. Thus I believe we can use a t-test,
be it either [Student's t-test](https://en.wikipedia.org/wiki/Student%27s_t-test) or [Welch's t-test](https://en.wikipedia.org/wiki/Welch%27s_t-test).
**EDIT**: however, after @dominichamon's review, it was decided that it is better
to use the more robust [Mann–Whitney U test](https://en.wikipedia.org/wiki/Mann–Whitney_U_test).
I'm using [scipy.stats.mannwhitneyu](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mannwhitneyu.html#scipy.stats.mannwhitneyu).
There are two new user-facing knobs:
```
$ ./compare.py --help
usage: compare.py [-h] [-u] [--alpha UTEST_ALPHA]
                  {benchmarks,filters,benchmarksfiltered} ...
versatile benchmark output compare tool
<...>
optional arguments:
  -h, --help           show this help message and exit
  -u, --utest          Do a two-tailed Mann-Whitney U test with the null
                       hypothesis that it is equally likely that a randomly
                       selected value from one sample will be less than or
                       greater than a randomly selected value from a second
                       sample. WARNING: requires **LARGE** (9 or more)
                       number of repetitions to be meaningful!
  --alpha UTEST_ALPHA  significance level alpha. if the calculated p-value is
                       below this value, then the result is said to be
                       statistically significant and the null hypothesis is
                       rejected. (default: 0.0500)
```
Example output:
![screenshot_20180512_175517](https://user-images.githubusercontent.com/88600/39958581-ae897924-560d-11e8-81b9-806db6c3e691.png)
As you can guess, the alpha does not affect anything but the coloring of the computed p-values.
If it is green, then the change in the average values is statistically significant.
I'm detecting the repetitions by matching name. This way, no changes to the json are _needed_.
Caveats:
* This won't work if the json is not in the same order as outputted by the benchmark,
or if the parsing does not retain the ordering.
* This won't work if, after the grouped repetitions, there isn't at least one row with
a different name (e.g. a statistic). Since there isn't a knob to disable printing of statistics
(only the other way around), I'm not too worried about this.
* **The results will be wrong if the repetition count is different between the two benchmarks being compared.**
* Even though I have added (hopefully full) test coverage, the code of these python tools is starting
to look a bit jumbled.
* So far I have added this only to `tools/compare.py`.
Should I add it to `tools/compare_bench.py` too?
Or should we deduplicate them (by removing the latter one)?
* Add benchmark_main library with support for Bazel.
* fix newline at end of file
* Add CMake support for benchmark_main.
* Mention optionally using benchmark_main in README.
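For example, a benchmark translation unit that links against benchmark_main no longer needs to spell out main() itself (the benchmark below is just an illustration):
```
#include <string>

#include <benchmark/benchmark.h>

static void BM_StringCopy(benchmark::State& state) {
  std::string x = "hello";
  for (auto _ : state) {
    std::string copy(x);
    benchmark::DoNotOptimize(copy);
  }
}
BENCHMARK(BM_StringCopy);
// Note: no BENCHMARK_MAIN() here; linking against benchmark_main supplies main().
```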
* Correct/clarify build/install instructions
GTest is Google Test; don't obfuscate needlessly for newcomers.
Adding Google Test to the installation guide helps newcomers.
The third option under the line "Note that Google Benchmark requires Google Test to build and run the tests. This
dependency can be provided three ways:" was not true (it did not occur).
If a further option needs to be specified in order for that functionality to work, it needs to be documented.
* Add prerequisite knowledge section
A lot of assumptions are made about the reader in the documentation. This is unfortunate.
* Removal of abbreviations for google test
Git was being executed in the current directory, so it could not get the
latest tag if cmake was run from a build directory. Force git to be
run from within the source directory.
Note, bazel only supports MSVC on Windows, and not MinGW, so
linking against shlwapi.lib only needs to follow MSVC conventions.
git_repository() did not work in local testing, so it was swapped for
http_archive(). The latter is also documented as the preferred way
to depend on an external library in bazel.
* Allow support for negative regex filtering
This patch allows one to apply a negation to the entire regex filter
by prefixing it with a '-' character, in much the same style as
GoogleTest uses.
* Address issues in PR
* Add unit tests for negative filtering
Before this change, we would report the number of requested iterations
passed to the state. After, we will report the actual number run. As a
side-effect, instead of multiplying the expected iterations by the
number of threads to get the total number, we can report the actual
number of iterations across all threads, which takes into account the
situation where some threads might run more iterations than others.
* Ensure 64-bit truncation doesn't happen for complexity results
* One more complexity_n 64-bit fix
* Missed another vector of int
* Piping through the int64_t
* Allow AddRange to work with int64_t.
Fixes #516
Also, tweak how we manage per-test build needs, and create a standard
_gtest suffix for googletest to differentiate from non-googletest tests.
I also ran clang-format on the files that I changed (but not the
benchmark include or main src as they have too many clang-format
issues).
* Add benchmark_gtest to cmake
* Set(Items|Bytes)Processed now take int64_t
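For instance, a benchmark whose range and processed-bytes math now stays in 64 bits throughout (the benchmark itself is illustrative):
```
#include <cstdint>
#include <cstring>
#include <vector>

#include <benchmark/benchmark.h>

static void BM_memcpy(benchmark::State& state) {
  std::vector<char> src(static_cast<size_t>(state.range(0)), 'x');
  std::vector<char> dst(src.size());
  for (auto _ : state) {
    std::memcpy(dst.data(), src.data(), src.size());
    benchmark::ClobberMemory();
  }
  // Both operands are int64_t, so the product no longer truncates through int.
  state.SetBytesProcessed(static_cast<int64_t>(state.iterations()) *
                          state.range(0));
}
BENCHMARK(BM_memcpy)->Range(8, 8 << 20);
BENCHMARK_MAIN();
```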
* Add tests to verify assembler output -- Fix DoNotOptimize.
For things like `DoNotOptimize`, `ClobberMemory`, and even `KeepRunning()`,
exactly what assembly they generate is important. However, we currently
have no way to test this. Instead it must be manually validated every
time a change occurs -- including a change in compiler version.
This patch attempts to introduce a way to test the assembled output automatically.
It mirrors how LLVM verifies compiler output, and it uses LLVM FileCheck to run
the tests in a similar way.
The tests function by generating the assembly for a test in CMake, and then
using FileCheck to verify the // CHECK lines in the source file are found
in the generated assembly.
Currently, the tests only run on 64-bit x86 systems under GCC and Clang,
and when FileCheck is found on the system.
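A rough sketch of what such a test looks like; the exact CHECK patterns and function names in the real tests may differ:
```
#include <benchmark/benchmark.h>

extern "C" int test_do_not_optimize(int x) {
  benchmark::DoNotOptimize(x);
  return x;
}
// FileCheck verifies that these patterns appear in the generated assembly:
// CHECK-LABEL: test_do_not_optimize:
// CHECK: ret
```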
Additionally, this patch tries to improve the code gen from DoNotOptimize.
This should probably be a separate change, but I needed something to test.
* Disable assembly tests on Bazel for now
* Link FIXME to github issue
* Fix Tests on OS X
* fix strip_asm.py to work on both Linux and OS X like targets
Having the copts set on a per-target level can lead to ODR violations
in some cases. Avoid this by ensuring the regex engine is picked
through compiler intrinsics in the header directly.
This patch disables the -Winvalid-offsetof warning for GCC and Clang
when using it to check the cache lines of the State object.
Technically this usage of offsetof is undefined behavior until C++17.
However, all major compilers support this application as an extension,
as demonstrated by the passing static assert (If a compiler encounters UB
during evaluation of a constant expression, that UB must be diagnosed).
Unfortunately, Clang and GCC also produce a warning about it.
This patch temporarily suppresses the warning using #pragmas in the
source file (instead of globally suppressing the warning in the build systems).
This way the warning is ignored for both CMake and Bazel builds without
having to modify either build system.
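A minimal sketch of the suppression pattern; the struct and the cache-line constant are illustrative stand-ins for the library's State class:
```
#include <cstddef>

struct FakeState {   // mixed access specifiers => not standard-layout,
 public:             // so offsetof on it is technically UB before C++17
  int counter_;
  bool started() const { return started_; }
 private:
  bool started_;
};

#if defined(__GNUC__)  // Clang also defines __GNUC__ and honors this pragma
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Winvalid-offsetof"
#endif
static_assert(offsetof(FakeState, counter_) < 64,
              "hot members should stay within the first cache line");
#if defined(__GNUC__)
#pragma GCC diagnostic pop
#endif
```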
Older CMake versions, in particular 2.8, don't seem to correctly handle
interface include directories. This causes failures when building the
tests. Additionally, older CMake versions use a different library install
directory than expected (i.e. they use lib/<target-triple>). This caused
certain tests to fail to link.
This patch fixes both those issues. The first by manually adding the
correct include directory when building the tests. The second by specifying
the library output directory when configuring the GTest build.
* Add myself to CONTRIBUTORS under the corp CLA for Stripe, Inc.
* Add support for building with Bazel.
Limitations compared to existing CMake rules:
* Defaults to using C++11 `<regex>`, with an override via Bazel flag
`--define` of `google_benchmark.have_regex`. The TravisCI config sets
the regex implementation to `posix` because it uses ancient compilers.
* Debug vs Opt mode can't be set per test. TravisCI runs all the tests
in debug mode to satisfy `diagnostics_test`, which depends on `CHECK`
being live.
* Set Bazel workspace name so other repos can refer to it by stable name.
This is recommended by the Bazel style guide to avoid each dependent
workspace defining its own name for the dependency.
* Rename StringXxx to StrXxx in string_util.h and its users
This makes the naming consistent within string_util and moves toward the
Abseil convention.
* Style guide requires 2 spaces before end-of-line "//" comments
* Rename StrPrintF/StringPrintF to StrFormat for absl compatibility.