# Benchmark Tools

## compare.py

The `compare.py` script can be used to compare the results of benchmarks.

### Dependencies

The utility relies on the [scipy](https://www.scipy.org) package, which can be installed using pip:
```bash
pip3 install -r requirements.txt
```

### Displaying aggregates only

The switch `-a` / `--display_aggregates_only` can be used to control the
display of the normal iterations vs. the aggregates. When passed, it is
forwarded to the benchmark binaries being run and is also taken into account
by the tool itself; only the aggregates are displayed, not the normal runs.
It only affects the display; the separate runs are still used to calculate
the U test.

### Modes of operation

There are three modes of operation:

1. Just compare two benchmarks
The program is invoked like:

``` bash
$ compare.py benchmarks <benchmark_baseline> <benchmark_contender> [benchmark options]...
```
Where `<benchmark_baseline>` and `<benchmark_contender>` either specify a benchmark executable file or a JSON output file. The type of the input file is detected automatically. If a benchmark executable is specified, the benchmark is run to obtain the results; otherwise the results are simply loaded from the output file.

`[benchmark options]` will be passed to the benchmark invocations. They can be anything the binary accepts, be it normal `--benchmark_*` parameters or custom parameters your binary takes.

Example output:
```
$ ./compare.py benchmarks ./a.out ./a.out
RUNNING: ./a.out --benchmark_out=/tmp/tmprBT5nW
Run on (8 X 4000 MHz CPU s)
2017-11-07 21:16:44
------------------------------------------------------
Benchmark                Time           CPU Iterations
------------------------------------------------------
BM_memcpy/8             36 ns         36 ns   19101577   211.669MB/s
BM_memcpy/64            76 ns         76 ns    9412571   800.199MB/s
BM_memcpy/512           84 ns         84 ns    8249070   5.64771GB/s
BM_memcpy/1024         116 ns        116 ns    6181763   8.19505GB/s
BM_memcpy/8192         643 ns        643 ns    1062855   11.8636GB/s
BM_copy/8              222 ns        222 ns    3137987   34.3772MB/s
BM_copy/64            1608 ns       1608 ns     432758   37.9501MB/s
BM_copy/512          12589 ns      12589 ns      54806   38.7867MB/s
BM_copy/1024         25169 ns      25169 ns      27713   38.8003MB/s
BM_copy/8192        201165 ns     201112 ns       3486   38.8466MB/s
RUNNING: ./a.out --benchmark_out=/tmp/tmpt1wwG_
Run on (8 X 4000 MHz CPU s)
2017-11-07 21:16:53
------------------------------------------------------
Benchmark                Time           CPU Iterations
------------------------------------------------------
BM_memcpy/8             36 ns         36 ns   19397903   211.255MB/s
BM_memcpy/64            73 ns         73 ns    9691174   839.635MB/s
BM_memcpy/512           85 ns         85 ns    8312329   5.60101GB/s
BM_memcpy/1024         118 ns        118 ns    6438774   8.11608GB/s
BM_memcpy/8192         656 ns        656 ns    1068644   11.6277GB/s
BM_copy/8              223 ns        223 ns    3146977   34.2338MB/s
BM_copy/64            1611 ns       1611 ns     435340   37.8751MB/s
BM_copy/512          12622 ns      12622 ns      54818   38.6844MB/s
BM_copy/1024         25257 ns      25239 ns      27779   38.6927MB/s
BM_copy/8192        205013 ns     205010 ns       3479    38.108MB/s
Comparing ./a.out to ./a.out
Benchmark                 Time             CPU      Time Old      Time New       CPU Old       CPU New
-------------------------------------------------------------------------------------------------------
BM_memcpy/8            +0.0020         +0.0020            36            36            36            36
BM_memcpy/64           -0.0468         -0.0470            76            73            76            73
BM_memcpy/512          +0.0081         +0.0083            84            85            84            85
BM_memcpy/1024         +0.0098         +0.0097           116           118           116           118
BM_memcpy/8192         +0.0200         +0.0203           643           656           643           656
BM_copy/8              +0.0046         +0.0042           222           223           222           223
BM_copy/64             +0.0020         +0.0020          1608          1611          1608          1611
BM_copy/512            +0.0027         +0.0026         12589         12622         12589         12622
BM_copy/1024           +0.0035         +0.0028         25169         25257         25169         25239
BM_copy/8192           +0.0191         +0.0194        201165        205013        201112        205010
```

For every benchmark from the first run, the tool looks for the benchmark with exactly the same name in the second run and compares the two results. If the names differ, the benchmark is omitted from the diff.
Note that the values in the `Time` and `CPU` columns are calculated as `(new - old) / |old|`.
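
When both inputs are JSON output files, the numbers in the comparison table can also be reproduced by hand. The following is a minimal sketch of that arithmetic, not the actual `compare.py` implementation; it assumes two files `old.json` and `new.json` written with `--benchmark_out=<file>` in the default JSON format, whose benchmark entries carry `name`, `real_time` and `cpu_time` fields.

```python
import json

def load_results(path):
    """Map benchmark name -> result entry from a JSON output file."""
    with open(path) as f:
        return {b["name"]: b for b in json.load(f)["benchmarks"]}

def relative_diff(old, new):
    """The value shown in the Time/CPU columns: (new - old) / |old|."""
    return (new - old) / abs(old)

old_run = load_results("old.json")  # e.g. ./a.out --benchmark_out=old.json
new_run = load_results("new.json")  # e.g. ./a.out --benchmark_out=new.json

# Only benchmarks with exactly the same name in both runs are compared.
for name in sorted(old_run.keys() & new_run.keys()):
    old, new = old_run[name], new_run[name]
    print(f"{name:20} "
          f"time {relative_diff(old['real_time'], new['real_time']):+.4f}   "
          f"cpu {relative_diff(old['cpu_time'], new['cpu_time']):+.4f}")
```
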

2. Compare two different filters of one benchmark
The program is invoked like:

``` bash
$ compare.py filters <benchmark> <filter_baseline> <filter_contender> [benchmark options]...
```
Where `<benchmark>` either specifies a benchmark executable file or a JSON output file. The type of the input file is detected automatically. If a benchmark executable is specified, the benchmark is run to obtain the results; otherwise the results are simply loaded from the output file.

Where `<filter_baseline>` and `<filter_contender>` are the same regex filters that you would pass to the `[--benchmark_filter=<regex>]` parameter of the benchmark binary.

`[benchmark options]` will be passed to the benchmark invocations. They can be anything the binary accepts, be it normal `--benchmark_*` parameters or custom parameters your binary takes.

Example output:
```
$ ./compare.py filters ./a.out BM_memcpy BM_copy
RUNNING: ./a.out --benchmark_filter=BM_memcpy --benchmark_out=/tmp/tmpBWKk0k
Run on (8 X 4000 MHz CPU s)
2017-11-07 21:37:28
------------------------------------------------------
Benchmark                Time           CPU Iterations
------------------------------------------------------
BM_memcpy/8             36 ns         36 ns   17891491   211.215MB/s
BM_memcpy/64            74 ns         74 ns    9400999   825.646MB/s
BM_memcpy/512           87 ns         87 ns    8027453   5.46126GB/s
BM_memcpy/1024         111 ns        111 ns    6116853    8.5648GB/s
BM_memcpy/8192         657 ns        656 ns    1064679   11.6247GB/s
RUNNING: ./a.out --benchmark_filter=BM_copy --benchmark_out=/tmp/tmpAvWcOM
Run on (8 X 4000 MHz CPU s)
2017-11-07 21:37:33
----------------------------------------------------
Benchmark              Time           CPU Iterations
----------------------------------------------------
BM_copy/8            227 ns        227 ns    3038700   33.6264MB/s
BM_copy/64          1640 ns       1640 ns     426893   37.2154MB/s
BM_copy/512        12804 ns      12801 ns      55417   38.1444MB/s
BM_copy/1024       25409 ns      25407 ns      27516   38.4365MB/s
BM_copy/8192      202986 ns     202990 ns       3454   38.4871MB/s
Comparing BM_memcpy to BM_copy (from ./a.out)
Benchmark                              Time             CPU      Time Old      Time New       CPU Old       CPU New
--------------------------------------------------------------------------------------------------------------------
[BM_memcpy vs. BM_copy]/8           +5.2829         +5.2812            36           227            36           227
[BM_memcpy vs. BM_copy]/64         +21.1719        +21.1856            74          1640            74          1640
[BM_memcpy vs. BM_copy]/512       +145.6487       +145.6097            87         12804            87         12801
[BM_memcpy vs. BM_copy]/1024      +227.1860       +227.1776           111         25409           111         25407
[BM_memcpy vs. BM_copy]/8192      +308.1664       +308.2898           657        202986           656        202990
```

As you can see, the filter is applied to the benchmarks both when running them and before doing the diff. To make the diff work, the filter matches are replaced with a common string, so two different benchmark families within one benchmark binary can be compared.
Note that the values in the `Time` and `CPU` columns are calculated as `(new - old) / |old|`.
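
To make the diff possible, names from the two filtered runs have to be mapped onto a common key; in the output above that key is the `[BM_memcpy vs. BM_copy]/<size>` form. The following is a rough sketch of that idea only, not the actual `compare.py` code, assuming literal filter strings as in the example:

```python
import re

def unify_name(name, filter_baseline, filter_contender):
    """Replace whatever either filter matches with a common label,
    so that names from the two benchmark families line up for the diff."""
    common = f"[{filter_baseline} vs. {filter_contender}]"
    return re.sub(f"{filter_baseline}|{filter_contender}", common, name)

# Both names collapse to '[BM_memcpy vs. BM_copy]/8' and can be compared.
print(unify_name("BM_memcpy/8", "BM_memcpy", "BM_copy"))
print(unify_name("BM_copy/8",   "BM_memcpy", "BM_copy"))
```
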

3. Compare filter one from benchmark one to filter two from benchmark two
The program is invoked like:

``` bash
$ compare.py benchmarksfiltered <benchmark_baseline> <filter_baseline> <benchmark_contender> <filter_contender> [benchmark options]...
```

Where `<benchmark_baseline>` and `<benchmark_contender>` either specify a benchmark executable file or a JSON output file. The type of the input file is detected automatically. If a benchmark executable is specified, the benchmark is run to obtain the results; otherwise the results are simply loaded from the output file.

Where `<filter_baseline>` and `<filter_contender>` are the same regex filters that you would pass to the `[--benchmark_filter=<regex>]` parameter of the benchmark binary.

`[benchmark options]` will be passed to the benchmark invocations. They can be anything the binary accepts, be it normal `--benchmark_*` parameters or custom parameters your binary takes.

Example output:
```
$ ./compare.py benchmarksfiltered ./a.out BM_memcpy ./a.out BM_copy
RUNNING: ./a.out --benchmark_filter=BM_memcpy --benchmark_out=/tmp/tmp_FvbYg
Run on (8 X 4000 MHz CPU s)
2017-11-07 21:38:27
------------------------------------------------------
Benchmark                Time           CPU Iterations
------------------------------------------------------
BM_memcpy/8             37 ns         37 ns   18953482   204.118MB/s
BM_memcpy/64            74 ns         74 ns    9206578   828.245MB/s
BM_memcpy/512           91 ns         91 ns    8086195   5.25476GB/s
BM_memcpy/1024         120 ns        120 ns    5804513   7.95662GB/s
BM_memcpy/8192         664 ns        664 ns    1028363   11.4948GB/s
RUNNING: ./a.out --benchmark_filter=BM_copy --benchmark_out=/tmp/tmpDfL5iE
Run on (8 X 4000 MHz CPU s)
2017-11-07 21:38:32
----------------------------------------------------
Benchmark              Time           CPU Iterations
----------------------------------------------------
BM_copy/8            230 ns        230 ns    2985909   33.1161MB/s
BM_copy/64          1654 ns       1653 ns     419408   36.9137MB/s
BM_copy/512        13122 ns      13120 ns      53403   37.2156MB/s
BM_copy/1024       26679 ns      26666 ns      26575   36.6218MB/s
BM_copy/8192      215068 ns     215053 ns       3221   36.3283MB/s
Comparing BM_memcpy (from ./a.out) to BM_copy (from ./a.out)
Benchmark                              Time             CPU      Time Old      Time New       CPU Old       CPU New
--------------------------------------------------------------------------------------------------------------------
[BM_memcpy vs. BM_copy]/8           +5.1649         +5.1637            37           230            37           230
[BM_memcpy vs. BM_copy]/64         +21.4352        +21.4374            74          1654            74          1653
[BM_memcpy vs. BM_copy]/512       +143.6022       +143.5865            91         13122            91         13120
[BM_memcpy vs. BM_copy]/1024      +221.5903       +221.4790           120         26679           120         26666
[BM_memcpy vs. BM_copy]/8192      +322.9059       +323.0096           664        215068           664        215053
```
This is a mix of the previous two modes: two (potentially different) benchmark binaries are run, and a different filter is applied to each one.
Note that the values in the `Time` and `CPU` columns are calculated as `(new - old) / |old|`.

### Note: Interpreting the output

Performance measurements are an art, and performance comparisons are doubly so.
Results are often noisy and the absolute differences are not necessarily large,
so by visual inspection alone it is not at all apparent whether two measurements
actually show a performance change. It is even more confusing with multiple
benchmark repetitions.

Thankfully, we can use statistical tests on the results to determine whether
the performance has changed in a statistically significant way. `compare.py`
uses the [Mann–Whitney U
test](https://en.wikipedia.org/wiki/Mann%E2%80%93Whitney_U_test), with the null
hypothesis being that there is no difference in performance.

**The output below is a summary of a benchmark comparison with statistics
provided for a multi-threaded process.**
```
Benchmark                                              Time        CPU    Time Old    Time New     CPU Old     CPU New
------------------------------------------------------------------------------------------------------------------------
benchmark/threads:1/process_time/real_time_pvalue    0.0000     0.0000      U Test, Repetitions: 27 vs 27
benchmark/threads:1/process_time/real_time_mean     -0.1442    -0.1442          90          77          90          77
benchmark/threads:1/process_time/real_time_median   -0.1444    -0.1444          90          77          90          77
benchmark/threads:1/process_time/real_time_stddev   +0.3974    +0.3933           0           0           0           0
benchmark/threads:1/process_time/real_time_cv       +0.6329    +0.6280           0           0           0           0
OVERALL_GEOMEAN                                     -0.1442    -0.1442           0           0           0           0
```

--------------------------------------------

Here's a breakdown of each row:

**benchmark/threads:1/process_time/real_time_pvalue**: This shows the _p-value_ for
the statistical test comparing the performance of the process running with one
thread. A value of 0.0000 suggests a statistically significant difference in
performance. The comparison was conducted using the U Test (Mann-Whitney
U Test) with 27 repetitions for each case.

**benchmark/threads:1/process_time/real_time_mean**: This shows the relative
difference in mean execution time between the two cases. The negative value
(-0.1442) implies that the new process is faster by about 14.42%. The old time
was 90 units, while the new time is 77 units.

**benchmark/threads:1/process_time/real_time_median**: Similarly, this shows the
relative difference in the median execution time. Again, the new process is
faster, by 14.44%.

**benchmark/threads:1/process_time/real_time_stddev**: This is the relative
difference in the standard deviation of the execution time, which is a measure
of how much variation or dispersion there is from the mean. A positive value
(+0.3974) implies there is more variance in the execution time of the new
process.

**benchmark/threads:1/process_time/real_time_cv**: CV stands for Coefficient of
Variation. It is the ratio of the standard deviation to the mean and provides a
standardized measure of dispersion. An increase (+0.6329) indicates more
relative variability in the new process.

**OVERALL_GEOMEAN**: Geomean stands for geometric mean, a type of average that is
less influenced by outliers. The negative value indicates a general improvement
in the new process. However, given that the values are all zero for the old and
new times, this seems to be a mistake or placeholder in the output.

-----------------------------------------
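
To make the `mean`, `median`, `stddev` and `cv` rows concrete, here is a small sketch of how such aggregates and their relative differences could be derived from per-repetition timings. It is illustrative only, not the actual `compare.py` code, and the sample timings are hypothetical:

```python
import statistics

def aggregates(samples):
    """Aggregate per-repetition times the way the report rows do."""
    mean = statistics.mean(samples)
    stddev = statistics.stdev(samples)
    return {
        "mean": mean,
        "median": statistics.median(samples),
        "stddev": stddev,
        "cv": stddev / mean,  # coefficient of variation = stddev / mean
    }

# Hypothetical per-repetition real times for the old and new runs.
old_agg = aggregates([90.1, 90.3, 89.8, 90.4, 90.0, 90.2, 89.9, 90.5, 90.1])
new_agg = aggregates([77.2, 77.0, 77.4, 76.9, 77.1, 77.3, 77.0, 77.5, 77.2])

# Each row shows the relative change of the corresponding aggregate.
for key in ("mean", "median", "stddev", "cv"):
    change = (new_agg[key] - old_agg[key]) / abs(old_agg[key])
    print(f"real_time_{key:<6} {change:+.4f}")
```
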

Let's first try to see what the different columns represent in the above
`compare.py` benchmarking output:

  1. **Benchmark:** The name of the function being benchmarked, along with the
     size of the input (after the slash).

  2. **Time:** The average time per operation, across all iterations.

  3. **CPU:** The average CPU time per operation, across all iterations.

  4. **Iterations:** The number of iterations the benchmark was run to get a
     stable estimate.

  5. **Time Old and Time New:** These represent the average time it takes for a
     function to run in two different scenarios or versions. For example, you
     might be comparing how fast a function runs before and after you make some
     changes to it.

  6. **CPU Old and CPU New:** These show the average amount of CPU time that the
     function uses in two different scenarios or versions. This is similar to
     Time Old and Time New, but focuses on CPU usage instead of overall time.

In the comparison section, the relative differences in both time and CPU time
are displayed for each input size.

A statistically significant difference is determined by a **p-value**, which is
a measure of the probability that the observed difference could have occurred
just by random chance. A smaller p-value indicates stronger evidence against the
null hypothesis.

**Therefore:**
  1. If the p-value is less than the chosen significance level (alpha), we
     reject the null hypothesis and conclude the benchmarks are significantly
     different.
  2. If the p-value is greater than or equal to alpha, we fail to reject the
     null hypothesis and treat the two benchmarks as similar.

The result of the statistical test is additionally communicated through color coding:
```diff
+ Green:
```
  The benchmarks are _**statistically different**_. This could mean the
  performance has either **significantly improved** or **significantly
  deteriorated**. You should look at the actual performance numbers to see which
  is the case.
```diff
- Red:
```
  The benchmarks are _**statistically similar**_. This means the performance
  **hasn't significantly changed**.

In statistical terms, **'green'** means we reject the null hypothesis that
there's no difference in performance, and **'red'** means we fail to reject the
null hypothesis. This might seem counter-intuitive if you're expecting 'green'
to mean 'improved performance' and 'red' to mean 'worsened performance'.
But remember, in this context:

  - 'Success' means 'successfully finding a difference'.
  - 'Failure' means 'failing to find a difference'.

Also, please note that **even if** we determine that there **is** a
statistically significant difference between the two measurements, it does not
_necessarily_ mean that the actual benchmarks that were measured **are**
different. And vice versa: even if we determine that there is **no**
statistically significant difference between the two measurements, it does not
necessarily mean that the actual benchmarks that were measured **are not**
different.

### U test

If there is a sufficient repetition count of the benchmarks, the tool can do
a [U Test](https://en.wikipedia.org/wiki/Mann%E2%80%93Whitney_U_test) of the
null hypothesis that it is equally likely that a randomly selected value from
one sample will be less than or greater than a randomly selected value from a
second sample.

If the calculated p-value is lower than the significance level alpha, the
result is said to be statistically significant and the null hypothesis is
rejected; in other words, the two benchmarks aren't identical.

**WARNING**: this requires a **large** number of repetitions (no less than 9)
to be meaningful!
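
For reference, the core of such a test can be run directly with the scipy dependency mentioned above. This is an illustrative sketch rather than the exact logic of `compare.py`, and the per-repetition timings are hypothetical:

```python
from scipy.stats import mannwhitneyu

# Hypothetical per-repetition real times (>= 9 repetitions per side).
baseline  = [90.1, 90.3, 89.8, 90.4, 90.0, 90.2, 89.9, 90.5, 90.1]
contender = [77.2, 77.0, 77.4, 76.9, 77.1, 77.3, 77.0, 77.5, 77.2]

alpha = 0.05  # significance level
# Two-sided test: the null hypothesis is that the two samples come from the
# same distribution, i.e. there is no difference in performance.
statistic, p_value = mannwhitneyu(baseline, contender, alternative="two-sided")

if p_value < alpha:
    print(f"p = {p_value:.4f}: statistically significant difference (reject H0)")
else:
    print(f"p = {p_value:.4f}: no statistically significant difference (fail to reject H0)")
```
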