## Fruit benchmarks

Fruit contains some code to benchmark various metrics (e.g. performance, compile time, executable size) in an automated
way.

### Benchmark suites

The `suites` folder contains suites of Fruit (or Fruit-related) benchmarks that can be run using `run_benchmarks.py`.
For example:

```bash
$ ~/projects/fruit/extras/benchmark/run_benchmarks.py \
    --continue-benchmark=true \
    --benchmark-definition ~/projects/fruit/extras/benchmark/suites/fruit_full.yml \
    --output-file ~/fruit_bench_results.txt \
    --fruit-sources-dir ~/projects/fruit \
    --fruit-benchmark-sources-dir ~/projects/fruit \
    --boost-di-sources-dir ~/projects/boost-di
```

Once the benchmark run completes, you can format the results using some pre-defined tables; see the section below.

The following benchmark suites are defined:

* `fruit_full.yml`: the full set of Fruit benchmarks (using the Fruit 3.x API).
* `fruit_mostly_full.yml`: a subset of the tests in `fruit_full.yml`.
* `fruit_quick.yml`: an even smaller subset, with the number of runs capped at 10, so
  the confidence intervals might be wider. It's useful as a quicker (around 10-15 min) way to get a rough idea of the
  performance (e.g. to evaluate the performance impact of a commit during development); see the sketch after this list.
* `fruit_single.yml`: runs the Fruit runtime benchmarks under a single compiler and with just 1 combination of flags. This
  also caps the number of runs at 8, so the resulting confidence intervals might be wider than they would be with
  `fruit_full.yml`. This is a quick benchmark that can be used during development of performance optimizations.
* `fruit_debug.yml`: a suite used to debug Fruit's benchmarking code. This is very quick, but the actual results are
  not meaningful. Run this after changing any benchmarking code, to check that it still works.
* `boost_di`: unlike the others, this benchmark suite exercises the Boost.DI library (still in boost-experimental at the
  time of writing) instead of Fruit.
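
For example, to get a rough idea of a change's impact during development, you could run the quick suite before and after
the change and then compare the two result files as described in the next section. The following is a sketch only: it
assumes `fruit_quick.yml` accepts the same flags shown above for `fruit_full.yml`, and that `--boost-di-sources-dir` can
be omitted for Fruit-only suites.

```bash
# Sketch: run the quick Fruit suite. Flags mirror the fruit_full.yml example
# above; --boost-di-sources-dir is assumed to be unnecessary for Fruit-only suites.
$ ~/projects/fruit/extras/benchmark/run_benchmarks.py \
    --continue-benchmark=true \
    --benchmark-definition ~/projects/fruit/extras/benchmark/suites/fruit_quick.yml \
    --output-file ~/fruit_bench_results_quick.txt \
    --fruit-sources-dir ~/projects/fruit \
    --fruit-benchmark-sources-dir ~/projects/fruit
```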

### Benchmark tables

The `tables` folder contains some table definitions that can be used to get human-readable representations of
benchmark results generated using `run_benchmarks.py`.

Note that there *isn't* a 1:1 mapping between benchmark suites and benchmark tables; the same table definition can be
used with multiple benchmark suites (for example, a full suite and a quick variant that only has a subset of the
dimensions) and multiple table definitions might be appropriate to display the results of a single suite (for example,
there could be a table definition that displays only metrics meaningful to Fruit users and one that also displays
more fine-grained metrics that are only meaningful to Fruit developers).

Example usage of `format_bench_results.py` with one of these table definitions:

```bash
$ ~/projects/fruit/extras/benchmark/format_bench_results.py \
    --benchmark-results ~/fruit_bench_results.txt \
    --benchmark-tables-definition ~/projects/fruit/extras/benchmark/tables/fruit_wiki.yml
```

It's also possible to compare two benchmark results (for example, when running the same benchmarks before and after
a Fruit commit):

```bash
$ ~/projects/fruit/extras/benchmark/format_bench_results.py \
    --benchmark-results ~/fruit_bench_results_after.txt \
    --benchmark-tables-definition ~/projects/fruit/extras/benchmark/tables/fruit_wiki.yml \
    --baseline-benchmark-results ~/fruit_bench_results_before.txt
```

The following tables are defined:

* `fruit_wiki.yml`: the "main" table definition, with the tables that are in Fruit's wiki.
* `fruit_internal.yml`: a more detailed version of `fruit_wiki.yml`, also displaying metrics that are only meaningful
  to Fruit developers (e.g. splitting the setup time into component creation time and normalization time).

### Manual benchmarks

In some cases, you might want to run the benchmarks manually (e.g. if you want to use `callgrind` to profile the
benchmark run). This is how you can do that:

```bash
# Build Fruit with optimizations and debug info.
$ cd ~/projects/fruit
$ mkdir build
$ cd build
$ CXX=g++-6 cmake .. -DCMAKE_BUILD_TYPE=RelWithDebInfo
$ make -j 10
$ cd ..
# Generate the benchmark sources.
$ mkdir generated-benchs
$ extras/benchmark/generate_benchmark.py \
    --compiler g++-6 \
    --fruit-sources-dir ~/projects/fruit \
    --fruit-build-dir ~/projects/fruit/build \
    --num-components-with-no-deps 10 \
    --num-components-with-deps 90 \
    --num-deps 10 \
    --output-dir generated-benchs \
    --generate-debuginfo=true
# Build the generated benchmark and run it under callgrind.
$ cd generated-benchs
$ make -j 10
$ valgrind \
    --tool=callgrind \
    --simulate-cache=yes \
    --dump-instr=yes \
    ./main 10000
```
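
After the valgrind run completes, you can inspect the collected profile. A minimal sketch, assuming callgrind's default
`callgrind.out.<pid>` output file name (not part of the steps above):

```bash
# Summarize the profile on the command line (replace <pid> with the process id
# in the file name that callgrind actually wrote).
$ callgrind_annotate callgrind.out.<pid>

# Or browse the same file interactively with KCachegrind, if installed.
$ kcachegrind callgrind.out.<pid>
```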