
[Output Formats](#output-formats)

[Output Files](#output-files)

[Running Benchmarks](#running-benchmarks)

[Running a Subset of Benchmarks](#running-a-subset-of-benchmarks)

[Result Comparison](#result-comparison)

[Extra Context](#extra-context)

[Runtime and Reporting Considerations](#runtime-and-reporting-considerations)

[Passing Arguments](#passing-arguments)

[Custom Benchmark Name](#custom-benchmark-name)

[Calculating Asymptotic Complexity](#asymptotic-complexity)

[Templated Benchmarks](#templated-benchmarks)

[Custom Counters](#custom-counters)

[Multithreaded Benchmarks](#multithreaded-benchmarks)

[CPU Timers](#cpu-timers)

[Manual Timing](#manual-timing)

[Setting the Time Unit](#setting-the-time-unit)

[User-Requested Performance Counters](perf_counters.md)

[Preventing Optimization](#preventing-optimization)

[Reporting Statistics](#reporting-statistics)

[Custom Statistics](#custom-statistics)

[Memory Usage](#memory-usage)

[Using RegisterBenchmark](#using-register-benchmark)

[Exiting with an Error](#exiting-with-an-error)

[A Faster `KeepRunning` Loop](#a-faster-keep-running-loop)

[Disabling CPU Frequency Scaling](#disabling-cpu-frequency-scaling)
<a name="output-formats" />

## Output Formats

The library supports multiple output formats. Use the
`--benchmark_format=<console|json|csv>` flag (or set the
`BENCHMARK_FORMAT=<console|json|csv>` environment variable) to set
the format type. `console` is the default format.

The Console format is intended to be a human-readable format. By default
the format generates color output. Context is output on stderr and the
tabular data on stdout.

The JSON format outputs human-readable JSON split into two top-level attributes:
`context`, which contains information about the run in general, and
`benchmarks`, which contains one entry per benchmark run.

The CSV format outputs comma-separated values. The `context` is output on stderr
and the CSV itself on stdout.
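A minimal sketch of the JSON layout described above (abridged; the field values shown are illustrative, not real measurements):

```json
{
  "context": {
    "date": "2015/03/17-18:40:25",
    "num_cpus": 40,
    "mhz_per_cpu": 2801,
    "cpu_scaling_enabled": false
  },
  "benchmarks": [
    {
      "name": "BM_SetInsert/1024/1",
      "iterations": 94877,
      "real_time": 29275,
      "cpu_time": 29836,
      "time_unit": "ns"
    }
  ]
}
```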
<a name="output-files" />

## Output Files

Write benchmark results to a file with the `--benchmark_out=<filename>` option
(or set `BENCHMARK_OUT`). Specify the output format with
`--benchmark_out_format={json|console|csv}` (or set
`BENCHMARK_OUT_FORMAT={json|console|csv}`).

Specifying `--benchmark_out` does not suppress the console output.
<a name="running-benchmarks" />

## Running Benchmarks

Benchmarks are executed by running the produced binaries. Benchmark binaries
accept options either on the command line or through the environment: for every
`--option_flag=<value>` CLI switch, a corresponding environment variable
`OPTION_FLAG=<value>` exists and is used when the switch is absent (the CLI
switch takes precedence). A complete list of options is printed by running
with the `--help` switch.
<a name="running-a-subset-of-benchmarks" />

## Running a Subset of Benchmarks

The `--benchmark_filter=<regex>` option (or the `BENCHMARK_FILTER=<regex>`
environment variable) can be used to run only the benchmarks whose name matches
the regular expression. For example:

```bash
$ ./run_benchmarks.x --benchmark_filter=BM_memcpy/32
```
<a name="result-comparison" />

## Result Comparison

It is possible to compare the benchmarking results.
See [Additional Tooling Documentation](tools.md)
<a name="extra-context" />

## Extra Context

Sometimes it's useful to add extra context to the content printed before the
results. Additional key/value pairs can be appended to that context with the
`--benchmark_context` flag:

```bash
$ ./run_benchmarks --benchmark_context=pwd=`pwd`
```
<a name="runtime-and-reporting-considerations" />

## Runtime and Reporting Considerations

A warmup phase can be requested per benchmark via `MinWarmUpTime(...)` or for
all benchmarks using the `--benchmark_min_warmup_time` command-line option. Note that
`MinWarmUpTime` will overwrite the value of `--benchmark_min_warmup_time`
for the benchmark on which it is set.

Benchmarks can also be rerun several times: additional
repetitions are requested using the `--benchmark_repetitions` command-line
option or the `Repetitions` method, in which case aggregate statistics are
reported across the repetitions.

As well as the per-benchmark entries, a preamble in the report will include
information about the machine on which the benchmarks are run.
<a name="setup-teardown" />

## Setup/Teardown

Global setup and teardown callbacks can be attached to a benchmark with
`Setup` and `Teardown`. They are invoked once per benchmark run; if the
benchmark is multi-threaded (will run in k threads), they will be invoked
exactly once before the run with k threads starts, and once after it ends:

```c++
BENCHMARK(BM_func)->Arg(1)->Arg(3)->Threads(16)->Threads(32)->Setup(DoSetup)->Teardown(DoTeardown);
```

In this example, `DoSetup` and `DoTeardown` will each be invoked 4 times, once
for each member of the family:

- BM_func_Arg_1_Threads_16, BM_func_Arg_1_Threads_32
- BM_func_Arg_3_Threads_16, BM_func_Arg_3_Threads_32
<a name="passing-arguments" />

## Passing Arguments

Sometimes a family of benchmarks can be implemented with just one routine that
takes an extra argument to specify which one of the family of benchmarks to
run. For example, a `BM_memcpy` routine parameterized by buffer size can be
registered for several sizes at once:

```c++
BENCHMARK(BM_memcpy)->Arg(8)->Arg(64)->Arg(512)->Arg(4<<10)->Arg(8<<10);
```
The preceding code is quite repetitive, and `Range` provides a
short-hand. The following invocation will pick a few appropriate arguments in
the specified range and will generate a benchmark for each such argument:

```c++
BENCHMARK(BM_memcpy)->Range(8, 8<<10);
```
By default the arguments in the range are generated in multiples of eight;
the multiplier can be changed with `RangeMultiplier`. For example, the
following code generates powers of two in the range instead:

```c++
BENCHMARK(BM_memcpy)->RangeMultiplier(2)->Range(8, 8<<10);
```
A dense range, generated with a fixed additive step instead of a multiplier,
can be requested with `DenseRange(start, limit, step)`:

```c++
BENCHMARK(BM_DenseRange)->DenseRange(0, 1024, 128);
```
A benchmark can also take multiple arguments; each `Args` call supplies one
tuple of values:

```c++
BENCHMARK(BM_SetInsert)
    ->Args({1<<10, 128})
    ->Args({2<<10, 128})
    ->Args({4<<10, 128})
    ->Args({8<<10, 128})
    ->Args({1<<10, 512})
    ->Args({2<<10, 512})
    ->Args({4<<10, 512})
    ->Args({8<<10, 512});
```
The preceding code is quite repetitive, and `Ranges` provides a
short-hand. The following macro will pick a few appropriate arguments in the
product of the two specified ranges and will generate a benchmark for each
such pair:

<!-- {% raw %} -->
```c++
BENCHMARK(BM_SetInsert)->Ranges({{1<<10, 8<<10}, {128, 512}});
```
<!-- {% endraw %} -->
Some benchmarks may require specific argument values that cannot be expressed
with `Ranges`. In this case `ArgsProduct` generates a benchmark input for each
combination in the product of the supplied vectors:

<!-- {% raw %} -->
```c++
BENCHMARK(BM_SetInsert)
    ->ArgsProduct({{1<<10, 3<<10, 8<<10}, {20, 40, 60, 80}})
// would generate the same benchmark arguments as
BENCHMARK(BM_SetInsert)
    ->Args({1<<10, 20})
    ->Args({3<<10, 20})
    ->Args({8<<10, 20})
    ->Args({3<<10, 40})
    ->Args({8<<10, 40})
    ->Args({1<<10, 40})
    ->Args({1<<10, 60})
    ->Args({3<<10, 60})
    ->Args({8<<10, 60})
    ->Args({1<<10, 80})
    ->Args({3<<10, 80})
    ->Args({8<<10, 80});
```
<!-- {% endraw %} -->
For the most common scenarios, two convenience functions, `benchmark::CreateRange`
and `benchmark::CreateDenseRange`, build such vectors of arguments:

```c++
BENCHMARK(BM_SetInsert)
    ->ArgsProduct({
        benchmark::CreateRange(8, 128, /*multi=*/2),
        benchmark::CreateDenseRange(1, 4, /*step=*/1)
      })
// would generate the same benchmark arguments as
BENCHMARK(BM_SetInsert)
    ->ArgsProduct({
        {8, 16, 32, 64, 128},
        {1, 2, 3, 4}
      });
```
For more complex patterns of inputs, passing a custom function to `Apply` allows
the arguments to be generated programmatically:

```c++
static void CustomArguments(benchmark::internal::Benchmark* b) {
  for (int i = 0; i <= 10; ++i)
    for (int j = 32; j <= 1024*1024; j *= 8)
      b->Args({i, j});
}
BENCHMARK(BM_SetInsert)->Apply(CustomArguments);
```
### Passing Arbitrary Arguments to a Benchmark

Occasionally a benchmark needs to be invoked with an arbitrary set
of extra arguments. The `BENCHMARK_CAPTURE(func, test_case_name, ...args)`
macro registers a benchmark that invokes `func` with a `benchmark::State` as
the first argument followed by the specified `args...`:

```c++
template <class ...Args>
void BM_takes_args(benchmark::State& state, Args&&... args) {
  auto args_tuple = std::make_tuple(std::move(args)...);
  for (auto _ : state) {
    // use the elements of args_tuple...
  }
}
// Registers a benchmark named "BM_takes_args/int_string_test" that passes
// the specified values to `args`.
BENCHMARK_CAPTURE(BM_takes_args, int_string_test, 42, std::string("abc"));
```

Note that elements of `...args` may refer to global variables. Users should
avoid modifying global state inside of a benchmark.
<a name="asymptotic-complexity" />

## Calculating Asymptotic Complexity (Big O)

Asymptotic complexity might be calculated for a family of benchmarks. The
following code will calculate the coefficient for the high-order term in the
running time and the normalized root-mean-square error of string comparison:

```c++
static void BM_StringCompare(benchmark::State& state) {
  std::string s1(state.range(0), '-');
  std::string s2(state.range(0), '-');
  for (auto _ : state) {
    auto comparison_result = s1.compare(s2);
    benchmark::DoNotOptimize(comparison_result);
  }
  state.SetComplexityN(state.range(0));
}
BENCHMARK(BM_StringCompare)
    ->RangeMultiplier(2)->Range(1<<10, 1<<18)->Complexity(benchmark::oN);
```

Asymptotic complexity might also be calculated automatically by calling
`Complexity` with no argument; the best-fitting curve is then chosen from a
predefined set:

```c++
BENCHMARK(BM_StringCompare)
    ->RangeMultiplier(2)->Range(1<<10, 1<<18)->Complexity();
```

The following code will specify asymptotic complexity with a lambda function
that might be used to customize high-order term calculation:

```c++
BENCHMARK(BM_StringCompare)->RangeMultiplier(2)
    ->Range(1<<10, 1<<18)->Complexity([](benchmark::IterationCount n)->double{return n; });
```
<a name="custom-benchmark-name" />

## Custom Benchmark Name

You can change the benchmark's name as follows:

```c++
BENCHMARK(BM_memcpy)->Name("memcpy")->RangeMultiplier(2)->Range(8, 8<<10);
```

The invocation will execute the benchmark as before using `BM_memcpy` but
display the results under the name `memcpy` in the report.
<a name="templated-benchmarks" />

## Templated Benchmarks

Templated benchmarks work the same way as ordinary benchmarks; the template
instantiation is registered explicitly. This example produces and consumes
messages of size `sizeof(v)` `range_x` times per iteration:

```c++
template <class Q> void BM_Sequential(benchmark::State& state) {
  Q q;
  typename Q::value_type v;
  for (auto _ : state) {
    for (int i = state.range(0); i--; )
      q.push(v);
    for (int e = state.range(0); e--; )
      q.Wait(&v);
  }
}
// C++03
BENCHMARK_TEMPLATE(BM_Sequential, WaitQueue<int>)->Range(1<<0, 1<<10);

// C++11 or newer, you can use the BENCHMARK macro with template parameters:
BENCHMARK(BM_Sequential<WaitQueue<int>>)->Range(1<<0, 1<<10);
```
## Fixtures

Fixture benchmarks defined with `BENCHMARK_DEFINE_F` (or its templated
variants) are registered separately with `BENCHMARK_REGISTER_F`, which allows
run options such as thread counts to be attached:

```c++
BENCHMARK_REGISTER_F(MyFixture, BarTest)->Threads(2);
// ... and likewise for a templated fixture:
BENCHMARK_REGISTER_F(MyFixture, DoubleTest)->Threads(2);
```
<a name="custom-counters" />

## Custom Counters

You can add your own counters with user-defined names. The example below
shows how to do so:

```c++
static void UserCountersExample1(benchmark::State& state) {
  double numFoos = 0, numBars = 0, numBazs = 0;
  for (auto _ : state) {
    // ... count Foo, Bar and Baz events
  }
  state.counters["Foo"] = numFoos;
  state.counters["Bar"] = numBars;
  state.counters["Baz"] = numBazs;
}
```

The `state.counters` object is a `std::map` with `std::string` keys
and `Counter` values. The latter is a `double`-like class, via an implicit
conversion to `double&`. Thus you can use all of the standard arithmetic
assignment operators (`=,+=,-=,*=,/=`) to change the value of each counter.

In addition to `double`, the `Counter` constructor accepts two optional
arguments: a bit flag which allows you to show counters as rates, and/or as per-thread
iteration, and/or as per-thread averages, and/or iteration invariants,
and/or finally inverting the result; and a flag specifying the 'unit' - i.e.
whether 1k means 1000 (the default) or 1024. For example:

```c++
// Set the counter as a thread-average quantity. It will
// be presented divided by the number of threads.
state.counters["FooAvg"] = Counter(numFoos, benchmark::Counter::kAvgThreads);
```
When there are a lot of counters, the default output (counters appended to the
end of each benchmark's line) becomes hard to read. In that case it is better
to print the counters as a table by
passing the flag `--benchmark_counters_tabular=true` to the benchmark
application; each counter then gets its own column in the console output.
<a name="multithreaded-benchmarks"/>

## Multithreaded Benchmarks

In a multithreaded benchmark the benchmark function is invoked by the requested
number of threads simultaneously:

```c++
BENCHMARK(BM_MultiThreaded)->Threads(2);
```

`ThreadRange` registers one benchmark per power-of-two thread count in the
given range:

```c++
BENCHMARK(BM_MultiThreaded)->ThreadRange(1, 8);
```

If the benchmarked code itself uses threads and your goal is to compare it to
single-threaded code, you may want to use real-time ("wallclock") measurements
for latency comparisons:

```c++
BENCHMARK(BM_test)->Range(8, 8<<10)->UseRealTime();
```

Without `UseRealTime`, CPU time is used by default.
<a name="cpu-timers" />

## CPU Timers

By default, the CPU timer only measures the time spent by the main thread. If
the benchmark itself spawns worker threads (e.g. via OpenMP), this may not be
the measurement you want. The different timing modes compare as follows:

```c++
// By default, the main thread's CPU time is measured. This may reduce the
// measure to anywhere from near-zero (the overhead spent before/after work
// handoff to worker thread[s]) to the whole single-thread time.
BENCHMARK(BM_OpenMP)->Range(8, 8<<10);

// Measure the user-visible time, the wall clock (literally, the time that
// it takes to run the benchmark). This will always be meaningful; it matches the
// time spent by the main thread in single-threaded case, in general decreasing
// with the number of worker threads.
BENCHMARK(BM_OpenMP)->Range(8, 8<<10)->UseRealTime();

// Measure the total CPU consumption of all threads, which is no less than the
// time spent by the main thread in single-threaded case.
BENCHMARK(BM_OpenMP)->Range(8, 8<<10)->MeasureProcessCPUTime();

// A mixture of the last two: measure the total CPU consumption, but use the
// wall clock to decide for how long to run the benchmark loop.
BENCHMARK(BM_OpenMP)->Range(8, 8<<10)->MeasureProcessCPUTime()->UseRealTime();
```
### Controlling Timers

Work that should not be measured (e.g. rebuilding an expensive input) can be
excluded from the timing by pausing and resuming the timers with
`state.PauseTiming()` and `state.ResumeTiming()`:

<!-- {% raw %} -->
```c++
static void BM_SetInsert_With_Timer_Control(benchmark::State& state) {
  std::set<int> data;
  for (auto _ : state) {
    state.PauseTiming();  // Stop timers. They will not count until resumed.
    data = ConstructRandomSet(state.range(0));  // Should not be measured.
    state.ResumeTiming();  // Resume timers. They are now counting again.
    // The rest will be measured.
    for (int j = 0; j < state.range(1); ++j) data.insert(RandomNumber());
  }
}
BENCHMARK(BM_SetInsert_With_Timer_Control)->Ranges({{1<<10, 8<<10}, {128, 512}});
```
<!-- {% endraw %} -->
<a name="manual-timing" />

## Manual Timing

For benchmarking something for which neither CPU time nor real-time are
correct or accurate enough, completely manual timing is supported using the
`UseManualTime` function. An example use case is benchmarking GPU execution
(e.g. OpenCL or CUDA kernels), where the operations cannot
be accurately measured using CPU time or real-time. Instead, they can be
measured accurately using a dedicated API, and the measurement reported back
with `SetIterationTime`:

```c++
static void BM_ManualTiming(benchmark::State& state) {
  for (auto _ : state) {
    auto start = std::chrono::high_resolution_clock::now();
    // Simulate some useful workload with a sleep
    std::this_thread::sleep_for(std::chrono::microseconds(state.range(0)));
    auto end = std::chrono::high_resolution_clock::now();

    auto elapsed_seconds =
        std::chrono::duration_cast<std::chrono::duration<double>>(
            end - start);

    state.SetIterationTime(elapsed_seconds.count());
  }
}
BENCHMARK(BM_ManualTiming)->Range(1, 1<<17)->UseManualTime();
```
<a name="setting-the-time-unit" />

## Setting the Time Unit

If a benchmark runs for a few milliseconds it may be hard to visually compare
the measured times, since the output data is given in nanoseconds by default.
The time unit can be set per benchmark:

```c++
BENCHMARK(BM_test)->Unit(benchmark::kMillisecond);
```

Additionally the default time unit can be set globally with the
`--benchmark_time_unit={ns|us|ms|s}` command line argument. The argument only
affects benchmarks where the time unit is not set explicitly.
<a name="preventing-optimization" />

## Preventing Optimization

To prevent a value or expression from being optimized away by the compiler,
the `benchmark::DoNotOptimize(...)` and `benchmark::ClobberMemory()` functions
can be used:

```c++
static void BM_vector_push_back(benchmark::State& state) {
  for (auto _ : state) {
    std::vector<int> v;
    v.reserve(1);
    auto data = v.data();            // Allow v.data() to be clobbered. Pass as non-const
    benchmark::DoNotOptimize(data);  // lvalue to avoid undesired compiler optimizations.
    v.push_back(42);
    benchmark::ClobberMemory();      // Force 42 to be written to memory.
  }
}
```
<a name="reporting-statistics" />

## Reporting Statistics

Benchmarks are often noisy, and a single result may not be representative of
the overall behavior; for this reason benchmarks can be repeatedly rerun. The
number of runs of each benchmark is specified globally by the
`--benchmark_repetitions` flag or on a per benchmark basis by calling
`Repetitions` on the registered benchmark object. When a benchmark is run more
than once, the mean, median, standard deviation and coefficient of variation
of the runs are reported.

Additionally the `--benchmark_report_aggregates_only={true|false}` and
`--benchmark_display_aggregates_only={true|false}` flags, or the
`ReportAggregatesOnly(bool)` and `DisplayAggregatesOnly(bool)` functions, can
be used to change how repetitions are reported. With `ReportAggregatesOnly`,
only the aggregate statistics (not each repetition)
are reported, to both the reporters - standard output (console), and the file.
<a name="custom-statistics" />

## Custom Statistics

The built-in aggregates may not be enough for everyone. For example, you may
want to know what the largest observed benchmark time is, which is useful if
you have some real-time constraints. This is easy. The following code will
specify an additional statistic, the maximum, to be computed:

```c++
void BM_spin_empty(benchmark::State& state) {
  for (auto _ : state) {
    for (int x = 0; x < state.range(0); ++x) {
      benchmark::DoNotOptimize(x);
    }
  }
}

BENCHMARK(BM_spin_empty)
  ->ComputeStatistics("max", [](const std::vector<double>& v) -> double {
    return *(std::max_element(std::begin(v), std::end(v)));
  })
  ->Arg(512);
```

Statistics usually produce values in time units, but they can also be reported
as percentages via `benchmark::StatisticUnit::kPercentage`, e.g. the ratio of
the fastest to the slowest repetition:

```c++
BENCHMARK(BM_spin_empty)
  ->ComputeStatistics("ratio", [](const std::vector<double>& v) -> double {
    return *std::min_element(std::begin(v), std::end(v)) /
           *std::max_element(std::begin(v), std::end(v));
  }, benchmark::StatisticUnit::kPercentage)
  ->Arg(512);
```
<a name="memory-usage" />

## Memory Usage

It's often useful to also track memory usage for benchmarks, alongside CPU
performance. For this reason, benchmark offers the `RegisterMemoryManager`
method that allows a custom `MemoryManager` to be injected.
<a name="using-register-benchmark" />

## Using RegisterBenchmark(name, fn, args...)

The `RegisterBenchmark(name, func, args...)` function provides an alternative
way to create and register benchmarks:
`RegisterBenchmark(name, func, args...)` creates, registers, and returns a
pointer to a new benchmark with the specified `name` that invokes
`func(st, args...)` where `st` is a `benchmark::State` object.
<a name="exiting-with-an-error" />

## Exiting with an Error

When an error caused by external influences, such as file I/O or
communication, occurs within a benchmark, the
`State::SkipWithError(const std::string& msg)` function can be used to skip
that run and report the error. Note that only future iterations of
`KeepRunning()` are skipped. For the ranged-for version of the benchmark loop,
users should explicitly return early to exit the benchmark immediately.
<a name="a-faster-keep-running-loop" />

## A Faster KeepRunning Loop

In C++11 mode, a ranged-based for loop should be used in preference to
the `KeepRunning` loop for running the benchmarks:

```c++
static void BM_Fast(benchmark::State &state) {
  for (auto _ : state) {
    FastOperation();
  }
}
BENCHMARK(BM_Fast);
```

The reason the ranged-for loop is faster than using `KeepRunning` is
that `KeepRunning` requires a memory load and store of the iteration count on
every iteration, whereas the ranged-for variant is able to keep the iteration count
in a register.

For example, an empty inner loop using the ranged-based for method compiles to:

```asm
  add rbx, -1
  jne .LoopHeader
```

Unless C++03 compatibility is required, the ranged-for variant of writing
the benchmark loop should be preferred.
<a name="disabling-cpu-frequency-scaling" />

## Disabling CPU Frequency Scaling

If you see this error:

```
***WARNING*** CPU scaling is enabled, the benchmark real time measurements may
be noisy and will incur extra overhead.
```

you might want to disable the CPU frequency scaling while running the
benchmark.