<a id="top"></a>
# Command line

**Contents**<br>
[Specifying which tests to run](#specifying-which-tests-to-run)<br>
[Choosing a reporter to use](#choosing-a-reporter-to-use)<br>
[Breaking into the debugger](#breaking-into-the-debugger)<br>
[Showing results for successful tests](#showing-results-for-successful-tests)<br>
[Aborting after a certain number of failures](#aborting-after-a-certain-number-of-failures)<br>
[Listing available tests, tags or reporters](#listing-available-tests-tags-or-reporters)<br>
[Sending output to a file](#sending-output-to-a-file)<br>
[Naming a test run](#naming-a-test-run)<br>
[Eliding assertions expected to throw](#eliding-assertions-expected-to-throw)<br>
[Make whitespace visible](#make-whitespace-visible)<br>
[Warnings](#warnings)<br>
[Reporting timings](#reporting-timings)<br>
[Load test names to run from a file](#load-test-names-to-run-from-a-file)<br>
[Just test names](#just-test-names)<br>
[Specify the order test cases are run](#specify-the-order-test-cases-are-run)<br>
[Specify a seed for the Random Number Generator](#specify-a-seed-for-the-random-number-generator)<br>
[Identify framework and version according to the libIdentify standard](#identify-framework-and-version-according-to-the-libidentify-standard)<br>
[Wait for key before continuing](#wait-for-key-before-continuing)<br>
[Specify the number of benchmark samples to collect](#specify-the-number-of-benchmark-samples-to-collect)<br>
[Specify the number of resamples for bootstrapping](#specify-the-number-of-resamples-for-bootstrapping)<br>
[Specify the confidence-interval for bootstrapping](#specify-the-confidence-interval-for-bootstrapping)<br>
[Disable statistical analysis of collected benchmark samples](#disable-statistical-analysis-of-collected-benchmark-samples)<br>
[Specify the amount of time in milliseconds spent on warming up each test](#specify-the-amount-of-time-in-milliseconds-spent-on-warming-up-each-test)<br>
[Usage](#usage)<br>
[Specify the section to run](#specify-the-section-to-run)<br>
[Filenames as tags](#filenames-as-tags)<br>
[Override output colouring](#override-output-colouring)<br>

Catch works quite nicely without any command line options at all - but for those times when you want greater control the following options are available.
Click one of the following links to take you straight to that option - or scroll on to browse the available options.

<a href="#specifying-which-tests-to-run">               `    <test-spec> ...`</a><br />
<a href="#usage">                                       `    -h, -?, --help`</a><br />
<a href="#listing-available-tests-tags-or-reporters">   `    -l, --list-tests`</a><br />
<a href="#listing-available-tests-tags-or-reporters">   `    -t, --list-tags`</a><br />
<a href="#showing-results-for-successful-tests">        `    -s, --success`</a><br />
<a href="#breaking-into-the-debugger">                  `    -b, --break`</a><br />
<a href="#eliding-assertions-expected-to-throw">        `    -e, --nothrow`</a><br />
<a href="#invisibles">                                  `    -i, --invisibles`</a><br />
<a href="#sending-output-to-a-file">                    `    -o, --out`</a><br />
<a href="#choosing-a-reporter-to-use">                  `    -r, --reporter`</a><br />
<a href="#naming-a-test-run">                           `    -n, --name`</a><br />
<a href="#aborting-after-a-certain-number-of-failures"> `    -a, --abort`</a><br />
<a href="#aborting-after-a-certain-number-of-failures"> `    -x, --abortx`</a><br />
<a href="#warnings">                                    `    -w, --warn`</a><br />
<a href="#reporting-timings">                           `    -d, --durations`</a><br />
<a href="#input-file">                                  `    -f, --input-file`</a><br />
<a href="#run-section">                                 `    -c, --section`</a><br />
<a href="#filenames-as-tags">                           `    -#, --filenames-as-tags`</a><br />

<br />

<a href="#list-test-names-only">                        `    --list-test-names-only`</a><br />
<a href="#listing-available-tests-tags-or-reporters">   `    --list-reporters`</a><br />
<a href="#order">                                       `    --order`</a><br />
<a href="#rng-seed">                                    `    --rng-seed`</a><br />
<a href="#libidentify">                                 `    --libidentify`</a><br />
<a href="#wait-for-keypress">                           `    --wait-for-keypress`</a><br />
<a href="#benchmark-samples">                           `    --benchmark-samples`</a><br />
<a href="#benchmark-resamples">                         `    --benchmark-resamples`</a><br />
<a href="#benchmark-confidence-interval">               `    --benchmark-confidence-interval`</a><br />
<a href="#benchmark-no-analysis">                       `    --benchmark-no-analysis`</a><br />
<a href="#benchmark-warmup-time">                       `    --benchmark-warmup-time`</a><br />
<a href="#use-colour">                                  `    --use-colour`</a><br />

<br />

<a id="specifying-which-tests-to-run"></a>
## Specifying which tests to run

<pre>&lt;test-spec> ...</pre>

Test cases, wildcarded test cases, tags and tag expressions are all passed directly as arguments. Tags are distinguished by being enclosed in square brackets.

If no test specs are supplied then all test cases, except "hidden" tests, are run.
A test is hidden by giving it any tag starting with (or consisting solely of) a period (```.```) - or, in the deprecated case, by tagging it ```[hide]``` or giving it a name starting with `'./'`. To specify hidden tests from the command line ```[.]``` or ```[hide]``` can be used *regardless of how they were declared*.

Specs must be enclosed in quotes if they contain spaces. If they do not contain spaces the quotes are optional.

Wildcards consist of the `*` character at the beginning and/or end of test case names and can substitute for any number of any characters (including none).

Test specs are case insensitive.

If a spec is prefixed with `exclude:` or the `~` character then the pattern matches an exclusion. This means that tests matching the pattern are excluded from the set - even if a prior inclusion spec included them. Subsequent inclusion specs will take precedence, however.
Inclusions and exclusions are evaluated in left-to-right order.

Test case examples:

<pre>thisTestOnly            Matches the test case called 'thisTestOnly'
"this test only"        Matches the test case called 'this test only'
these*                  Matches all cases starting with 'these'
exclude:notThis         Matches all tests except 'notThis'
~notThis                Matches all tests except 'notThis'
~*private*              Matches all tests except those that contain 'private'
a* ~ab* abc             Matches all tests that start with 'a', except those that
                        start with 'ab', except 'abc', which is included
-# [#somefile]          Matches all tests from the file 'somefile.cpp'
</pre>

Names within square brackets are interpreted as tags.
A series of tags forms an AND expression, whereas a comma-separated sequence forms an OR expression. e.g.:

<pre>[one][two],[three]</pre>
This matches all tests tagged `[one]` and `[two]`, as well as all tests tagged `[three]`.

Test names containing special characters, such as `,` or `[`, can be specified on the command line by escaping those characters with `\`.
`\` also escapes itself.
<a id="choosing-a-reporter-to-use"></a>
## Choosing a reporter to use

<pre>-r, --reporter &lt;reporter></pre>

A reporter is an object that formats and structures the output of running tests, and potentially summarises the results. By default a console reporter is used that writes IDE-friendly textual output. Catch comes bundled with some alternative reporters, but more can be added in client code.<br />
The bundled reporters are:

<pre>-r console
-r compact
-r xml
-r junit
</pre>

The JUnit reporter writes XML that follows the structure of the JUnit XML Report ANT task, as consumed by a number of third-party tools, including Continuous Integration servers such as Hudson. If not otherwise needed, the standard XML reporter is preferred, as it is a streaming reporter, whereas the JUnit reporter needs to hold all its results until the end so it can write the overall results into attributes of the root node.

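For example (assuming a test binary named `./tests`; the name is illustrative), the JUnit reporter is typically combined with <a href="#sending-output-to-a-file">`-o`</a> so that a CI server can pick up the results file:

<pre>./tests -r junit -o results.xml</pre>
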
<a id="breaking-into-the-debugger"></a>
## Breaking into the debugger
<pre>-b, --break</pre>

Under most debuggers Catch2 is capable of automatically breaking on a test
failure. This allows the user to see the current state of the test at the point of
failure.

<a id="showing-results-for-successful-tests"></a>
## Showing results for successful tests
<pre>-s, --success</pre>

Usually you only want to see reporting for failed tests. Sometimes it's useful to see *all* the output (especially when you don't trust that the test you just added worked first time!).
To see successful, as well as failing, test results just pass this option. Note that each reporter may treat this option differently. The JUnit reporter, for example, logs all results regardless.

<a id="aborting-after-a-certain-number-of-failures"></a>
## Aborting after a certain number of failures
<pre>-a, --abort
-x, --abortx [&lt;failure threshold>]
</pre>

If a ```REQUIRE``` assertion fails the test case aborts, but subsequent test cases are still run.
If a ```CHECK``` assertion fails, not even the current test case is aborted.

Sometimes this results in a flood of failure messages and you'd rather just see the first few. Specifying ```-a``` or ```--abort``` on its own will abort the whole test run on the first failed assertion of any kind. Use ```-x``` or ```--abortx``` followed by a number to abort after that number of assertion failures.

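For example, assuming a test binary named `./tests` (the name is illustrative), to stop the whole run after the third assertion failure:

<pre>./tests -x 3</pre>
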
<a id="listing-available-tests-tags-or-reporters"></a>
## Listing available tests, tags or reporters
<pre>-l, --list-tests
-t, --list-tags
--list-reporters
</pre>

```-l``` or ```--list-tests``` will list all registered tests, along with any tags.
If one or more test specs have been supplied too then only the matching tests will be listed.

```-t``` or ```--list-tags``` lists all available tags, along with the number of test cases they match. Again, supplying test specs limits the tags that match.

```--list-reporters``` lists the available reporters.

<a id="sending-output-to-a-file"></a>
## Sending output to a file
<pre>-o, --out &lt;filename>
</pre>

Use this option to send all output to a file. By default output is sent to stdout (note that uses of stdout and stderr *from within test cases* are redirected and included in the report - so even stderr will effectively end up on stdout).

<a id="naming-a-test-run"></a>
## Naming a test run
<pre>-n, --name &lt;name for test run></pre>

If a name is supplied it will be used by the reporter to provide an overall name for the test run. This can be useful if you are sending output to a file, for example, and need to distinguish different test runs - either from different Catch executables or from runs of the same executable with different options. If not supplied, the name defaults to the name of the executable.

<a id="eliding-assertions-expected-to-throw"></a>
## Eliding assertions expected to throw
<pre>-e, --nothrow</pre>

Skips all assertions that test that an exception is thrown, e.g. ```REQUIRE_THROWS```.

These can be a nuisance in certain debugging environments that may break when exceptions are thrown (while this is usually optional for handled exceptions, it can be useful to have enabled if you are trying to track down something unexpected).

Sometimes exceptions are expected outside of one of the assertions that test for them (perhaps thrown and caught within the code under test). The whole test case can be skipped when using ```-e``` by marking it with the ```[!throws]``` tag.

When running with this option any throw-checking assertions are skipped so as not to contribute additional noise. Be careful if this affects the behaviour of subsequent tests.

<a id="invisibles"></a>
## Make whitespace visible
<pre>-i, --invisibles</pre>

If a string comparison fails due to differences in whitespace - especially leading or trailing whitespace - it can be hard to see what's going on.
This option transforms tabs and newline characters into ```\t``` and ```\n``` respectively when printing.

<a id="warnings"></a>
## Warnings
<pre>-w, --warn &lt;warning name></pre>

Enables reporting of suspicious test states. There are currently two
available warnings:

```
    NoAssertions   // Fail test case / leaf section if no assertions
                   // (e.g. `REQUIRE`) are encountered.
    NoTests        // Return non-zero exit code when no test cases were run
                   // Also calls reporter's noMatchingTestCases method
```

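For example, assuming a test binary named `./tests` (the name is illustrative), to fail any test case or leaf section that never makes an assertion:

<pre>./tests -w NoAssertions</pre>
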
<a id="reporting-timings"></a>
## Reporting timings
<pre>-d, --durations &lt;yes/no></pre>

When set to ```yes``` Catch will report the duration of each test case, in milliseconds. Note that it does this regardless of whether a test case passes or fails. Note also that certain reporters (e.g. JUnit) always report test case durations regardless of whether this option is set.

<a id="input-file"></a>
## Load test names to run from a file
<pre>-f, --input-file &lt;filename></pre>

Provide the name of a file that contains a list of test case names - one per line. Blank lines are skipped and anything after the comment character, ```#```, is ignored.

A useful way to generate an initial instance of this file is to use the <a href="#list-test-names-only">list-test-names-only</a> option. This can then be manually curated to specify a specific subset of tests - or a specific order.

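A typical workflow, assuming a test binary named `./tests` (the name is illustrative), is to dump the test names to a file, edit the file to remove or reorder entries, and feed it back in:

<pre>./tests --list-test-names-only > tests.txt
./tests -f tests.txt
</pre>
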
<a id="list-test-names-only"></a>
## Just test names
<pre>--list-test-names-only</pre>

This option lists all available tests in a non-indented form, one on each line. This makes it ideal for saving to a file and feeding back into the <a href="#input-file">```-f``` or ```--input-file```</a> option.

<a id="order"></a>
## Specify the order test cases are run
<pre>--order &lt;decl|lex|rand&gt;</pre>

Test cases are ordered in one of three ways:

### decl
Declaration order (this is the default if no --order argument is provided). Tests run in the order they were originally declared. Note that ordering between files is not guaranteed and is implementation dependent.

### lex
Lexicographic order. Tests are sorted alphanumerically by name.

### rand
Random order. Test names are shuffled using ```std::random_shuffle()```. By default the random number generator is seeded with 0 - and so the order is repeatable. To control the random seed see <a href="#rng-seed">rng-seed</a>.

<a id="rng-seed"></a>
## Specify a seed for the Random Number Generator
<pre>--rng-seed &lt;'time'|number&gt;</pre>

Sets a seed for the random number generator using ```std::srand()```.
If a number is provided it is used directly as the seed, so the random pattern is repeatable.
Alternatively, if the keyword ```time``` is provided then the result of calling ```std::time(0)``` is used, so the pattern becomes unpredictable. In some cases, you might need to pass the keyword ```time``` in double quotes instead of single quotes.

In either case the actual value of the seed is printed as part of Catch's output, so if an issue is discovered that is sensitive to test ordering, the ordering can be reproduced - even if it was originally seeded from ```std::time(0)```.

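For example, assuming a test binary named `./tests` (the name is illustrative), to shuffle the test order with an unpredictable seed - the seed actually used is echoed in the output, so the run can still be reproduced later:

<pre>./tests --order rand --rng-seed time</pre>
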
<a id="libidentify"></a>
## Identify framework and version according to the libIdentify standard
<pre>--libidentify</pre>

See [The LibIdentify repo for more information and examples](https://github.com/janwilmans/LibIdentify).

<a id="wait-for-keypress"></a>
## Wait for key before continuing
<pre>--wait-for-keypress &lt;never|start|exit|both&gt;</pre>

Will cause the executable to print a message and wait until the return/enter key is pressed before continuing -
either before running any tests, after running all tests - or both, depending on the argument.

<a id="benchmark-samples"></a>
## Specify the number of benchmark samples to collect
<pre>--benchmark-samples &lt;# of samples&gt;</pre>

> [Introduced](https://github.com/catchorg/Catch2/issues/1616) in Catch 2.9.0.

When running benchmarks, a number of "samples" is collected. These samples are the base data for the later statistical analysis.
For each sample, a clock-resolution-dependent number of iterations of the user code is run; that iteration count is independent of the number of samples. Defaults to 100 samples.

<a id="benchmark-resamples"></a>
## Specify the number of resamples for bootstrapping
<pre>--benchmark-resamples &lt;# of resamples&gt;</pre>

> [Introduced](https://github.com/catchorg/Catch2/issues/1616) in Catch 2.9.0.

After the measurements are performed, statistical [bootstrapping] is performed
on the samples. The number of resamples for that bootstrapping is configurable
but defaults to 100000. Due to the bootstrapping it is possible to give
estimates for the mean and standard deviation. The estimates come with a lower
bound and an upper bound, and the confidence interval (which is configurable but
defaults to 95%).

 [bootstrapping]: http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29

<a id="benchmark-confidence-interval"></a>
## Specify the confidence-interval for bootstrapping
<pre>--benchmark-confidence-interval &lt;confidence-interval&gt;</pre>

> [Introduced](https://github.com/catchorg/Catch2/issues/1616) in Catch 2.9.0.

The confidence interval is used for statistical bootstrapping on the samples to
calculate the upper and lower bounds of the mean and standard deviation.
Must be between 0 and 1 and defaults to 0.95.

<a id="benchmark-no-analysis"></a>
## Disable statistical analysis of collected benchmark samples
<pre>--benchmark-no-analysis</pre>

> [Introduced](https://github.com/catchorg/Catch2/issues/1616) in Catch 2.9.0.

When this flag is specified, no bootstrapping or any other statistical analysis is performed.
Instead, the user code is only measured and the plain mean of the samples is reported.

<a id="benchmark-warmup-time"></a>
## Specify the amount of time in milliseconds spent on warming up each test
<pre>--benchmark-warmup-time</pre>

> [Introduced](https://github.com/catchorg/Catch2/pull/1844) in Catch 2.11.2.

Configure the amount of time spent warming up each test.

<a id="usage"></a>
## Usage
<pre>-h, -?, --help</pre>

Prints a summary of all the available command line arguments to stdout.

<a id="run-section"></a>
## Specify the section to run
<pre>-c, --section &lt;section name&gt;</pre>

To limit execution to a specific section within a test case, use this option one or more times.
To narrow to sub-sections use multiple instances, where each subsequent instance specifies a deeper nesting level.

E.g. if you have:

<pre>
TEST_CASE( "Test" ) {
  SECTION( "sa" ) {
    SECTION( "sb" ) {
      /*...*/
    }
    SECTION( "sc" ) {
      /*...*/
    }
  }
  SECTION( "sd" ) {
    /*...*/
  }
}
</pre>

Then you can run `sb` with:
<pre>./MyExe Test -c sa -c sb</pre>

Or run just `sd` with:
<pre>./MyExe Test -c sd</pre>

To run all of `sa`, including `sb` and `sc`, use:
<pre>./MyExe Test -c sa</pre>

There are some limitations of this feature to be aware of:
- Code outside of the sections being skipped will still be executed - e.g. any set-up code in the TEST_CASE before the
start of the first section.
- At the time of writing, wildcards are not supported in section names.
- If you specify a section without narrowing to a test case first then all test cases will be executed
(but only matching sections within them).

<a id="filenames-as-tags"></a>
## Filenames as tags
<pre>-#, --filenames-as-tags</pre>

When this option is used, every test is given an additional tag formed from the unqualified
filename it is found in, with any extension stripped, prefixed with the `#` character.

So, for example, tests within the file `~\Dev\MyProject\Ferrets.cpp` would be tagged `[#Ferrets]`.

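For example, assuming a test binary named `./tests` (the name is illustrative), you can then select just the tests from that file - note that `-#` must be passed so the filename tags exist to be matched:

<pre>./tests -# "[#Ferrets]"</pre>
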
<a id="use-colour"></a>
## Override output colouring
<pre>--use-colour &lt;yes|no|auto&gt;</pre>

Catch colours output for terminals, but omits colouring when it detects that
output is being sent to a pipe. This is done to avoid interfering with automated
processing of output.

`--use-colour yes` forces coloured output, `--use-colour no` disables coloured
output. The default behaviour is `--use-colour auto`.

---

[Home](Readme.md#top)