# Advanced googletest Topics

## Introduction

Now that you have read the [googletest Primer](primer.md) and learned how to
write tests using googletest, it's time to learn some new tricks. This document
will show you more assertions as well as how to construct complex failure
messages, propagate fatal failures, reuse and speed up your test fixtures, and
use various flags with your tests.

## More Assertions

This section covers some less frequently used, but still significant,
assertions.

### Explicit Success and Failure

See [Explicit Success and Failure](reference/assertions.md#success-failure) in
the Assertions Reference.

### Exception Assertions

See [Exception Assertions](reference/assertions.md#exceptions) in the Assertions
Reference.

### Predicate Assertions for Better Error Messages

Even though googletest has a rich set of assertions, they can never be complete,
as it's neither possible nor a good idea to anticipate all the scenarios a user
might run into. Therefore, sometimes a user has to use `EXPECT_TRUE()` to check
a complex expression, for lack of a better macro. The problem is that this
doesn't show you the values of the parts of the expression, making it hard to
understand what went wrong. As a workaround, some users choose to construct the
failure message themselves and stream it into `EXPECT_TRUE()`. However, this is
awkward, especially when the expression has side effects or is expensive to
evaluate.

googletest gives you three different options to solve this problem:

#### Using an Existing Boolean Function

If you already have a function or functor that returns `bool` (or a type that
can be implicitly converted to `bool`), you can use it in a *predicate
assertion* to get the function arguments printed for free. See
[`EXPECT_PRED*`](reference/assertions.md#EXPECT_PRED) in the Assertions
Reference for details.
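
As an illustration, a plain `bool` predicate like the following could be used
with `EXPECT_PRED2` (the function and values here are hypothetical, not part of
googletest):

```cpp
#include <cstdlib>

// Hypothetical predicate: returns true if m and n share no common divisor
// greater than 1 (computed with the Euclidean algorithm).
bool MutuallyPrime(int m, int n) {
  m = std::abs(m);
  n = std::abs(n);
  while (n != 0) {
    int t = m % n;
    m = n;
    n = t;
  }
  return m == 1;
}

// In a test, EXPECT_PRED2(MutuallyPrime, b, c) would print the values of
// both arguments on failure, e.g.:
//   MutuallyPrime(b, c) is false, where
//   b is 4
//   c is 10
```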

#### Using a Function That Returns an AssertionResult

While `EXPECT_PRED*()` and friends are handy for a quick job, the syntax is not
satisfactory: you have to use different macros for different arities, and it
feels more like Lisp than C++. The `::testing::AssertionResult` class solves
this problem.

An `AssertionResult` object represents the result of an assertion (whether it's
a success or a failure, and an associated message). You can create an
`AssertionResult` using one of these factory functions:

```c++
namespace testing {

// Returns an AssertionResult object to indicate that an assertion has
// succeeded.
AssertionResult AssertionSuccess();

// Returns an AssertionResult object to indicate that an assertion has
// failed.
AssertionResult AssertionFailure();

}  // namespace testing
```

You can then use the `<<` operator to stream messages to the `AssertionResult`
object.

To provide more readable messages in Boolean assertions (e.g. `EXPECT_TRUE()`),
write a predicate function that returns `AssertionResult` instead of `bool`. For
example, if you define `IsEven()` as:

```c++
testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return testing::AssertionSuccess();
  else
    return testing::AssertionFailure() << n << " is odd";
}
```

instead of:

```c++
bool IsEven(int n) {
  return (n % 2) == 0;
}
```

the failed assertion `EXPECT_TRUE(IsEven(Fib(4)))` will print:

```none
Value of: IsEven(Fib(4))
  Actual: false (3 is odd)
Expected: true
```

instead of a more opaque

```none
Value of: IsEven(Fib(4))
  Actual: false
Expected: true
```

If you want informative messages in `EXPECT_FALSE` and `ASSERT_FALSE` as well
(one third of Boolean assertions in the Google code base are negative ones), and
are fine with making the predicate slower in the success case, you can supply a
success message:

```c++
testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return testing::AssertionSuccess() << n << " is even";
  else
    return testing::AssertionFailure() << n << " is odd";
}
```

Then the statement `EXPECT_FALSE(IsEven(Fib(6)))` will print

```none
  Value of: IsEven(Fib(6))
     Actual: true (8 is even)
  Expected: false
```

#### Using a Predicate-Formatter

If you find the default message generated by
[`EXPECT_PRED*`](reference/assertions.md#EXPECT_PRED) and
[`EXPECT_TRUE`](reference/assertions.md#EXPECT_TRUE) unsatisfactory, or some
arguments to your predicate do not support streaming to `ostream`, you can
instead use *predicate-formatter assertions* to *fully* customize how the
message is formatted. See
[`EXPECT_PRED_FORMAT*`](reference/assertions.md#EXPECT_PRED_FORMAT) in the
Assertions Reference for details.

### Floating-Point Comparison

See [Floating-Point Comparison](reference/assertions.md#floating-point) in the
Assertions Reference.

#### Floating-Point Predicate-Format Functions

Some floating-point operations are useful, but not that often used. In order to
avoid an explosion of new macros, we provide them as predicate-format functions
that can be used in the predicate assertion macro
[`EXPECT_PRED_FORMAT2`](reference/assertions.md#EXPECT_PRED_FORMAT), for
example:

```c++
using ::testing::FloatLE;
using ::testing::DoubleLE;
...
EXPECT_PRED_FORMAT2(FloatLE, val1, val2);
EXPECT_PRED_FORMAT2(DoubleLE, val1, val2);
```

The above code verifies that `val1` is less than, or approximately equal to,
`val2`.

### Asserting Using gMock Matchers

See [`EXPECT_THAT`](reference/assertions.md#EXPECT_THAT) in the Assertions
Reference.

### More String Assertions

(Please read the [previous](#asserting-using-gmock-matchers) section first if
you haven't.)

You can use the gMock [string matchers](reference/matchers.md#string-matchers)
with [`EXPECT_THAT`](reference/assertions.md#EXPECT_THAT) to do more string
comparison tricks (sub-string, prefix, suffix, regular expression, etc.). For
example,

```c++
using ::testing::HasSubstr;
using ::testing::MatchesRegex;
...
  ASSERT_THAT(foo_string, HasSubstr("needle"));
  EXPECT_THAT(bar_string, MatchesRegex("\\w*\\d+"));
```

### Windows HRESULT assertions

See [Windows HRESULT Assertions](reference/assertions.md#HRESULT) in the
Assertions Reference.

### Type Assertions

You can call the function

```c++
::testing::StaticAssertTypeEq<T1, T2>();
```

to assert that types `T1` and `T2` are the same. The function does nothing if
the assertion is satisfied. If the types are different, the function call will
fail to compile, and the compiler error message will say that `T1 and T2 are
not the same type` and most likely (depending on the compiler) show you the
actual values of `T1` and `T2`. This is mainly useful inside template code.

**Caveat**: When used inside a member function of a class template or a function
template, `StaticAssertTypeEq<T1, T2>()` is effective only if the function is
instantiated. For example, given:

```c++
template <typename T> class Foo {
 public:
  void Bar() { testing::StaticAssertTypeEq<int, T>(); }
};
```

the code:

```c++
void Test1() { Foo<bool> foo; }
```

will not generate a compiler error, as `Foo<bool>::Bar()` is never actually
instantiated. Instead, you need:

```c++
void Test2() { Foo<bool> foo; foo.Bar(); }
```

to cause a compiler error.

### Assertion Placement

You can use assertions in any C++ function. In particular, it doesn't have to be
a method of the test fixture class. The one constraint is that assertions that
generate a fatal failure (`FAIL*` and `ASSERT_*`) can only be used in
void-returning functions. This is a consequence of Google's not using
exceptions. If you place one in a non-void function, you'll get a confusing
compile error like `"error: void value not ignored as it ought to be"`, `"cannot
initialize return object of type 'bool' with an rvalue of type 'void'"`, or
`"error: no viable conversion from 'void' to 'string'"`.

If you need to use fatal assertions in a function that returns non-void, one
option is to make the function return the value in an out parameter instead. For
example, you can rewrite `T2 Foo(T1 x)` to `void Foo(T1 x, T2* result)`. You
need to make sure that `*result` contains some sensible value even when the
function returns prematurely. As the function now returns `void`, you can use
any assertion inside of it.
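
As a sketch (the helper function and its behavior here are hypothetical), a
helper that used to return its result can be restructured like this so that
fatal assertions become legal inside it:

```cpp
#include <cstdlib>
#include <string>

// Before: int ParseDecimal(const std::string& s);  -- cannot use ASSERT_*.
// After: void-returning, with the result delivered via an out parameter.
void ParseDecimal(const std::string& s, int* result) {
  *result = 0;  // Give *result a sensible value even if we return early.
  // In a real test helper this would be a fatal assertion, e.g.:
  //   ASSERT_FALSE(s.empty()) << "cannot parse an empty string";
  if (s.empty()) return;  // Stands in for the early return ASSERT_* performs.
  *result = std::atoi(s.c_str());
}
```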

If changing the function's type is not an option, you should just use assertions
that generate non-fatal failures, such as `ADD_FAILURE*` and `EXPECT_*`.

{: .callout .note}
NOTE: Constructors and destructors are not considered void-returning functions,
according to the C++ language specification, and so you may not use fatal
assertions in them; you'll get a compilation error if you try. Instead, either
call `abort` and crash the entire test executable, or put the fatal assertion in
a `SetUp`/`TearDown` function; see
[constructor/destructor vs. `SetUp`/`TearDown`](faq.md#CtorVsSetUp).

{: .callout .warning}
WARNING: A fatal assertion in a helper function (private void-returning method)
called from a constructor or destructor does not terminate the current test, as
your intuition might suggest: it merely returns from the constructor or
destructor early, possibly leaving your object in a partially-constructed or
partially-destructed state! You almost certainly want to `abort` or use
`SetUp`/`TearDown` instead.

## Skipping test execution

Related to the assertions `SUCCEED()` and `FAIL()`, you can prevent further test
execution at runtime with the `GTEST_SKIP()` macro. This is useful when you need
to check for preconditions of the system under test during runtime and skip
tests in a meaningful way.

`GTEST_SKIP()` can be used in individual test cases or in the `SetUp()` methods
of classes derived from either `::testing::Environment` or `::testing::Test`.
For example:

```c++
TEST(SkipTest, DoesSkip) {
  GTEST_SKIP() << "Skipping single test";
  EXPECT_EQ(0, 1);  // Won't fail; it won't be executed
}

class SkipFixture : public ::testing::Test {
 protected:
  void SetUp() override {
    GTEST_SKIP() << "Skipping all tests for this fixture";
  }
};

// Tests for SkipFixture won't be executed.
TEST_F(SkipFixture, SkipsOneTest) {
  EXPECT_EQ(5, 7);  // Won't fail
}
```

As with assertion macros, you can stream a custom message into `GTEST_SKIP()`.

## Teaching googletest How to Print Your Values

When a test assertion such as `EXPECT_EQ` fails, googletest prints the argument
values to help you debug. It does this using a user-extensible value printer.

This printer knows how to print built-in C++ types, native arrays, STL
containers, and any type that supports the `<<` operator. For other types, it
prints the raw bytes in the value and hopes that you, the user, can figure it
out.

As mentioned earlier, the printer is *extensible*. That means you can teach it
to do a better job at printing your particular type than dumping the bytes. To
do that, define `<<` for your type:

```c++
#include <ostream>

namespace foo {

class Bar {  // We want googletest to be able to print instances of this.
...
  // Create a free inline friend function.
  friend std::ostream& operator<<(std::ostream& os, const Bar& bar) {
    return os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class, it's important that the
// << operator is defined in the SAME namespace that defines Bar.  C++'s look-up
// rules rely on that.
std::ostream& operator<<(std::ostream& os, const Bar& bar) {
  return os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```

Sometimes, this might not be an option: your team may consider it bad style to
have a `<<` operator for `Bar`, or `Bar` may already have a `<<` operator that
doesn't do what you want (and you cannot change it). If so, you can instead
define a `PrintTo()` function like this:

```c++
#include <ostream>

namespace foo {

class Bar {
  ...
  friend void PrintTo(const Bar& bar, std::ostream* os) {
    *os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class, it's important that PrintTo()
// is defined in the SAME namespace that defines Bar.  C++'s look-up rules rely
// on that.
void PrintTo(const Bar& bar, std::ostream* os) {
  *os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```

If you have defined both `<<` and `PrintTo()`, googletest will use `PrintTo()`.
This allows you to customize how the value appears in googletest's output
without affecting code that relies on the behavior of its `<<` operator.

If you want to print a value `x` using googletest's value printer yourself, just
call `::testing::PrintToString(x)`, which returns an `std::string`:

```c++
std::vector<std::pair<Bar, int>> bar_ints = GetBarIntVector();

EXPECT_TRUE(IsCorrectBarIntVector(bar_ints))
    << "bar_ints = " << testing::PrintToString(bar_ints);
```

## Death Tests

In many applications, there are assertions that can cause application failure if
a condition is not met. These consistency checks, which ensure that the program
is in a known good state, are there to fail at the earliest possible time after
some program state is corrupted. If the assertion checks the wrong condition,
then the program may proceed in an erroneous state, which could lead to memory
corruption, security holes, or worse. Hence it is vitally important to test that
such assertion statements work as expected.

Since these precondition checks cause the process to die, we call such tests
_death tests_. More generally, any test that checks that a program terminates
(except by throwing an exception) in an expected fashion is also a death test.

Note that if a piece of code throws an exception, we don't consider it "death"
for the purpose of death tests, as the caller of the code could catch the
exception and avoid the crash. If you want to verify exceptions thrown by your
code, see [Exception Assertions](#ExceptionAssertions).

If you want to test `EXPECT_*()/ASSERT_*()` failures in your test code, see
["Catching" Failures](#catching-failures).

### How to Write a Death Test

GoogleTest provides assertion macros to support death tests. See
[Death Assertions](reference/assertions.md#death) in the Assertions Reference
for details.

To write a death test, simply use one of the macros inside your test function.
For example,

```c++
TEST(MyDeathTest, Foo) {
  // This death test uses a compound statement.
  ASSERT_DEATH({
    int n = 5;
    Foo(&n);
  }, "Error on line .* of Foo()");
}

TEST(MyDeathTest, NormalExit) {
  EXPECT_EXIT(NormalExit(), testing::ExitedWithCode(0), "Success");
}

TEST(MyDeathTest, KillProcess) {
  EXPECT_EXIT(KillProcess(), testing::KilledBySignal(SIGKILL),
              "Sending myself unblockable signal");
}
```

verifies that:

*   calling `Foo()` with `n` equal to 5 causes the process to die with the
    given error message,
*   calling `NormalExit()` causes the process to print `"Success"` to stderr and
    exit with exit code 0, and
*   calling `KillProcess()` kills the process with signal `SIGKILL`.

The test function body may contain other assertions and statements as well, if
necessary.

Note that a death test only cares about three things:

1.  does `statement` abort or exit the process?
2.  (in the case of `ASSERT_EXIT` and `EXPECT_EXIT`) does the exit status
    satisfy `predicate`? Or (in the case of `ASSERT_DEATH` and `EXPECT_DEATH`)
    is the exit status non-zero? And
3.  does the stderr output match `matcher`?

In particular, if `statement` generates an `ASSERT_*` or `EXPECT_*` failure, it
will **not** cause the death test to fail, as googletest assertions don't abort
the process.

### Death Test Naming

{: .callout .important}
IMPORTANT: We strongly recommend you follow the convention of naming your
**test suite** (not test) `*DeathTest` when it contains a death test, as
demonstrated in the above example. The
[Death Tests And Threads](#death-tests-and-threads) section below explains why.

If a test fixture class is shared by normal tests and death tests, you can use
`using` or `typedef` to introduce an alias for the fixture class and avoid
duplicating its code:

```c++
class FooTest : public testing::Test { ... };

using FooDeathTest = FooTest;

TEST_F(FooTest, DoesThis) {
  // normal test
}

TEST_F(FooDeathTest, DoesThat) {
  // death test
}
```

### Regular Expression Syntax

On POSIX systems (e.g. Linux, Cygwin, and Mac), googletest uses the
[POSIX extended regular expression](http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap09.html#tag_09_04)
syntax. To learn about this syntax, you may want to read this
[Wikipedia entry](http://en.wikipedia.org/wiki/Regular_expression#POSIX_Extended_Regular_Expressions).

On Windows, googletest uses its own simple regular expression implementation. It
lacks many features. For example, we don't support union (`"x|y"`), grouping
(`"(xy)"`), brackets (`"[xy]"`), and repetition count (`"x{5,7}"`), among
others. Below is what we do support (`A` denotes a literal character, period
(`.`), or a single `\\` escape sequence; `x` and `y` denote regular
expressions.):

Expression | Meaning
---------- | --------------------------------------------------------------
`c`        | matches any literal character `c`
`\\d`      | matches any decimal digit
`\\D`      | matches any character that's not a decimal digit
`\\f`      | matches `\f`
`\\n`      | matches `\n`
`\\r`      | matches `\r`
`\\s`      | matches any ASCII whitespace, including `\n`
`\\S`      | matches any character that's not a whitespace
`\\t`      | matches `\t`
`\\v`      | matches `\v`
`\\w`      | matches any letter, `_`, or decimal digit
`\\W`      | matches any character that `\\w` doesn't match
`\\c`      | matches any literal character `c`, which must be a punctuation
`.`        | matches any single character except `\n`
`A?`       | matches 0 or 1 occurrences of `A`
`A*`       | matches 0 or many occurrences of `A`
`A+`       | matches 1 or many occurrences of `A`
`^`        | matches the beginning of a string (not that of each line)
`$`        | matches the end of a string (not that of each line)
`xy`       | matches `x` followed by `y`

To help you determine which capability is available on your system, googletest
defines macros to govern which regular expression it is using. The macros are:
`GTEST_USES_SIMPLE_RE=1` or `GTEST_USES_POSIX_RE=1`. If you want your death
tests to work in all cases, you can either `#if` on these macros or use the more
limited syntax only.

### How It Works

See [Death Assertions](reference/assertions.md#death) in the Assertions
Reference.

### Death Tests And Threads

The reason for the two death test styles has to do with thread safety. Due to
well-known problems with forking in the presence of threads, death tests should
be run in a single-threaded context. Sometimes, however, it isn't feasible to
arrange that kind of environment. For example, statically-initialized modules
may start threads before `main()` is ever reached. Once threads have been
created, it may be difficult or impossible to clean them up.

googletest has three features intended to raise awareness of threading issues.

1.  A warning is emitted if multiple threads are running when a death test is
    encountered.
2.  Test suites with a name ending in "DeathTest" are run before all other
    tests.
3.  It uses `clone()` instead of `fork()` to spawn the child process on Linux
    (`clone()` is not available on Cygwin and Mac), as `fork()` is more likely
    to cause the child to hang when the parent process has multiple threads.

It's perfectly fine to create threads inside a death test statement; they are
executed in a separate process and cannot affect the parent.

### Death Test Styles

The "threadsafe" death test style was introduced in order to help mitigate the
risks of testing in a possibly multithreaded environment. It trades increased
test execution time (potentially dramatically so) for improved thread safety.

The automated testing framework does not set the style flag. You can choose a
particular style of death tests by setting the flag programmatically:

```c++
GTEST_FLAG_SET(death_test_style, "threadsafe");
```

You can do this in `main()` to set the style for all death tests in the binary,
or in individual tests. Recall that flags are saved before running each test and
restored afterwards, so you need not do that yourself. For example:

```c++
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  GTEST_FLAG_SET(death_test_style, "fast");
  return RUN_ALL_TESTS();
}

TEST(MyDeathTest, TestOne) {
  GTEST_FLAG_SET(death_test_style, "threadsafe");
  // This test is run in the "threadsafe" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}

TEST(MyDeathTest, TestTwo) {
  // This test is run in the "fast" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}
```

### Caveats

The `statement` argument of `ASSERT_EXIT()` can be any valid C++ statement. If
it leaves the current function via a `return` statement or by throwing an
exception, the death test is considered to have failed. Some googletest macros
may return from the current function (e.g. `ASSERT_TRUE()`), so be sure to avoid
them in `statement`.

Since `statement` runs in the child process, any in-memory side effect (e.g.
modifying a variable, releasing memory, etc.) it causes will *not* be observable
in the parent process. In particular, if you release memory in a death test,
your program will fail the heap check, as the parent process will never see the
memory reclaimed. To solve this problem, you can

1.  try not to free memory in a death test;
2.  free the memory again in the parent process; or
3.  not use the heap checker in your program.

Due to an implementation detail, you cannot place multiple death test assertions
on the same line; otherwise, compilation will fail with an unobvious error
message.

Despite the improved thread safety afforded by the "threadsafe" style of death
test, thread problems such as deadlock are still possible in the presence of
handlers registered with `pthread_atfork(3)`.

## Using Assertions in Sub-routines

{: .callout .note}
Note: If you want to put a series of test assertions in a subroutine to check
for a complex condition, consider using
[a custom GMock matcher](gmock_cook_book.md#NewMatchers) instead. This lets you
provide a more readable error message in case of failure and avoid all of the
issues described below.

### Adding Traces to Assertions

If a test sub-routine is called from several places, when an assertion inside it
fails, it can be hard to tell which invocation of the sub-routine the failure is
from. You can alleviate this problem using extra logging or custom failure
messages, but that usually clutters up your tests. A better solution is to use
the `SCOPED_TRACE` macro or the `ScopedTrace` utility:

```c++
SCOPED_TRACE(message);
```

```c++
ScopedTrace trace("file_path", line_number, message);
```

where `message` can be anything streamable to `std::ostream`. The
`SCOPED_TRACE` macro causes the current file name, line number, and the given
message to be added to every failure message. `ScopedTrace` accepts an explicit
file name and line number as arguments, which is useful for writing test
helpers. The effect will be undone when control leaves the current lexical
scope.

For example,

```c++
10: void Sub1(int n) {
11:   EXPECT_EQ(Bar(n), 1);
12:   EXPECT_EQ(Bar(n + 1), 2);
13: }
14:
15: TEST(FooTest, Bar) {
16:   {
17:     SCOPED_TRACE("A");  // This trace point will be included in
18:                         // every failure in this scope.
19:     Sub1(1);
20:   }
21:   // Now it won't.
22:   Sub1(9);
23: }
```

could result in messages like these:

```none
path/to/foo_test.cc:11: Failure
Value of: Bar(n)
Expected: 1
  Actual: 2
Google Test trace:
path/to/foo_test.cc:17: A

path/to/foo_test.cc:12: Failure
Value of: Bar(n + 1)
Expected: 2
  Actual: 3
```

Without the trace, it would've been difficult to know which invocation of
`Sub1()` the two failures come from respectively. (You could add an extra
message to each assertion in `Sub1()` to indicate the value of `n`, but that's
tedious.)

Some tips on using `SCOPED_TRACE`:

1.  With a suitable message, it's often enough to use `SCOPED_TRACE` at the
    beginning of a sub-routine, instead of at each call site.
2.  When calling sub-routines inside a loop, make the loop iterator part of the
    message in `SCOPED_TRACE` so that you can tell which iteration the failure
    is from.
3.  Sometimes the line number of the trace point is enough for identifying the
    particular invocation of a sub-routine. In this case, you don't have to
    choose a unique message for `SCOPED_TRACE`. You can simply use `""`.
4.  You can use `SCOPED_TRACE` in an inner scope when there is one in the outer
    scope. In this case, all active trace points will be included in the failure
    messages, in the reverse order in which they are encountered.
5.  The trace dump is clickable in Emacs - hit `return` on a line number and
    you'll be taken to that line in the source file!

### Propagating Fatal Failures

A common pitfall when using `ASSERT_*` and `FAIL*` is not understanding that
when they fail they only abort the _current function_, not the entire test. For
example, the following test will segfault:

```c++
void Subroutine() {
  // Generates a fatal failure and aborts the current function.
  ASSERT_EQ(1, 2);

  // The following won't be executed.
  ...
}

TEST(FooTest, Bar) {
  Subroutine();  // The intended behavior is for the fatal failure
                 // in Subroutine() to abort the entire test.

  // The actual behavior: the function goes on after Subroutine() returns.
  int* p = nullptr;
  *p = 3;  // Segfault!
}
```

To alleviate this, googletest provides three different solutions: exceptions,
the `(ASSERT|EXPECT)_NO_FATAL_FAILURE` assertions, and the `HasFatalFailure()`
function. They are described in the following three subsections.

#### Asserting on Subroutines with an exception

The following code turns a fatal assertion failure into an exception:

```c++
class ThrowListener : public testing::EmptyTestEventListener {
  void OnTestPartResult(const testing::TestPartResult& result) override {
    if (result.type() == testing::TestPartResult::kFatalFailure) {
      throw testing::AssertionException(result);
    }
  }
};
int main(int argc, char** argv) {
  ...
  testing::UnitTest::GetInstance()->listeners().Append(new ThrowListener);
  return RUN_ALL_TESTS();
}
```

This listener should be added after other listeners if you have any; otherwise
they won't see the failed `OnTestPartResult` events.

#### Asserting on Subroutines

As shown above, if your test calls a subroutine that has an `ASSERT_*` failure
in it, the test will continue after the subroutine returns. This may not be what
you want.

Often people want fatal failures to propagate like exceptions. For that
googletest offers the following macros:

Fatal assertion                       | Nonfatal assertion                    | Verifies
------------------------------------- | ------------------------------------- | --------
`ASSERT_NO_FATAL_FAILURE(statement);` | `EXPECT_NO_FATAL_FAILURE(statement);` | `statement` doesn't generate any new fatal failures in the current thread.

Only failures in the thread that executes the assertion are checked to determine
the result of this type of assertion. If `statement` creates new threads,
failures in these threads are ignored.

Examples:

```c++
ASSERT_NO_FATAL_FAILURE(Foo());

int i;
EXPECT_NO_FATAL_FAILURE({
  i = Bar();
});
```

Assertions from multiple threads are currently not supported on Windows.

#### Checking for Failures in the Current Test

`HasFatalFailure()` in the `::testing::Test` class returns `true` if an
assertion in the current test has suffered a fatal failure. This allows
functions to catch fatal failures in a sub-routine and return early.

```c++
class Test {
 public:
  ...
  static bool HasFatalFailure();
};
```

The typical usage, which basically simulates the behavior of a thrown exception,
is:

```c++
TEST(FooTest, Bar) {
  Subroutine();
  // Aborts if Subroutine() had a fatal failure.
  if (HasFatalFailure()) return;

  // The following won't be executed.
  ...
}
```

If `HasFatalFailure()` is used outside of `TEST()`, `TEST_F()`, or a test
fixture, you must add the `::testing::Test::` prefix, as in:

```c++
if (testing::Test::HasFatalFailure()) return;
```

Similarly, `HasNonfatalFailure()` returns `true` if the current test has at
least one non-fatal failure, and `HasFailure()` returns `true` if the current
test has at least one failure of either kind.

## Logging Additional Information

In your test code, you can call `RecordProperty("key", value)` to log additional
information, where `value` can be either a string or an `int`. The *last* value
recorded for a key will be emitted to the
[XML output](#generating-an-xml-report) if you specify one. For example, the
test

```c++
TEST_F(WidgetUsageTest, MinAndMaxWidgets) {
  RecordProperty("MaximumWidgets", ComputeMaxUsage());
  RecordProperty("MinimumWidgets", ComputeMinUsage());
}
```

will output XML like this:

```xml
  ...
    <testcase name="MinAndMaxWidgets" file="test.cpp" line="1" status="run" time="0.006" classname="WidgetUsageTest" MaximumWidgets="12" MinimumWidgets="9" />
  ...
```

{: .callout .note}
> NOTE:
>
> *   `RecordProperty()` is a static member of the `Test` class. Therefore it
>     needs to be prefixed with `::testing::Test::` if used outside of the
>     `TEST` body and the test fixture class.
> *   *`key`* must be a valid XML attribute name, and cannot conflict with the
>     ones already used by googletest (`name`, `status`, `time`, `classname`,
>     `type_param`, and `value_param`).
> *   Calling `RecordProperty()` outside of the lifespan of a test is allowed.
>     If it's called outside of a test but between a test suite's
>     `SetUpTestSuite()` and `TearDownTestSuite()` methods, it will be
>     attributed to the XML element for the test suite. If it's called outside
>     of all test suites (e.g. in a test environment), it will be attributed to
>     the top-level XML element.

## Sharing Resources Between Tests in the Same Test Suite

googletest creates a new test fixture object for each test in order to make
tests independent and easier to debug. However, sometimes tests use resources
that are expensive to set up, making the one-copy-per-test model prohibitively
expensive.

If the tests don't change the resource, there's no harm in their sharing a
single resource copy. So, in addition to per-test set-up/tear-down, googletest
also supports per-test-suite set-up/tear-down. To use it:

1.  In your test fixture class (say `FooTest`), declare as `static` some member
    variables to hold the shared resources.
2.  Outside your test fixture class (typically just below it), define those
    member variables, optionally giving them initial values.
3.  In the same test fixture class, define a `static void SetUpTestSuite()`
    function (remember not to spell it as **`SetupTestSuite`** with a small
    `u`!) to set up the shared resources and a `static void TearDownTestSuite()`
    function to tear them down.

That's it! googletest automatically calls `SetUpTestSuite()` before running the
*first test* in the `FooTest` test suite (i.e. before creating the first
`FooTest` object), and calls `TearDownTestSuite()` after running the *last test*
in it (i.e. after deleting the last `FooTest` object). In between, the tests can
use the shared resources.

Remember that the test order is undefined, so your code can't depend on a test
preceding or following another. Also, the tests must either not modify the state
of any shared resource, or, if they do modify the state, they must restore the
state to its original value before passing control to the next test.

Note that `SetUpTestSuite()` may be called multiple times for a test fixture
class that has derived classes, so you should not expect code in the function
body to be run only once. Also, derived classes still have access to shared
resources defined as static members, so careful consideration is needed when
managing shared resources to avoid memory leaks.

Here's an example of per-test-suite set-up and tear-down:

```c++
class FooTest : public testing::Test {
 protected:
  // Per-test-suite set-up.
  // Called before the first test in this test suite.
  // Can be omitted if not needed.
  static void SetUpTestSuite() {
    // Avoid reallocating static objects if called in subclasses of FooTest.
    if (shared_resource_ == nullptr) {
      shared_resource_ = new ...;
    }
  }

  // Per-test-suite tear-down.
  // Called after the last test in this test suite.
  // Can be omitted if not needed.
  static void TearDownTestSuite() {
    delete shared_resource_;
    shared_resource_ = nullptr;
  }

  // You can define per-test set-up logic as usual.
  void SetUp() override { ... }

  // You can define per-test tear-down logic as usual.
  void TearDown() override { ... }

  // Some expensive resource shared by all tests.
  static T* shared_resource_;
};

T* FooTest::shared_resource_ = nullptr;

TEST_F(FooTest, Test1) {
  ... you can refer to shared_resource_ here ...
}

TEST_F(FooTest, Test2) {
  ... you can refer to shared_resource_ here ...
}
```

{: .callout .note}
NOTE: Though the above code declares `SetUpTestSuite()` protected, it may
sometimes be necessary to declare it public, such as when using it with
`TEST_P`.

## Global Set-Up and Tear-Down

Just as you can do set-up and tear-down at the test level and the test suite
level, you can also do it at the test program level. Here's how.

First, you subclass the `::testing::Environment` class to define a test
environment, which knows how to set up and tear down:

```c++
class Environment : public ::testing::Environment {
 public:
  ~Environment() override {}

  // Override this to define how to set up the environment.
  void SetUp() override {}

  // Override this to define how to tear down the environment.
  void TearDown() override {}
};
```

Then, you register an instance of your environment class with googletest by
calling the `::testing::AddGlobalTestEnvironment()` function:

```c++
Environment* AddGlobalTestEnvironment(Environment* env);
```

Now, when `RUN_ALL_TESTS()` is called, it first calls the `SetUp()` method of
each environment object, then runs the tests if none of the environments
reported fatal failures and `GTEST_SKIP()` was not called. `RUN_ALL_TESTS()`
always calls `TearDown()` with each environment object, regardless of whether or
not the tests were run.

It's OK to register multiple environment objects. In this case, their `SetUp()`
will be called in the order they are registered, and their `TearDown()` will be
called in the reverse order.

Note that googletest takes ownership of the registered environment objects.
Therefore **do not delete them** by yourself.

You should call `AddGlobalTestEnvironment()` before `RUN_ALL_TESTS()` is called,
probably in `main()`. If you use `gtest_main`, you need to call this before
`main()` starts for it to take effect. One way to do this is to define a global
variable like this:

```c++
testing::Environment* const foo_env =
    testing::AddGlobalTestEnvironment(new FooEnvironment);
```

However, we strongly recommend writing your own `main()` and calling
`AddGlobalTestEnvironment()` there, as relying on the initialization of global
variables makes the code harder to read and may cause problems when you register
multiple environments from different translation units and the environments have
dependencies among them (remember that the compiler doesn't guarantee the order
in which global variables from different translation units are initialized).

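A minimal `main()` along those lines might look like the sketch below;
`FooEnvironment` is a hypothetical `testing::Environment` subclass standing in
for your own:

```cpp
#include "gtest/gtest.h"

class FooEnvironment : public testing::Environment {
 public:
  void SetUp() override { /* acquire process-wide resources */ }
  void TearDown() override { /* release them */ }
};

int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  // googletest takes ownership of the pointer; do not delete it.
  testing::AddGlobalTestEnvironment(new FooEnvironment);
  return RUN_ALL_TESTS();
}
```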
## Value-Parameterized Tests

*Value-parameterized tests* allow you to test your code with different
parameters without writing multiple copies of the same test. This is useful in a
number of situations, for example:

*   You have a piece of code whose behavior is affected by one or more
    command-line flags. You want to make sure your code performs correctly for
    various values of those flags.
*   You want to test different implementations of an OO interface.
*   You want to test your code over various inputs (a.k.a. data-driven testing).
    This feature is easy to abuse, so please exercise your good sense when doing
    it!

### How to Write Value-Parameterized Tests

To write value-parameterized tests, first you should define a fixture class. It
must be derived from both `testing::Test` and `testing::WithParamInterface<T>`
(the latter is a pure interface), where `T` is the type of your parameter
values. For convenience, you can just derive the fixture class from
`testing::TestWithParam<T>`, which itself is derived from both `testing::Test`
and `testing::WithParamInterface<T>`. `T` can be any copyable type. If it's a
raw pointer, you are responsible for managing the lifespan of the pointed
values.

{: .callout .note}
NOTE: If your test fixture defines `SetUpTestSuite()` or `TearDownTestSuite()`
they must be declared **public** rather than **protected** in order to use
`TEST_P`.

```c++
class FooTest :
    public testing::TestWithParam<const char*> {
  // You can implement all the usual fixture class members here.
  // To access the test parameter, call GetParam() from class
  // TestWithParam<T>.
};

// Or, when you want to add parameters to a pre-existing fixture class:
class BaseTest : public testing::Test {
  ...
};
class BarTest : public BaseTest,
                public testing::WithParamInterface<const char*> {
  ...
};
```

Then, use the `TEST_P` macro to define as many test patterns using this fixture
as you want. The `_P` suffix is for "parameterized" or "pattern", whichever you
prefer to think of.

```c++
TEST_P(FooTest, DoesBlah) {
  // Inside a test, access the test parameter with the GetParam() method
  // of the TestWithParam<T> class:
  EXPECT_TRUE(foo.Blah(GetParam()));
  ...
}

TEST_P(FooTest, HasBlahBlah) {
  ...
}
```

Finally, you can use the `INSTANTIATE_TEST_SUITE_P` macro to instantiate the
test suite with any set of parameters you want. GoogleTest defines a number of
functions for generating test parameters—see details at
[`INSTANTIATE_TEST_SUITE_P`](reference/testing.md#INSTANTIATE_TEST_SUITE_P) in
the Testing Reference.

For example, the following statement will instantiate tests from the `FooTest`
test suite each with parameter values `"meeny"`, `"miny"`, and `"moe"` using the
[`Values`](reference/testing.md#param-generators) parameter generator:

```c++
INSTANTIATE_TEST_SUITE_P(MeenyMinyMoe,
                         FooTest,
                         testing::Values("meeny", "miny", "moe"));
```

{: .callout .note}
NOTE: The code above must be placed at global or namespace scope, not at
function scope.

The first argument to `INSTANTIATE_TEST_SUITE_P` is a unique name for the
instantiation of the test suite. The next argument is the name of the test
pattern, and the last is the
[parameter generator](reference/testing.md#param-generators).

You can instantiate a test pattern more than once, so to distinguish different
instances of the pattern, the instantiation name is added as a prefix to the
actual test suite name. Remember to pick unique prefixes for different
instantiations. The tests from the instantiation above will have these names:

*   `MeenyMinyMoe/FooTest.DoesBlah/0` for `"meeny"`
*   `MeenyMinyMoe/FooTest.DoesBlah/1` for `"miny"`
*   `MeenyMinyMoe/FooTest.DoesBlah/2` for `"moe"`
*   `MeenyMinyMoe/FooTest.HasBlahBlah/0` for `"meeny"`
*   `MeenyMinyMoe/FooTest.HasBlahBlah/1` for `"miny"`
*   `MeenyMinyMoe/FooTest.HasBlahBlah/2` for `"moe"`

You can use these names in [`--gtest_filter`](#running-a-subset-of-the-tests).

The following statement will instantiate all tests from `FooTest` again, each
with parameter values `"cat"` and `"dog"` using the
[`ValuesIn`](reference/testing.md#param-generators) parameter generator:

```c++
const char* pets[] = {"cat", "dog"};
INSTANTIATE_TEST_SUITE_P(Pets, FooTest, testing::ValuesIn(pets));
```

The tests from the instantiation above will have these names:

*   `Pets/FooTest.DoesBlah/0` for `"cat"`
*   `Pets/FooTest.DoesBlah/1` for `"dog"`
*   `Pets/FooTest.HasBlahBlah/0` for `"cat"`
*   `Pets/FooTest.HasBlahBlah/1` for `"dog"`

Please note that `INSTANTIATE_TEST_SUITE_P` will instantiate *all* tests in the
given test suite, whether their definitions come before or *after* the
`INSTANTIATE_TEST_SUITE_P` statement.

Additionally, by default, every `TEST_P` without a corresponding
`INSTANTIATE_TEST_SUITE_P` causes a failing test in test suite
`GoogleTestVerification`. If you have a test suite where that omission is not an
error, for example it is in a library that may be linked in for other reasons or
where the list of test cases is dynamic and may be empty, then this check can be
suppressed by tagging the test suite:

```c++
GTEST_ALLOW_UNINSTANTIATED_PARAMETERIZED_TEST(FooTest);
```

You can see [sample7_unittest.cc] and [sample8_unittest.cc] for more examples.

[sample7_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample7_unittest.cc "Parameterized Test example"
[sample8_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample8_unittest.cc "Parameterized Test example with multiple parameters"

### Creating Value-Parameterized Abstract Tests

In the above, we define and instantiate `FooTest` in the *same* source file.
Sometimes you may want to define value-parameterized tests in a library and let
other people instantiate them later. This pattern is known as *abstract tests*.
As an example of its application, when you are designing an interface you can
write a standard suite of abstract tests (perhaps using a factory function as
the test parameter) that all implementations of the interface are expected to
pass. When someone implements the interface, they can instantiate your suite to
get all the interface-conformance tests for free.

To define abstract tests, you should organize your code like this:

1.  Put the definition of the parameterized test fixture class (e.g. `FooTest`)
    in a header file, say `foo_param_test.h`. Think of this as *declaring* your
    abstract tests.
2.  Put the `TEST_P` definitions in `foo_param_test.cc`, which includes
    `foo_param_test.h`. Think of this as *implementing* your abstract tests.

Once they are defined, you can instantiate them by including `foo_param_test.h`,
invoking `INSTANTIATE_TEST_SUITE_P()`, and depending on the library target that
contains `foo_param_test.cc`. You can instantiate the same abstract test suite
multiple times, possibly in different source files.

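Putting the steps together, an implementer's instantiation file might look like
this sketch, where `foo_param_test.h`, the `Foo` interface, and the
`CreateMyFoo` factory are hypothetical names:

```cpp
// my_foo_test.cc
#include "foo_param_test.h"  // declares FooTest, a TestWithParam over factories

// Hypothetical factory for the implementation under test.
Foo* CreateMyFoo() { return new MyFoo; }

// Instantiates every TEST_P defined in foo_param_test.cc for MyFoo.
INSTANTIATE_TEST_SUITE_P(MyFoo, FooTest, testing::Values(&CreateMyFoo));
```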
### Specifying Names for Value-Parameterized Test Parameters

The optional last argument to `INSTANTIATE_TEST_SUITE_P()` allows the user to
specify a function or functor that generates custom test name suffixes based on
the test parameters. The function should accept one argument of type
`testing::TestParamInfo<class ParamType>`, and return `std::string`.

`testing::PrintToStringParamName` is a builtin test suffix generator that
returns the value of `testing::PrintToString(GetParam())`. It does not work for
`std::string` or C strings.

{: .callout .note}
NOTE: test names must be non-empty, unique, and may only contain ASCII
alphanumeric characters. In particular, they
[should not contain underscores](faq.md#why-should-test-suite-names-and-test-names-not-contain-underscore).

```c++
class MyTestSuite : public testing::TestWithParam<int> {};

TEST_P(MyTestSuite, MyTest) {
  std::cout << "Example Test Param: " << GetParam() << std::endl;
}

INSTANTIATE_TEST_SUITE_P(MyGroup, MyTestSuite, testing::Range(0, 10),
                         testing::PrintToStringParamName());
```

Providing a custom functor allows for more control over test parameter name
generation, especially for types where the automatic conversion does not
generate helpful parameter names (e.g. strings as demonstrated above). The
following example illustrates this for multiple parameters, an enumeration type
and a string, and also demonstrates how to combine generators. It uses a lambda
for conciseness:

```c++
enum class MyType { MY_FOO = 0, MY_BAR = 1 };

class MyTestSuite
    : public testing::TestWithParam<std::tuple<MyType, std::string>> {};

INSTANTIATE_TEST_SUITE_P(
    MyGroup, MyTestSuite,
    testing::Combine(
        testing::Values(MyType::MY_FOO, MyType::MY_BAR),
        testing::Values("A", "B")),
    [](const testing::TestParamInfo<MyTestSuite::ParamType>& info) {
      std::string name = absl::StrCat(
          std::get<0>(info.param) == MyType::MY_FOO ? "Foo" : "Bar",
          std::get<1>(info.param));
      absl::c_replace_if(name, [](char c) { return !std::isalnum(c); }, '_');
      return name;
    });
```

## Typed Tests

Suppose you have multiple implementations of the same interface and want to make
sure that all of them satisfy some common requirements. Or, you may have defined
several types that are supposed to conform to the same "concept" and you want to
verify it. In both cases, you want the same test logic repeated for different
types.

While you can write one `TEST` or `TEST_F` for each type you want to test (and
you may even factor the test logic into a function template that you invoke from
the `TEST`), it's tedious and doesn't scale: if you want `m` tests over `n`
types, you'll end up writing `m*n` `TEST`s.

*Typed tests* allow you to repeat the same test logic over a list of types. You
only need to write the test logic once, although you must know the type list
when writing typed tests. Here's how you do it:

First, define a fixture class template. It should be parameterized by a type.
Remember to derive it from `::testing::Test`:

```c++
template <typename T>
class FooTest : public testing::Test {
 public:
  ...
  using List = std::list<T>;
  static T shared_;
  T value_;
};
```

Next, associate a list of types with the test suite, which will be repeated for
each type in the list:

```c++
using MyTypes = ::testing::Types<char, int, unsigned int>;
TYPED_TEST_SUITE(FooTest, MyTypes);
```

The type alias (`using` or `typedef`) is necessary for the `TYPED_TEST_SUITE`
macro to parse correctly. Otherwise the compiler will think that each comma in
the type list introduces a new macro argument.

Then, use `TYPED_TEST()` instead of `TEST_F()` to define a typed test for this
test suite. You can repeat this as many times as you want:

```c++
TYPED_TEST(FooTest, DoesBlah) {
  // Inside a test, refer to the special name TypeParam to get the type
  // parameter.  Since we are inside a derived class template, C++ requires
  // us to visit the members of FooTest via 'this'.
  TypeParam n = this->value_;

  // To visit static members of the fixture, add the 'TestFixture::'
  // prefix.
  n += TestFixture::shared_;

  // To refer to typedefs in the fixture, add the 'typename TestFixture::'
  // prefix.  The 'typename' is required to satisfy the compiler.
  typename TestFixture::List values;

  values.push_back(n);
  ...
}

TYPED_TEST(FooTest, HasPropertyA) { ... }
```

You can see [sample6_unittest.cc] for a complete example.

[sample6_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample6_unittest.cc "Typed Test example"

## Type-Parameterized Tests

*Type-parameterized tests* are like typed tests, except that they don't require
you to know the list of types ahead of time. Instead, you can define the test
logic first and instantiate it with different type lists later. You can even
instantiate it more than once in the same program.

If you are designing an interface or concept, you can define a suite of
type-parameterized tests to verify properties that any valid implementation of
the interface/concept should have. Then, the author of each implementation can
just instantiate the test suite with their type to verify that it conforms to
the requirements, without having to write similar tests repeatedly. Here's an
example:

First, define a fixture class template, as we did with typed tests:

```c++
template <typename T>
class FooTest : public testing::Test {
  ...
};
```

Next, declare that you will define a type-parameterized test suite:

```c++
TYPED_TEST_SUITE_P(FooTest);
```

Then, use `TYPED_TEST_P()` to define a type-parameterized test. You can repeat
this as many times as you want:

```c++
TYPED_TEST_P(FooTest, DoesBlah) {
  // Inside a test, refer to TypeParam to get the type parameter.
  TypeParam n = 0;
  ...
}

TYPED_TEST_P(FooTest, HasPropertyA) { ... }
```

Now the tricky part: you need to register all test patterns using the
`REGISTER_TYPED_TEST_SUITE_P` macro before you can instantiate them. The first
argument of the macro is the test suite name; the rest are the names of the
tests in this test suite:

```c++
REGISTER_TYPED_TEST_SUITE_P(FooTest,
                            DoesBlah, HasPropertyA);
```

Finally, you are free to instantiate the pattern with the types you want. If you
put the above code in a header file, you can `#include` it in multiple C++
source files and instantiate it multiple times.

```c++
using MyTypes = ::testing::Types<char, int, unsigned int>;
INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, MyTypes);
```

To distinguish different instances of the pattern, the first argument to the
`INSTANTIATE_TYPED_TEST_SUITE_P` macro is a prefix that will be added to the
actual test suite name. Remember to pick unique prefixes for different
instances.

In the special case where the type list contains only one type, you can write
that type directly without `::testing::Types<...>`, like this:

```c++
INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, int);
```

You can see [sample6_unittest.cc] for a complete example.

## Testing Private Code

If you change your software's internal implementation, your tests should not
break as long as the change is not observable by users. Therefore, **per the
black-box testing principle, most of the time you should test your code through
its public interfaces.**

**If you still find yourself needing to test internal implementation code,
consider if there's a better design.** The desire to test internal
implementation is often a sign that the class is doing too much. Consider
extracting an implementation class, and testing it. Then use that implementation
class in the original class.

If you absolutely have to test non-public interface code though, you can. There
are two cases to consider:

*   Static functions (*not* the same as static member functions!) or unnamed
    namespaces, and
*   Private or protected class members

To test them, we use the following special techniques:

*   Both static functions and definitions/declarations in an unnamed namespace
    are only visible within the same translation unit. To test them, you can
    `#include` the entire `.cc` file being tested in your `*_test.cc` file.
    (#including `.cc` files is not a good way to reuse code - you should not do
    this in production code!)

    However, a better approach is to move the private code into the
    `foo::internal` namespace, where `foo` is the namespace your project
    normally uses, and put the private declarations in a `*-internal.h` file.
    Your production `.cc` files and your tests are allowed to include this
    internal header, but your clients are not. This way, you can fully test your
    internal implementation without leaking it to your clients.

*   Private class members are only accessible from within the class or by
    friends. To access a class' private members, you can declare your test
    fixture as a friend to the class and define accessors in your fixture. Tests
    using the fixture can then access the private members of your production
    class via the accessors in the fixture. Note that even though your fixture
    is a friend to your production class, your tests are not automatically
    friends to it, as they are technically defined in sub-classes of the
    fixture.

    Another way to test private members is to refactor them into an
    implementation class, which is then declared in a `*-internal.h` file. Your
    clients aren't allowed to include this header but your tests can. This is
    called the
    [Pimpl](https://www.gamedev.net/articles/programming/general-and-gameplay-programming/the-c-pimpl-r1794/)
    (Private Implementation) idiom.

    Or, you can declare an individual test as a friend of your class by adding
    this line in the class body:

    ```c++
        FRIEND_TEST(TestSuiteName, TestName);
    ```

    For example,

    ```c++
    // foo.h
    class Foo {
      ...
     private:
      FRIEND_TEST(FooTest, BarReturnsZeroOnNull);

      int Bar(void* x);
    };

    // foo_test.cc
    ...
    TEST(FooTest, BarReturnsZeroOnNull) {
      Foo foo;
      EXPECT_EQ(foo.Bar(NULL), 0);  // Uses Foo's private member Bar().
    }
    ```

    Pay special attention when your class is defined in a namespace. If you want
    your test fixtures and tests to be friends of your class, then they must be
    defined in the exact same namespace (no anonymous or inline namespaces).

    For example, if the code to be tested looks like:

    ```c++
    namespace my_namespace {

    class Foo {
      friend class FooTest;
      FRIEND_TEST(FooTest, Bar);
      FRIEND_TEST(FooTest, Baz);
      ... definition of the class Foo ...
    };

    }  // namespace my_namespace
    ```

    Your test code should be something like:

    ```c++
    namespace my_namespace {

    class FooTest : public testing::Test {
     protected:
      ...
    };

    TEST_F(FooTest, Bar) { ... }
    TEST_F(FooTest, Baz) { ... }

    }  // namespace my_namespace
    ```

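As a sketch of the `*-internal.h` technique described above, a project might
expose an implementation detail to its tests like this; the `foo` namespace, the
`gadget-internal.h` file name, and `ParseGadgetName()` are hypothetical:

```cpp
// gadget-internal.h -- included by production .cc files and by tests,
// but never shipped to clients.
#include <string>

namespace foo {
namespace internal {

// An implementation detail we still want to unit-test directly.
std::string ParseGadgetName(const std::string& config_line);

}  // namespace internal
}  // namespace foo
```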
## "Catching" Failures

If you are building a testing utility on top of googletest, you'll want to test
your utility. What framework would you use to test it? googletest, of course.

The challenge is to verify that your testing utility reports failures correctly.
In frameworks that report a failure by throwing an exception, you could catch
the exception and assert on it. But googletest doesn't use exceptions, so how do
we test that a piece of code generates an expected failure?

`"gtest/gtest-spi.h"` contains some constructs to do this.
After #including this header, you can use

```c++
  EXPECT_FATAL_FAILURE(statement, substring);
```

to assert that `statement` generates a fatal (e.g. `ASSERT_*`) failure in the
current thread whose message contains the given `substring`, or use

```c++
  EXPECT_NONFATAL_FAILURE(statement, substring);
```

if you are expecting a non-fatal (e.g. `EXPECT_*`) failure.

Only failures in the current thread are checked to determine the result of this
type of expectation. If `statement` creates new threads, failures in these
threads are ignored. If you want to catch failures in other threads as well, use
one of the following macros instead:

```c++
  EXPECT_FATAL_FAILURE_ON_ALL_THREADS(statement, substring);
  EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(statement, substring);
```

{: .callout .note}
NOTE: Assertions from multiple threads are currently not supported on Windows.

For technical reasons, there are some caveats:

1.  You cannot stream a failure message to either macro.

2.  `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot reference
    local non-static variables or non-static members of `this` object.

3.  `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot return a
    value.

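For example, a self-test for a hypothetical `EXPECT_POSITIVE()` utility macro
could look like the following sketch (the expected-message substring is an
assumption about what the wrapped `EXPECT_GT` failure message contains):

```cpp
#include "gtest/gtest.h"
#include "gtest/gtest-spi.h"

// Hypothetical testing utility built on top of googletest.
#define EXPECT_POSITIVE(x) EXPECT_GT(x, 0)

TEST(UtilitySelfTest, FlagsNonPositiveValues) {
  // Passes only if the wrapped statement produces a non-fatal failure
  // whose message contains the given substring.
  EXPECT_NONFATAL_FAILURE(EXPECT_POSITIVE(-5), "-5");
}
```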
## Registering Tests Programmatically

The `TEST` macros handle the vast majority of all use cases, but there are a few
where runtime registration logic is required. For those cases, the framework
provides the `::testing::RegisterTest` function, which allows callers to
register arbitrary tests dynamically.

This is an advanced API only to be used when the `TEST` macros are insufficient.
The macros should be preferred when possible, as they avoid most of the
complexity of calling this function.

It provides the following signature:

```c++
template <typename Factory>
TestInfo* RegisterTest(const char* test_suite_name, const char* test_name,
                       const char* type_param, const char* value_param,
                       const char* file, int line, Factory factory);
```

The `factory` argument is a factory callable (move-constructible) object or
function pointer that creates a new instance of the Test object. It hands
ownership of the created object to the caller. The signature of the callable is
`Fixture*()`, where `Fixture` is the test fixture class for the test. All tests
registered with the same `test_suite_name` must return the same fixture type.
This is checked at runtime.

The framework will infer the fixture class from the factory and will call the
`SetUpTestSuite` and `TearDownTestSuite` functions for it.

`RegisterTest` must be called before `RUN_ALL_TESTS()` is invoked, otherwise the
behavior is undefined.

Use case example:

```c++
class MyFixture : public testing::Test {
 public:
  // All of these are optional, just like in regular macro usage.
  static void SetUpTestSuite() { ... }
  static void TearDownTestSuite() { ... }
  void SetUp() override { ... }
  void TearDown() override { ... }
};

class MyTest : public MyFixture {
 public:
  explicit MyTest(int data) : data_(data) {}
  void TestBody() override { ... }

 private:
  int data_;
};

void RegisterMyTests(const std::vector<int>& values) {
  for (int v : values) {
    testing::RegisterTest(
        "MyFixture", ("Test" + std::to_string(v)).c_str(), nullptr,
        std::to_string(v).c_str(),
        __FILE__, __LINE__,
        // Important to use the fixture type as the return type here.
        [=]() -> MyFixture* { return new MyTest(v); });
  }
}
...
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  std::vector<int> values_to_test = LoadValuesFromConfig();
  RegisterMyTests(values_to_test);
  ...
  return RUN_ALL_TESTS();
}
```

## Getting the Current Test's Name

Sometimes a function may need to know the name of the currently running test.
For example, you may be using the `SetUp()` method of your test fixture to set
the golden file name based on which test is running. The
[`TestInfo`](reference/testing.md#TestInfo) class has this information.

To obtain a `TestInfo` object for the currently running test, call
`current_test_info()` on the [`UnitTest`](reference/testing.md#UnitTest)
singleton object:

```c++
  // Gets information about the currently running test.
  // Do NOT delete the returned object - it's managed by the UnitTest class.
  const testing::TestInfo* const test_info =
      testing::UnitTest::GetInstance()->current_test_info();

  printf("We are in test %s of test suite %s.\n",
         test_info->name(),
         test_info->test_suite_name());
```

`current_test_info()` returns a null pointer if no test is running. In
particular, you cannot find the test suite name in `SetUpTestSuite()`,
`TearDownTestSuite()` (where you know the test suite name implicitly), or
functions called from them.

## Extending googletest by Handling Test Events

googletest provides an **event listener API** to let you receive notifications
about the progress of a test program and test failures. The events you can
listen to include the start and end of the test program, a test suite, or a test
method, among others. You may use this API to augment or replace the standard
console output, replace the XML output, or provide a completely different form
of output, such as a GUI or a database. You can also use test events as
checkpoints to implement a resource leak checker, for example.

### Defining Event Listeners

To define an event listener, you subclass either
[`testing::TestEventListener`](reference/testing.md#TestEventListener) or
[`testing::EmptyTestEventListener`](reference/testing.md#EmptyTestEventListener).
The former is an (abstract) interface, where *each pure virtual method can be
overridden to handle a test event* (for example, when a test starts, the
`OnTestStart()` method will be called). The latter provides an empty
implementation of all methods in the interface, such that a subclass only needs
to override the methods it cares about.

When an event is fired, its context is passed to the handler function as an
argument. The following argument types are used:

*   `UnitTest` reflects the state of the entire test program,
*   `TestSuite` has information about a test suite, which can contain one or
    more tests,
*   `TestInfo` contains the state of a test, and
*   `TestPartResult` represents the result of a test assertion.

An event handler function can examine the argument it receives to find out
interesting information about the event and the test program's state.

Here's an example:

```c++
  class MinimalistPrinter : public testing::EmptyTestEventListener {
    // Called before a test starts.
    void OnTestStart(const testing::TestInfo& test_info) override {
      printf("*** Test %s.%s starting.\n",
             test_info.test_suite_name(), test_info.name());
    }

    // Called after a failed assertion or a SUCCESS().
    void OnTestPartResult(const testing::TestPartResult& test_part_result) override {
      printf("%s in %s:%d\n%s\n",
             test_part_result.failed() ? "*** Failure" : "Success",
             test_part_result.file_name(),
             test_part_result.line_number(),
             test_part_result.summary());
    }

    // Called after a test ends.
    void OnTestEnd(const testing::TestInfo& test_info) override {
      printf("*** Test %s.%s ending.\n",
             test_info.test_suite_name(), test_info.name());
    }
  };
```

### Using Event Listeners

To use the event listener you have defined, add an instance of it to the
googletest event listener list (represented by class
[`TestEventListeners`](reference/testing.md#TestEventListeners) - note the "s"
at the end of the name) in your `main()` function, before calling
`RUN_ALL_TESTS()`:

```c++
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  // Gets hold of the event listener list.
  testing::TestEventListeners& listeners =
      testing::UnitTest::GetInstance()->listeners();
  // Adds a listener to the end.  googletest takes ownership of it.
  listeners.Append(new MinimalistPrinter);
  return RUN_ALL_TESTS();
}
```

There's only one problem: the default test result printer is still in effect, so
its output will mingle with the output from your minimalist printer. To suppress
the default printer, just release it from the event listener list and delete it.
You can do so by adding one line:

```c++
  ...
  delete listeners.Release(listeners.default_result_printer());
  listeners.Append(new MinimalistPrinter);
  return RUN_ALL_TESTS();
```

Now, sit back and enjoy a completely different output from your tests. For more
details, see [sample9_unittest.cc].

[sample9_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample9_unittest.cc "Event listener example"

You may append more than one listener to the list. When an `On*Start()` or
`OnTestPartResult()` event is fired, the listeners will receive it in the order
they appear in the list (since new listeners are added to the end of the list,
the default text printer and the default XML generator will receive the event
first). An `On*End()` event will be received by the listeners in the *reverse*
order. This allows output by listeners added later to be framed by output from
listeners added earlier.

### Generating Failures in Listeners

You may use failure-raising macros (`EXPECT_*()`, `ASSERT_*()`, `FAIL()`, etc.)
when processing an event. There are some restrictions:

1.  You cannot generate any failure in `OnTestPartResult()` (otherwise it will
    cause `OnTestPartResult()` to be called recursively).
2.  A listener that handles `OnTestPartResult()` is not allowed to generate any
    failure.

When you add listeners to the listener list, you should put listeners that
handle `OnTestPartResult()` *before* listeners that can generate failures. This
ensures that failures generated by the latter are attributed to the right test
by the former.

See [sample10_unittest.cc] for an example of a failure-raising listener.

[sample10_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample10_unittest.cc "Failure-raising listener example"

## Running Test Programs: Advanced Options

googletest test programs are ordinary executables. Once built, you can run them
directly and affect their behavior via the following environment variables
and/or command line flags. For the flags to work, your programs must call
`::testing::InitGoogleTest()` before calling `RUN_ALL_TESTS()`.

To see a list of supported flags and their usage, please run your test program
with the `--help` flag. You can also use `-h`, `-?`, or `/?` for short.

If an option is specified both by an environment variable and by a flag, the
latter takes precedence.

### Selecting Tests

#### Listing Test Names

Sometimes it is necessary to list the available tests in a program before
running them so that a filter may be applied if needed. Including the flag
`--gtest_list_tests` overrides all other flags and lists tests in the following
format:

```none
TestSuite1.
  TestName1
  TestName2
TestSuite2.
  TestName
```

None of the tests listed are actually run if the flag is provided. There is no
corresponding environment variable for this flag.

#### Running a Subset of the Tests

By default, a googletest program runs all tests the user has defined. Sometimes,
you want to run only a subset of the tests (e.g. for debugging or quickly
verifying a change). If you set the `GTEST_FILTER` environment variable or the
`--gtest_filter` flag to a filter string, googletest will only run the tests
whose full names (in the form of `TestSuiteName.TestName`) match the filter.

The format of a filter is a '`:`'-separated list of wildcard patterns (called
the *positive patterns*) optionally followed by a '`-`' and another
'`:`'-separated pattern list (called the *negative patterns*). A test matches
the filter if and only if it matches any of the positive patterns but does not
match any of the negative patterns.

A pattern may contain `'*'` (matches any string) or `'?'` (matches any single
character). For convenience, the filter `'*-NegativePatterns'` can also be
written as `'-NegativePatterns'`.

For example:

*   `./foo_test` Has no flag, and thus runs all its tests.
*   `./foo_test --gtest_filter=*` Also runs everything, due to the single
    match-everything `*` value.
*   `./foo_test --gtest_filter=FooTest.*` Runs everything in test suite
    `FooTest`.
*   `./foo_test --gtest_filter=*Null*:*Constructor*` Runs any test whose full
    name contains either `"Null"` or `"Constructor"`.
*   `./foo_test --gtest_filter=-*DeathTest.*` Runs all non-death tests.
*   `./foo_test --gtest_filter=FooTest.*-FooTest.Bar` Runs everything in test
    suite `FooTest` except `FooTest.Bar`.
*   `./foo_test --gtest_filter=FooTest.*:BarTest.*-FooTest.Bar:BarTest.Foo` Runs
    everything in test suite `FooTest` except `FooTest.Bar` and everything in
    test suite `BarTest` except `BarTest.Foo`.

#### Stop test execution upon first failure

By default, a googletest program runs all tests the user has defined. In some
cases (e.g. iterative test development & execution) it may be desirable to stop
test execution upon the first failure (trading improved latency for
completeness). If the `GTEST_FAIL_FAST` environment variable or the
`--gtest_fail_fast` flag is set, the test runner will stop execution as soon as
the first test failure is found.

#### Temporarily Disabling Tests

If you have a broken test that you cannot fix right away, you can add the
`DISABLED_` prefix to its name. This will exclude it from execution. This is
better than commenting out the code or using `#if 0`, as disabled tests are
still compiled (and thus won't rot).

If you need to disable all tests in a test suite, you can either add `DISABLED_`
to the front of the name of each test, or alternatively add it to the front of
the test suite name.

For example, the following tests won't be run by googletest, even though they
will still be compiled:

```c++
// Tests that Foo does Abc.
TEST(FooTest, DISABLED_DoesAbc) { ... }

class DISABLED_BarTest : public testing::Test { ... };

// Tests that Bar does Xyz.
TEST_F(DISABLED_BarTest, DoesXyz) { ... }
```

{: .callout .note}
NOTE: This feature should only be used for temporary pain-relief. You still have
to fix the disabled tests at a later date. As a reminder, googletest will print
a banner warning you if a test program contains any disabled tests.

{: .callout .tip}
TIP: You can easily count the number of disabled tests you have using `grep`.
This number can be used as a metric for improving your test quality.

#### Temporarily Enabling Disabled Tests

To include disabled tests in test execution, just invoke the test program with
the `--gtest_also_run_disabled_tests` flag or set the
`GTEST_ALSO_RUN_DISABLED_TESTS` environment variable to a value other than `0`.
You can combine this with the `--gtest_filter` flag to further select which
disabled tests to run.

### Repeating the Tests

Once in a while you'll run into a test whose result is hit-or-miss. Perhaps it
will fail only 1% of the time, making it rather hard to reproduce the bug under
a debugger. This can be a major source of frustration.

The `--gtest_repeat` flag allows you to repeat all (or selected) test methods in
a program many times. Hopefully, a flaky test will eventually fail and give you
a chance to debug. Here's how to use it:

```none
$ foo_test --gtest_repeat=1000
Repeat foo_test 1000 times and don't stop at failures.

$ foo_test --gtest_repeat=-1
A negative count means repeating forever.

$ foo_test --gtest_repeat=1000 --gtest_break_on_failure
Repeat foo_test 1000 times, stopping at the first failure.  This
is especially useful when running under a debugger: when the test
fails, it will drop into the debugger and you can then inspect
variables and stacks.

$ foo_test --gtest_repeat=1000 --gtest_filter=FooBar.*
Repeat the tests whose name matches the filter 1000 times.
```

If your test program contains
[global set-up/tear-down](#global-set-up-and-tear-down) code, it will be
repeated in each iteration as well, as the flakiness may be in it. You can also
specify the repeat count by setting the `GTEST_REPEAT` environment variable.

### Shuffling the Tests

You can specify the `--gtest_shuffle` flag (or set the `GTEST_SHUFFLE`
environment variable to `1`) to run the tests in a program in a random order.
This helps to reveal bad dependencies between tests.

By default, googletest uses a random seed calculated from the current time.
Therefore you'll get a different order every time. The console output includes
the random seed value, such that you can reproduce an order-related test failure
later. To specify the random seed explicitly, use the `--gtest_random_seed=SEED`
flag (or set the `GTEST_RANDOM_SEED` environment variable), where `SEED` is an
integer in the range [0, 99999]. The seed value 0 is special: it tells
googletest to do the default behavior of calculating the seed from the current
time.

If you combine this with `--gtest_repeat=N`, googletest will pick a different
random seed and re-shuffle the tests in each iteration.

### Distributing Test Functions to Multiple Machines

If you have more than one machine you can use to run a test program, you might
want to run the test functions in parallel and get the result faster. We call
this technique *sharding*, where each machine is called a *shard*.

GoogleTest is compatible with test sharding. To take advantage of this feature,
your test runner (not part of GoogleTest) needs to do the following:

1.  Allocate a number of machines (shards) to run the tests.
1.  On each shard, set the `GTEST_TOTAL_SHARDS` environment variable to the total
    number of shards. It must be the same for all shards.
1.  On each shard, set the `GTEST_SHARD_INDEX` environment variable to the index
    of the shard. Different shards must be assigned different indices, which
    must be in the range `[0, GTEST_TOTAL_SHARDS - 1]`.
1.  Run the same test program on all shards. When GoogleTest sees the above two
    environment variables, it will select a subset of the test functions to run.
    Across all shards, each test function in the program will be run exactly
    once.
1.  Wait for all shards to finish, then collect and report the results.

Your project may have tests that were written without GoogleTest and thus don't
understand this protocol. In order for your test runner to figure out which test
supports sharding, it can set the environment variable `GTEST_SHARD_STATUS_FILE`
to a non-existent file path. If a test program supports sharding, it will create
this file to acknowledge that fact; otherwise it will not create it. The actual
contents of the file are not important at this time, although we may put some
useful information in it in the future.

Here's an example to make it clear. Suppose you have a test program `foo_test`
that contains the following 5 test functions:

```
TEST(A, V)
TEST(A, W)
TEST(B, X)
TEST(B, Y)
TEST(B, Z)
```

Suppose you have 3 machines at your disposal. To run the test functions in
parallel, you would set `GTEST_TOTAL_SHARDS` to 3 on all machines, and set
`GTEST_SHARD_INDEX` to 0, 1, and 2 on the machines respectively. Then you would
run the same `foo_test` on each machine.

GoogleTest reserves the right to change how the work is distributed across the
shards, but here's one possible scenario:

*   Machine #0 runs `A.V` and `B.X`.
*   Machine #1 runs `A.W` and `B.Y`.
*   Machine #2 runs `B.Z`.

### Controlling Test Output

#### Colored Terminal Output

googletest can use colors in its terminal output to make it easier to spot the
important information:

<pre>...
<font color="green">[----------]</font> 1 test from FooTest
<font color="green">[ RUN      ]</font> FooTest.DoesAbc
<font color="green">[       OK ]</font> FooTest.DoesAbc
<font color="green">[----------]</font> 2 tests from BarTest
<font color="green">[ RUN      ]</font> BarTest.HasXyzProperty
<font color="green">[       OK ]</font> BarTest.HasXyzProperty
<font color="green">[ RUN      ]</font> BarTest.ReturnsTrueOnSuccess
... some error messages ...
<font color="red">[   FAILED ]</font> BarTest.ReturnsTrueOnSuccess
...
<font color="green">[==========]</font> 30 tests from 14 test suites ran.
<font color="green">[   PASSED ]</font> 28 tests.
<font color="red">[   FAILED ]</font> 2 tests, listed below:
<font color="red">[   FAILED ]</font> BarTest.ReturnsTrueOnSuccess
<font color="red">[   FAILED ]</font> AnotherTest.DoesXyz

 2 FAILED TESTS
</pre>

You can set the `GTEST_COLOR` environment variable or the `--gtest_color`
command line flag to `yes`, `no`, or `auto` (the default) to enable colors,
disable colors, or let googletest decide. When the value is `auto`, googletest
will use colors if and only if the output goes to a terminal and (on non-Windows
platforms) the `TERM` environment variable is set to `xterm` or `xterm-color`.

#### Suppressing test passes

By default, googletest prints one line of output for each test, indicating
whether it passed or failed. To show only test failures, run the test program
with `--gtest_brief=1`, or set the `GTEST_BRIEF` environment variable to `1`.

#### Suppressing the Elapsed Time

By default, googletest prints the time it takes to run each test. To disable
that, run the test program with the `--gtest_print_time=0` command line flag, or
set the `GTEST_PRINT_TIME` environment variable to `0`.

#### Suppressing UTF-8 Text Output

In case of assertion failures, googletest prints expected and actual values of
type `string` both as hex-encoded strings as well as in readable UTF-8 text if
they contain valid non-ASCII UTF-8 characters. If you want to suppress the UTF-8
text because, for example, you don't have a UTF-8 compatible output medium, run
the test program with `--gtest_print_utf8=0` or set the `GTEST_PRINT_UTF8`
environment variable to `0`.

#### Generating an XML Report

googletest can emit a detailed XML report to a file in addition to its normal
textual output. The report contains the duration of each test, and thus can help
you identify slow tests.

To generate the XML report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"xml:path_to_output_file"`, which will
create the file at the given location. You can also just use the string `"xml"`,
in which case the output can be found in the `test_detail.xml` file in the
current directory.

If you specify a directory (for example, `"xml:output/directory/"` on Linux or
`"xml:output\directory\"` on Windows), googletest will create the XML file in
that directory, named after the test executable (e.g. `foo_test.xml` for test
program `foo_test` or `foo_test.exe`). If the file already exists (perhaps left
over from a previous run), googletest will pick a different name (e.g.
`foo_test_1.xml`) to avoid overwriting it.

The report is based on the `junitreport` Ant task. Since that format was
originally intended for Java, a little interpretation is required to make it
apply to googletest tests, as shown here:

```xml
<testsuites name="AllTests" ...>
  <testsuite name="test_suite_name" ...>
    <testcase name="test_name" ...>
      <failure message="..."/>
      <failure message="..."/>
      <failure message="..."/>
    </testcase>
  </testsuite>
</testsuites>
```

*   The root `<testsuites>` element corresponds to the entire test program.
*   `<testsuite>` elements correspond to googletest test suites.
*   `<testcase>` elements correspond to googletest test functions.

For instance, the following program

```c++
TEST(MathTest, Addition) { ... }
TEST(MathTest, Subtraction) { ... }
TEST(LogicTest, NonContradiction) { ... }
```

could generate this report:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="3" failures="1" errors="0" time="0.035" timestamp="2011-10-31T18:52:42" name="AllTests">
  <testsuite name="MathTest" tests="2" failures="1" errors="0" time="0.015">
    <testcase name="Addition" file="test.cpp" line="1" status="run" time="0.007" classname="">
      <failure message="Value of: add(1, 1)&#x0A;  Actual: 3&#x0A;Expected: 2" type="">...</failure>
      <failure message="Value of: add(1, -1)&#x0A;  Actual: 1&#x0A;Expected: 0" type="">...</failure>
    </testcase>
    <testcase name="Subtraction" file="test.cpp" line="2" status="run" time="0.005" classname="">
    </testcase>
  </testsuite>
  <testsuite name="LogicTest" tests="1" failures="0" errors="0" time="0.005">
    <testcase name="NonContradiction" file="test.cpp" line="3" status="run" time="0.005" classname="">
    </testcase>
  </testsuite>
</testsuites>
```

Things to note:

*   The `tests` attribute of a `<testsuites>` or `<testsuite>` element tells how
    many test functions the googletest program or test suite contains, while the
    `failures` attribute tells how many of them failed.

*   The `time` attribute expresses the duration of the test, test suite, or
    entire test program in seconds.

*   The `timestamp` attribute records the local date and time of the test
    execution.

*   The `file` and `line` attributes record the source file location where the
    test was defined.

*   Each `<failure>` element corresponds to a single failed googletest
    assertion.

#### Generating a JSON Report

googletest can also emit a JSON report as an alternative format to XML. To
generate the JSON report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"json:path_to_output_file"`, which will
create the file at the given location. You can also just use the string
`"json"`, in which case the output can be found in the `test_detail.json` file
in the current directory.

The report format conforms to the following JSON Schema:

```json
{
  "$schema": "http://json-schema.org/schema#",
  "type": "object",
  "definitions": {
    "TestCase": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "tests": { "type": "integer" },
        "failures": { "type": "integer" },
        "disabled": { "type": "integer" },
        "time": { "type": "string" },
        "testsuite": {
          "type": "array",
          "items": {
            "$ref": "#/definitions/TestInfo"
          }
        }
      }
    },
    "TestInfo": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "file": { "type": "string" },
        "line": { "type": "integer" },
        "status": {
          "type": "string",
          "enum": ["RUN", "NOTRUN"]
        },
        "time": { "type": "string" },
        "classname": { "type": "string" },
        "failures": {
          "type": "array",
          "items": {
            "$ref": "#/definitions/Failure"
          }
        }
      }
    },
    "Failure": {
      "type": "object",
      "properties": {
        "failures": { "type": "string" },
        "type": { "type": "string" }
      }
    }
  },
  "properties": {
    "tests": { "type": "integer" },
    "failures": { "type": "integer" },
    "disabled": { "type": "integer" },
    "errors": { "type": "integer" },
    "timestamp": {
      "type": "string",
      "format": "date-time"
    },
    "time": { "type": "string" },
    "name": { "type": "string" },
    "testsuites": {
      "type": "array",
      "items": {
        "$ref": "#/definitions/TestCase"
      }
    }
  }
}
```

The report's format conforms to the following Proto3 message definitions, using
the
[JSON encoding](https://developers.google.com/protocol-buffers/docs/proto3#json):

```proto
syntax = "proto3";

package googletest;

import "google/protobuf/timestamp.proto";
import "google/protobuf/duration.proto";

message UnitTest {
  int32 tests = 1;
  int32 failures = 2;
  int32 disabled = 3;
  int32 errors = 4;
  google.protobuf.Timestamp timestamp = 5;
  google.protobuf.Duration time = 6;
  string name = 7;
  repeated TestCase testsuites = 8;
}

message TestCase {
  string name = 1;
  int32 tests = 2;
  int32 failures = 3;
  int32 disabled = 4;
  int32 errors = 5;
  google.protobuf.Duration time = 6;
  repeated TestInfo testsuite = 7;
}

message TestInfo {
  string name = 1;
  string file = 6;
  int32 line = 7;
  enum Status {
    RUN = 0;
    NOTRUN = 1;
  }
  Status status = 2;
  google.protobuf.Duration time = 3;
  string classname = 4;
  message Failure {
    string failures = 1;
    string type = 2;
  }
  repeated Failure failures = 5;
}
```

For instance, the following program

```c++
TEST(MathTest, Addition) { ... }
TEST(MathTest, Subtraction) { ... }
TEST(LogicTest, NonContradiction) { ... }
```

could generate this report:

```json
{
  "tests": 3,
  "failures": 1,
  "errors": 0,
  "time": "0.035s",
  "timestamp": "2011-10-31T18:52:42Z",
  "name": "AllTests",
  "testsuites": [
    {
      "name": "MathTest",
      "tests": 2,
      "failures": 1,
      "errors": 0,
      "time": "0.015s",
      "testsuite": [
        {
          "name": "Addition",
          "file": "test.cpp",
          "line": 1,
          "status": "RUN",
          "time": "0.007s",
          "classname": "",
          "failures": [
            {
              "message": "Value of: add(1, 1)\n  Actual: 3\nExpected: 2",
              "type": ""
            },
            {
              "message": "Value of: add(1, -1)\n  Actual: 1\nExpected: 0",
              "type": ""
            }
          ]
        },
        {
          "name": "Subtraction",
          "file": "test.cpp",
          "line": 2,
          "status": "RUN",
          "time": "0.005s",
          "classname": ""
        }
      ]
    },
    {
      "name": "LogicTest",
      "tests": 1,
      "failures": 0,
      "errors": 0,
      "time": "0.005s",
      "testsuite": [
        {
          "name": "NonContradiction",
          "file": "test.cpp",
          "line": 3,
          "status": "RUN",
          "time": "0.005s",
          "classname": ""
        }
      ]
    }
  ]
}
```

{: .callout .important}
IMPORTANT: The exact format of the JSON document is subject to change.

### Controlling How Failures Are Reported

#### Detecting Test Premature Exit

Google Test implements the _premature-exit-file_ protocol for test runners to
catch any kind of unexpected exits of test programs. Upon start, Google Test
creates this file, which will be automatically deleted after all work has been
finished. The test runner can then check whether the file still exists: if it
remains undeleted, the inspected test program has exited prematurely.

This feature is enabled only if the `TEST_PREMATURE_EXIT_FILE` environment
variable has been set.

#### Turning Assertion Failures into Break-Points

When running test programs under a debugger, it's very convenient if the
debugger can catch an assertion failure and automatically drop into interactive
mode. googletest's *break-on-failure* mode supports this behavior.

To enable it, set the `GTEST_BREAK_ON_FAILURE` environment variable to a value
other than `0`. Alternatively, you can use the `--gtest_break_on_failure`
command line flag.

#### Disabling Catching Test-Thrown Exceptions

googletest can be used either with or without exceptions enabled. If a test
throws a C++ exception or (on Windows) a structured exception (SEH), by default
googletest catches it, reports it as a test failure, and continues with the next
test method. This maximizes the coverage of a test run. Also, on Windows an
uncaught exception will cause a pop-up window, so catching the exceptions allows
you to run the tests automatically.

When debugging the test failures, however, you may instead want the exceptions
to be handled by the debugger, such that you can examine the call stack when an
exception is thrown. To achieve that, set the `GTEST_CATCH_EXCEPTIONS`
environment variable to `0`, or use the `--gtest_catch_exceptions=0` flag when
running the tests.

### Sanitizer Integration

The
[Undefined Behavior Sanitizer](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html),
[Address Sanitizer](https://github.com/google/sanitizers/wiki/AddressSanitizer),
and
[Thread Sanitizer](https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual)
all provide weak functions that you can override to trigger explicit failures
when they detect sanitizer errors, such as creating a reference from `nullptr`.
To override these functions, place definitions for them in a source file that
you compile as part of your main binary:

```c++
extern "C" {
void __ubsan_on_report() {
  FAIL() << "Encountered an undefined behavior sanitizer error";
}
void __asan_on_error() {
  FAIL() << "Encountered an address sanitizer error";
}
void __tsan_on_report() {
  FAIL() << "Encountered a thread sanitizer error";
}
}  // extern "C"
```

After compiling your project with one of the sanitizers enabled, if a particular
test triggers a sanitizer error, googletest will report that it failed.
