# Advanced googletest Topics

## Introduction

Now that you have read the [googletest Primer](primer.md) and learned how to
write tests using googletest, it's time to learn some new tricks. This document
will show you more assertions as well as how to construct complex failure
messages, propagate fatal failures, reuse and speed up your test fixtures, and
use various flags with your tests.

## More Assertions

This section covers some less frequently used, but still significant,
assertions.

### Explicit Success and Failure

See [Explicit Success and Failure](reference/assertions.md#success-failure) in
the Assertions Reference.

### Exception Assertions

See [Exception Assertions](reference/assertions.md#exceptions) in the Assertions
Reference.

### Predicate Assertions for Better Error Messages

Even though googletest has a rich set of assertions, they can never be complete,
as it is neither possible nor a good idea to anticipate all the scenarios a user
might run into. Therefore, sometimes a user has to use `EXPECT_TRUE()` to check
a complex expression, for lack of a better macro. This has the problem of not
showing you the values of the parts of the expression, making it hard to
understand what went wrong. As a workaround, some users choose to construct the
failure message themselves and stream it into `EXPECT_TRUE()`. However, this is
awkward, especially when the expression has side effects or is expensive to
evaluate.

googletest gives you three different options to solve this problem:
#### Using an Existing Boolean Function

If you already have a function or functor that returns `bool` (or a type that
can be implicitly converted to `bool`), you can use it in a *predicate
assertion* to get the function arguments printed for free. See
[`EXPECT_PRED*`](reference/assertions.md#EXPECT_PRED) in the Assertions
Reference for details.

#### Using a Function That Returns an AssertionResult

While `EXPECT_PRED*()` and friends are handy for a quick job, the syntax is not
satisfactory: you have to use different macros for different arities, and it
feels more like Lisp than C++. The `::testing::AssertionResult` class solves
this problem.

An `AssertionResult` object represents the result of an assertion (whether it's
a success or a failure, and an associated message). You can create an
`AssertionResult` using one of these factory functions:
```c++
namespace testing {

// Returns an AssertionResult object to indicate that an assertion has
// succeeded.
AssertionResult AssertionSuccess();

// Returns an AssertionResult object to indicate that an assertion has
// failed.
AssertionResult AssertionFailure();

}
```

You can then use the `<<` operator to stream messages to the `AssertionResult`
object.

To provide more readable messages in Boolean assertions (e.g. `EXPECT_TRUE()`),
write a predicate function that returns `AssertionResult` instead of `bool`. For
example, if you define `IsEven()` as:

```c++
testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return testing::AssertionSuccess();
  else
    return testing::AssertionFailure() << n << " is odd";
}
```

instead of:

```c++
bool IsEven(int n) {
  return (n % 2) == 0;
}
```
the failed assertion `EXPECT_TRUE(IsEven(Fib(4)))` will print:

```none
Value of: IsEven(Fib(4))
  Actual: false (3 is odd)
Expected: true
```

instead of a more opaque

```none
Value of: IsEven(Fib(4))
  Actual: false
Expected: true
```

If you want informative messages in `EXPECT_FALSE` and `ASSERT_FALSE` as well
(one third of Boolean assertions in the Google code base are negative ones), and
are fine with making the predicate slower in the success case, you can supply a
success message:

```c++
testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return testing::AssertionSuccess() << n << " is even";
  else
    return testing::AssertionFailure() << n << " is odd";
}
```

Then the statement `EXPECT_FALSE(IsEven(Fib(6)))` will print

```none
Value of: IsEven(Fib(6))
  Actual: true (8 is even)
Expected: false
```

#### Using a Predicate-Formatter

If you find the default message generated by
[`EXPECT_PRED*`](reference/assertions.md#EXPECT_PRED) and
[`EXPECT_TRUE`](reference/assertions.md#EXPECT_TRUE) unsatisfactory, or some
arguments to your predicate do not support streaming to `ostream`, you can
instead use *predicate-formatter assertions* to *fully* customize how the
message is formatted. See
[`EXPECT_PRED_FORMAT*`](reference/assertions.md#EXPECT_PRED_FORMAT) in the
Assertions Reference for details.

### Floating-Point Comparison

See [Floating-Point Comparison](reference/assertions.md#floating-point) in the
Assertions Reference.

#### Floating-Point Predicate-Format Functions

Some floating-point operations are useful, but not that often used. In order to
avoid an explosion of new macros, we provide them as predicate-format functions
that can be used in the predicate assertion macro
[`EXPECT_PRED_FORMAT2`](reference/assertions.md#EXPECT_PRED_FORMAT), for
example:

```c++
using ::testing::FloatLE;
using ::testing::DoubleLE;
...
EXPECT_PRED_FORMAT2(FloatLE, val1, val2);
EXPECT_PRED_FORMAT2(DoubleLE, val1, val2);
```

The above code verifies that `val1` is less than, or approximately equal to,
`val2`.

### Asserting Using gMock Matchers

See [`EXPECT_THAT`](reference/assertions.md#EXPECT_THAT) in the Assertions
Reference.

### More String Assertions

(Please read the [previous](#asserting-using-gmock-matchers) section first if
you haven't.)

You can use the gMock [string matchers](reference/matchers.md#string-matchers)
with [`EXPECT_THAT`](reference/assertions.md#EXPECT_THAT) to perform more string
comparison tricks (substring, prefix, suffix, regular expression, etc.). For
example,

```c++
using ::testing::HasSubstr;
using ::testing::MatchesRegex;
...
  ASSERT_THAT(foo_string, HasSubstr("needle"));
  EXPECT_THAT(bar_string, MatchesRegex("\\w*\\d+"));
```

### Windows HRESULT assertions

See [Windows HRESULT Assertions](reference/assertions.md#HRESULT) in the
Assertions Reference.

### Type Assertions

You can call the function

```c++
::testing::StaticAssertTypeEq<T1, T2>();
```

to assert that types `T1` and `T2` are the same. The function does nothing if
the assertion is satisfied. If the types are different, the function call will
fail to compile, the compiler error message will say that `T1 and T2 are not the
same type`, and most likely (depending on the compiler) show you the actual
values of `T1` and `T2`. This is mainly useful inside template code.

**Caveat**: When used inside a member function of a class template or a function
template, `StaticAssertTypeEq<T1, T2>()` is effective only if the function is
instantiated. For example, given:

```c++
template <typename T> class Foo {
 public:
  void Bar() { testing::StaticAssertTypeEq<int, T>(); }
};
```

the code:

```c++
void Test1() { Foo<bool> foo; }
```

will not generate a compiler error, as `Foo<bool>::Bar()` is never actually
instantiated. Instead, you need:

```c++
void Test2() { Foo<bool> foo; foo.Bar(); }
```

to cause a compiler error.

### Assertion Placement

You can use assertions in any C++ function. In particular, it doesn't have to be
a method of the test fixture class. The one constraint is that assertions that
generate a fatal failure (`FAIL*` and `ASSERT_*`) can only be used in
void-returning functions. This is a consequence of Google's not using
exceptions. If you place one in a non-void function, you'll get a confusing
compile error like `"error: void value not ignored as it ought to be"` or
`"cannot initialize return object of type 'bool' with an rvalue of type 'void'"`
or `"error: no viable conversion from 'void' to 'string'"`.

If you need to use fatal assertions in a function that returns non-void, one
option is to make the function return the value in an out parameter instead. For
example, you can rewrite `T2 Foo(T1 x)` to `void Foo(T1 x, T2* result)`. You
need to make sure that `*result` contains some sensible value even when the
function returns prematurely. As the function now returns `void`, you can use
any assertion inside of it.

If changing the function's type is not an option, you should just use assertions
that generate non-fatal failures, such as `ADD_FAILURE*` and `EXPECT_*`.

{: .callout .note}
NOTE: Constructors and destructors are not considered void-returning functions,
according to the C++ language specification, and so you may not use fatal
assertions in them; you'll get a compilation error if you try. Instead, either
call `abort` and crash the entire test executable, or put the fatal assertion in
a `SetUp`/`TearDown` function; see
[constructor/destructor vs. `SetUp`/`TearDown`](faq.md#CtorVsSetUp).

{: .callout .warning}
WARNING: A fatal assertion in a helper function (private void-returning method)
called from a constructor or destructor does not terminate the current test, as
your intuition might suggest: it merely returns from the constructor or
destructor early, possibly leaving your object in a partially-constructed or
partially-destructed state! You almost certainly want to `abort` or use
`SetUp`/`TearDown` instead.

## Skipping test execution

Related to the assertions `SUCCEED()` and `FAIL()`, you can prevent further test
execution at runtime with the `GTEST_SKIP()` macro. This is useful when you need
to check for preconditions of the system under test during runtime and skip
tests in a meaningful way.

`GTEST_SKIP()` can be used in individual test cases or in the `SetUp()` methods
of classes derived from either `::testing::Environment` or `::testing::Test`.
For example:

```c++
TEST(SkipTest, DoesSkip) {
  GTEST_SKIP() << "Skipping single test";
  EXPECT_EQ(0, 1);  // Won't fail; it won't be executed
}

class SkipFixture : public ::testing::Test {
 protected:
  void SetUp() override {
    GTEST_SKIP() << "Skipping all tests for this fixture";
  }
};

// Tests for SkipFixture won't be executed.
TEST_F(SkipFixture, SkipsOneTest) {
  EXPECT_EQ(5, 7);  // Won't fail
}
```

## Teaching googletest How to Print Your Values

When a test assertion such as `EXPECT_EQ` fails, googletest prints the argument
values to help you debug. It does this using a user-extensible value printer.

This printer knows how to print built-in C++ types, native arrays, STL
containers, and any type that supports the `<<` operator. For other types, it
prints the raw bytes in the value and hopes that you, the user, can figure it
out.

As mentioned earlier, the printer is *extensible*. That means you can teach it
to do a better job at printing your particular type than dumping the bytes. To
do that, define `<<` for your type:

```c++
#include <ostream>

namespace foo {

class Bar {  // We want googletest to be able to print instances of this.
...
  // Create a free inline friend function.
  friend std::ostream& operator<<(std::ostream& os, const Bar& bar) {
    return os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class, it's important that the
// << operator is defined in the SAME namespace that defines Bar.  C++'s look-up
// rules rely on that.
std::ostream& operator<<(std::ostream& os, const Bar& bar) {
  return os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```

Sometimes, this might not be an option: your team may consider it bad style to
have a `<<` operator for `Bar`, or `Bar` may already have a `<<` operator that
doesn't do what you want (and you cannot change it). If so, you can instead
define a `PrintTo()` function like this:

```c++
#include <ostream>

namespace foo {

class Bar {
  ...
  friend void PrintTo(const Bar& bar, std::ostream* os) {
    *os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class, it's important that PrintTo()
// is defined in the SAME namespace that defines Bar.  C++'s look-up rules rely
// on that.
void PrintTo(const Bar& bar, std::ostream* os) {
  *os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```

If you have defined both `<<` and `PrintTo()`, googletest will use the latter.
This allows you to customize how the value appears in googletest's output
without affecting code that relies on the behavior of its `<<` operator.

If you want to print a value `x` using googletest's value printer yourself, just
call `::testing::PrintToString(x)`, which returns an `std::string`:

```c++
std::vector<std::pair<Bar, int>> bar_ints = GetBarIntVector();

EXPECT_TRUE(IsCorrectBarIntVector(bar_ints))
    << "bar_ints = " << testing::PrintToString(bar_ints);
```

## Death Tests

In many applications, there are assertions that can cause application failure if
a condition is not met. These consistency checks, which ensure that the program
is in a known good state, are there to fail at the earliest possible time after
some program state is corrupted. If the assertion checks the wrong condition,
then the program may proceed in an erroneous state, which could lead to memory
corruption, security holes, or worse. Hence it is vitally important to test that
such assertion statements work as expected.

Since these precondition checks cause the processes to die, we call such tests
_death tests_. More generally, any test that checks that a program terminates
(except by throwing an exception) in an expected fashion is also a death test.

Note that if a piece of code throws an exception, we don't consider it "death"
for the purpose of death tests, as the caller of the code could catch the
exception and avoid the crash. If you want to verify exceptions thrown by your
code, see [Exception Assertions](#ExceptionAssertions).

If you want to test `EXPECT_*()/ASSERT_*()` failures in your test code, see
["Catching" Failures](#catching-failures).

### How to Write a Death Test

GoogleTest provides assertion macros to support death tests. See
[Death Assertions](reference/assertions.md#death) in the Assertions Reference
for details.

To write a death test, simply use one of the macros inside your test function.
For example,

```c++
TEST(MyDeathTest, Foo) {
  // This death test uses a compound statement.
  ASSERT_DEATH({
    int n = 5;
    Foo(&n);
  }, "Error on line .* of Foo()");
}

TEST(MyDeathTest, NormalExit) {
  EXPECT_EXIT(NormalExit(), testing::ExitedWithCode(0), "Success");
}

TEST(MyDeathTest, KillProcess) {
  EXPECT_EXIT(KillProcess(), testing::KilledBySignal(SIGKILL),
              "Sending myself unblockable signal");
}
```

verifies that:

*   calling `Foo()` (here with `n` set to 5) causes the process to die with the
    given error message,
*   calling `NormalExit()` causes the process to print `"Success"` to stderr and
    exit with exit code 0, and
*   calling `KillProcess()` kills the process with signal `SIGKILL`.

The test function body may contain other assertions and statements as well, if
necessary.

Note that a death test only cares about three things:

1.  does `statement` abort or exit the process?
2.  (in the case of `ASSERT_EXIT` and `EXPECT_EXIT`) does the exit status
    satisfy `predicate`? Or (in the case of `ASSERT_DEATH` and `EXPECT_DEATH`)
    is the exit status non-zero? And
3.  does the stderr output match `matcher`?

In particular, if `statement` generates an `ASSERT_*` or `EXPECT_*` failure, it
will **not** cause the death test to fail, as googletest assertions don't abort
the process.

### Death Test Naming

{: .callout .important}
IMPORTANT: We strongly recommend that you follow the convention of naming your
**test suite** (not test) `*DeathTest` when it contains a death test, as
demonstrated in the above example. The
[Death Tests And Threads](#death-tests-and-threads) section below explains why.

If a test fixture class is shared by normal tests and death tests, you can use
`using` or `typedef` to introduce an alias for the fixture class and avoid
duplicating its code:

```c++
class FooTest : public testing::Test { ... };

using FooDeathTest = FooTest;

TEST_F(FooTest, DoesThis) {
  // normal test
}

TEST_F(FooDeathTest, DoesThat) {
  // death test
}
```

### Regular Expression Syntax

When built with Bazel and using Abseil, googletest uses the
[RE2](https://github.com/google/re2/wiki/Syntax) syntax. Otherwise, for POSIX
systems (Linux, Cygwin, Mac), googletest uses the
[POSIX extended regular expression](http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap09.html#tag_09_04)
syntax. To learn about POSIX syntax, you may want to read this
[Wikipedia entry](http://en.wikipedia.org/wiki/Regular_expression#POSIX_extended).

On Windows, googletest uses its own simple regular expression implementation. It
lacks many features. For example, we don't support union (`"x|y"`), grouping
(`"(xy)"`), brackets (`"[xy]"`), or repetition count (`"x{5,7}"`), among
others. Below is what we do support (`A` denotes a literal character, period
(`.`), or a single `\\` escape sequence; `x` and `y` denote regular
expressions):

Expression | Meaning
---------- | --------------------------------------------------------------
`c`        | matches any literal character `c`
`\\d`      | matches any decimal digit
`\\D`      | matches any character that's not a decimal digit
`\\f`      | matches `\f`
`\\n`      | matches `\n`
`\\r`      | matches `\r`
`\\s`      | matches any ASCII whitespace, including `\n`
`\\S`      | matches any character that's not a whitespace
`\\t`      | matches `\t`
`\\v`      | matches `\v`
`\\w`      | matches any letter, `_`, or decimal digit
`\\W`      | matches any character that `\\w` doesn't match
`\\c`      | matches any literal character `c`, which must be a punctuation
`.`        | matches any single character except `\n`
`A?`       | matches 0 or 1 occurrences of `A`
`A*`       | matches 0 or many occurrences of `A`
`A+`       | matches 1 or many occurrences of `A`
`^`        | matches the beginning of a string (not that of each line)
`$`        | matches the end of a string (not that of each line)
`xy`       | matches `x` followed by `y`

To help you determine which capability is available on your system, googletest
defines macros to govern which regular expression it is using. The macros are:
`GTEST_USES_SIMPLE_RE=1` or `GTEST_USES_POSIX_RE=1`. If you want your death
tests to work in all cases, you can either `#if` on these macros or use only the
more limited syntax.

### How It Works

See [Death Assertions](reference/assertions.md#death) in the Assertions
Reference.

### Death Tests And Threads

The reason for the two death test styles has to do with thread safety. Due to
well-known problems with forking in the presence of threads, death tests should
be run in a single-threaded context. Sometimes, however, it isn't feasible to
arrange that kind of environment. For example, statically-initialized modules
may start threads before `main()` is ever reached. Once threads have been
created, it may be difficult or impossible to clean them up.

googletest has three features intended to raise awareness of threading issues.

1.  A warning is emitted if multiple threads are running when a death test is
    encountered.
2.  Test suites with a name ending in "DeathTest" are run before all other
    tests.
3.  It uses `clone()` instead of `fork()` to spawn the child process on Linux
    (`clone()` is not available on Cygwin and Mac), as `fork()` is more likely
    to cause the child to hang when the parent process has multiple threads.

It's perfectly fine to create threads inside a death test statement; they are
executed in a separate process and cannot affect the parent.

### Death Test Styles

The "threadsafe" death test style was introduced in order to help mitigate the
risks of testing in a possibly multithreaded environment. It trades increased
test execution time (potentially dramatically so) for improved thread safety.

The automated testing framework does not set the style flag. You can choose a
particular style of death tests by setting the flag programmatically:

```c++
GTEST_FLAG_SET(death_test_style, "threadsafe");
```

You can do this in `main()` to set the style for all death tests in the binary,
or in individual tests. Recall that flags are saved before running each test and
restored afterwards, so you need not do that yourself. For example:

```c++
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  GTEST_FLAG_SET(death_test_style, "fast");
  return RUN_ALL_TESTS();
}

TEST(MyDeathTest, TestOne) {
  GTEST_FLAG_SET(death_test_style, "threadsafe");
  // This test is run in the "threadsafe" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}

TEST(MyDeathTest, TestTwo) {
  // This test is run in the "fast" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}
```

### Caveats

The `statement` argument of `ASSERT_EXIT()` can be any valid C++ statement. If
it leaves the current function via a `return` statement or by throwing an
exception, the death test is considered to have failed. Some googletest macros
may return from the current function (e.g. `ASSERT_TRUE()`), so be sure to avoid
them in `statement`.

Since `statement` runs in the child process, any in-memory side effect (e.g.
modifying a variable, releasing memory, etc.) it causes will *not* be observable
in the parent process. In particular, if you release memory in a death test,
your program will fail the heap check, as the parent process will never see the
memory reclaimed. To solve this problem, you can

1.  try not to free memory in a death test;
2.  free the memory again in the parent process; or
3.  do not use the heap checker in your program.

Due to an implementation detail, you cannot place multiple death test assertions
on the same line; otherwise, compilation will fail with an unobvious error
message.

Despite the improved thread safety afforded by the "threadsafe" style of death
test, thread problems such as deadlock are still possible in the presence of
handlers registered with `pthread_atfork(3)`.

## Using Assertions in Sub-routines

{: .callout .note}
Note: If you want to put a series of test assertions in a subroutine to check
for a complex condition, consider using
[a custom GMock matcher](gmock_cook_book.md#NewMatchers) instead. This lets you
provide a more readable error message in case of failure and avoid all of the
issues described below.

### Adding Traces to Assertions

If a test sub-routine is called from several places, when an assertion inside it
fails, it can be hard to tell which invocation of the sub-routine the failure is
from. You can alleviate this problem using extra logging or custom failure
messages, but that usually clutters up your tests. A better solution is to use
the `SCOPED_TRACE` macro or the `ScopedTrace` utility:

```c++
SCOPED_TRACE(message);
```

```c++
ScopedTrace trace("file_path", line_number, message);
```

where `message` can be anything streamable to `std::ostream`. The
`SCOPED_TRACE` macro causes the current file name, line number, and the given
message to be included in every failure message. `ScopedTrace` accepts an
explicit file name and line number as arguments, which is useful for writing
test helpers. The effect is undone when control leaves the current lexical
scope.

For example,

```c++
10: void Sub1(int n) {
11:   EXPECT_EQ(Bar(n), 1);
12:   EXPECT_EQ(Bar(n + 1), 2);
13: }
14:
15: TEST(FooTest, Bar) {
16:   {
17:     SCOPED_TRACE("A");  // This trace point will be included in
18:                         // every failure in this scope.
19:     Sub1(1);
20:   }
21:   // Now it won't.
22:   Sub1(9);
23: }
```

could result in messages like these:

```none
path/to/foo_test.cc:11: Failure
Value of: Bar(n)
Expected: 1
  Actual: 2
Google Test trace:
path/to/foo_test.cc:17: A

path/to/foo_test.cc:12: Failure
Value of: Bar(n + 1)
Expected: 2
  Actual: 3
```

Without the trace, it would've been difficult to know which invocation of
`Sub1()` the two failures come from respectively. (You could add an extra
message to each assertion in `Sub1()` to indicate the value of `n`, but that's
tedious.)

Some tips on using `SCOPED_TRACE`:

1.  With a suitable message, it's often enough to use `SCOPED_TRACE` at the
    beginning of a sub-routine, instead of at each call site.
2.  When calling sub-routines inside a loop, make the loop iterator part of the
    message in `SCOPED_TRACE` so that you can know which iteration the failure
    is from.
3.  Sometimes the line number of the trace point is enough for identifying the
    particular invocation of a sub-routine. In this case, you don't have to
    choose a unique message for `SCOPED_TRACE`. You can simply use `""`.
4.  You can use `SCOPED_TRACE` in an inner scope when there is one in the outer
    scope. In this case, all active trace points will be included in the failure
    messages, in the reverse order in which they are encountered.
5.  The trace dump is clickable in Emacs: hit `return` on a line number and
    you'll be taken to that line in the source file!

### Propagating Fatal Failures

A common pitfall when using `ASSERT_*` and `FAIL*` is not understanding that
when they fail they only abort the _current function_, not the entire test. For
example, the following test will segfault:

```c++
void Subroutine() {
  // Generates a fatal failure and aborts the current function.
  ASSERT_EQ(1, 2);

  // The following won't be executed.
  ...
}

TEST(FooTest, Bar) {
  Subroutine();  // The intended behavior is for the fatal failure
                 // in Subroutine() to abort the entire test.

  // The actual behavior: the function goes on after Subroutine() returns.
  int* p = nullptr;
  *p = 3;  // Segfault!
}
```

To alleviate this, googletest provides three different solutions: exceptions,
the `(ASSERT|EXPECT)_NO_FATAL_FAILURE` assertions, and the `HasFatalFailure()`
function. They are described in the following three subsections.

#### Asserting on Subroutines with an exception

The following code turns a fatal assertion failure into an exception:

```c++
class ThrowListener : public testing::EmptyTestEventListener {
  void OnTestPartResult(const testing::TestPartResult& result) override {
    if (result.type() == testing::TestPartResult::kFatalFailure) {
      throw testing::AssertionException(result);
    }
  }
};
int main(int argc, char** argv) {
  ...
  testing::UnitTest::GetInstance()->listeners().Append(new ThrowListener);
  return RUN_ALL_TESTS();
}
```

This listener should be added after other listeners if you have any; otherwise
the listeners before it won't see the failing `OnTestPartResult`.

#### Asserting on Subroutines

As shown above, if your test calls a subroutine that has an `ASSERT_*` failure
in it, the test will continue after the subroutine returns. This may not be what
you want.

Often people want fatal failures to propagate like exceptions. For that
googletest offers the following macros:

Fatal assertion                       | Nonfatal assertion                    | Verifies
------------------------------------- | ------------------------------------- | --------
`ASSERT_NO_FATAL_FAILURE(statement);` | `EXPECT_NO_FATAL_FAILURE(statement);` | `statement` doesn't generate any new fatal failures in the current thread.

Only failures in the thread that executes the assertion are checked to determine
the result of this type of assertion. If `statement` creates new threads,
failures in these threads are ignored.

Examples:

```c++
ASSERT_NO_FATAL_FAILURE(Foo());

int i;
EXPECT_NO_FATAL_FAILURE({
  i = Bar();
});
```

Assertions from multiple threads are currently not supported on Windows.

#### Checking for Failures in the Current Test

`HasFatalFailure()` in the `::testing::Test` class returns `true` if an
assertion in the current test has suffered a fatal failure. This allows
functions to catch fatal failures in a sub-routine and return early.

```c++
class Test {
 public:
  ...
  static bool HasFatalFailure();
};
```

The typical usage, which simulates the behavior of a thrown exception, is:

```c++
TEST(FooTest, Bar) {
  Subroutine();
  // Aborts if Subroutine() had a fatal failure.
  if (HasFatalFailure()) return;

  // The following won't be executed.
  ...
}
```

If `HasFatalFailure()` is used outside of `TEST()`, `TEST_F()`, or a test
fixture, you must add the `::testing::Test::` prefix, as in:

```c++
if (testing::Test::HasFatalFailure()) return;
```

Similarly, `HasNonfatalFailure()` returns `true` if the current test has at
least one non-fatal failure, and `HasFailure()` returns `true` if the current
test has at least one failure of either kind.

## Logging Additional Information

In your test code, you can call `RecordProperty("key", value)` to log additional
information, where `value` can be either a string or an `int`. The *last* value
recorded for a key will be emitted to the
[XML output](#generating-an-xml-report) if you specify one. For example, the
test

```c++
TEST_F(WidgetUsageTest, MinAndMaxWidgets) {
  RecordProperty("MaximumWidgets", ComputeMaxUsage());
  RecordProperty("MinimumWidgets", ComputeMinUsage());
}
```

will output XML like this:

```xml
  ...
    <testcase name="MinAndMaxWidgets" file="test.cpp" line="1" status="run" time="0.006" classname="WidgetUsageTest" MaximumWidgets="12" MinimumWidgets="9" />
  ...
```

{: .callout .note}
> NOTE:
>
> *   `RecordProperty()` is a static member of the `Test` class. Therefore it
>     needs to be prefixed with `::testing::Test::` if used outside of the
>     `TEST` body and the test fixture class.
> *   *`key`* must be a valid XML attribute name, and cannot conflict with the
>     ones already used by googletest (`name`, `status`, `time`, `classname`,
>     `type_param`, and `value_param`).
> *   Calling `RecordProperty()` outside of the lifespan of a test is allowed.
>     If it's called outside of a test but between a test suite's
>     `SetUpTestSuite()` and `TearDownTestSuite()` methods, it will be
>     attributed to the XML element for the test suite. If it's called outside
>     of all test suites (e.g. in a test environment), it will be attributed to
>     the top-level XML element.
863
## Sharing Resources Between Tests in the Same Test Suite

googletest creates a new test fixture object for each test in order to make
tests independent and easier to debug. However, sometimes tests use resources
that are expensive to set up, making the one-copy-per-test model prohibitively
expensive.

If the tests don't change the resource, there's no harm in their sharing a
single resource copy. So, in addition to per-test set-up/tear-down, googletest
also supports per-test-suite set-up/tear-down. To use it:

1.  In your test fixture class (say `FooTest`), declare as `static` some member
    variables to hold the shared resources.
2.  Outside your test fixture class (typically just below it), define those
    member variables, optionally giving them initial values.
3.  In the same test fixture class, define a `static void SetUpTestSuite()`
    function (remember not to spell it as **`SetupTestSuite`** with a small
    `u`!) to set up the shared resources and a `static void TearDownTestSuite()`
    function to tear them down.

That's it! googletest automatically calls `SetUpTestSuite()` before running the
*first test* in the `FooTest` test suite (i.e. before creating the first
`FooTest` object), and calls `TearDownTestSuite()` after running the *last test*
in it (i.e. after deleting the last `FooTest` object). In between, the tests can
use the shared resources.

Remember that the test order is undefined, so your code can't depend on a test
preceding or following another. Also, the tests must either not modify the state
of any shared resource, or, if they do modify it, they must restore it to its
original state before passing control to the next test.

Note that `SetUpTestSuite()` may be called multiple times for a test fixture
class that has derived classes, so you should not expect code in the function
body to be run only once. Also, derived classes still have access to shared
resources defined as static members, so careful consideration is needed when
managing shared resources to avoid memory leaks.

Here's an example of per-test-suite set-up and tear-down:

```c++
class FooTest : public testing::Test {
 protected:
  // Per-test-suite set-up.
  // Called before the first test in this test suite.
  // Can be omitted if not needed.
  static void SetUpTestSuite() {
    // Avoid reallocating static objects if called in subclasses of FooTest.
    if (shared_resource_ == nullptr) {
      shared_resource_ = new ...;
    }
  }

  // Per-test-suite tear-down.
  // Called after the last test in this test suite.
  // Can be omitted if not needed.
  static void TearDownTestSuite() {
    delete shared_resource_;
    shared_resource_ = nullptr;
  }

  // You can define per-test set-up logic as usual.
  void SetUp() override { ... }

  // You can define per-test tear-down logic as usual.
  void TearDown() override { ... }

  // Some expensive resource shared by all tests.
  static T* shared_resource_;
};

T* FooTest::shared_resource_ = nullptr;

TEST_F(FooTest, Test1) {
  ... you can refer to shared_resource_ here ...
}

TEST_F(FooTest, Test2) {
  ... you can refer to shared_resource_ here ...
}
```

{: .callout .note}
NOTE: Though the above code declares `SetUpTestSuite()` protected, it may
sometimes be necessary to declare it public, such as when using it with
`TEST_P`.

## Global Set-Up and Tear-Down

Just as you can do set-up and tear-down at the test level and the test suite
level, you can also do it at the test program level. Here's how.

First, you subclass the `::testing::Environment` class to define a test
environment, which knows how to perform set-up and tear-down:

```c++
class Environment : public ::testing::Environment {
 public:
  ~Environment() override {}

  // Override this to define how to set up the environment.
  void SetUp() override {}

  // Override this to define how to tear down the environment.
  void TearDown() override {}
};
```

Then, you register an instance of your environment class with googletest by
calling the `::testing::AddGlobalTestEnvironment()` function:

```c++
Environment* AddGlobalTestEnvironment(Environment* env);
```

Now, when `RUN_ALL_TESTS()` is called, it first calls the `SetUp()` method of
each environment object, then runs the tests if none of the environments
reported fatal failures and `GTEST_SKIP()` was not called. `RUN_ALL_TESTS()`
always calls `TearDown()` with each environment object, regardless of whether or
not the tests were run.

It's OK to register multiple environment objects. In this case, their `SetUp()`
will be called in the order they are registered, and their `TearDown()` will be
called in the reverse order.

Note that googletest takes ownership of the registered environment objects.
Therefore **do not delete them** by yourself.

You should call `AddGlobalTestEnvironment()` before `RUN_ALL_TESTS()` is called,
probably in `main()`. If you use `gtest_main`, you need to call this before
`main()` starts for it to take effect. One way to do this is to define a global
variable like this:

```c++
testing::Environment* const foo_env =
    testing::AddGlobalTestEnvironment(new FooEnvironment);
```

However, we strongly recommend that you write your own `main()` and call
`AddGlobalTestEnvironment()` there, as relying on the initialization of global
variables makes the code harder to read and may cause problems when you register
multiple environments from different translation units and the environments have
dependencies among them (remember that the compiler doesn't guarantee the order
in which global variables from different translation units are initialized).

## Value-Parameterized Tests

*Value-parameterized tests* allow you to test your code with different
parameters without writing multiple copies of the same test. This is useful in a
number of situations, for example:

*   You have a piece of code whose behavior is affected by one or more
    command-line flags. You want to make sure your code performs correctly for
    various values of those flags.
*   You want to test different implementations of an OO interface.
*   You want to test your code over various inputs (a.k.a. data-driven testing).
    This feature is easy to abuse, so please exercise your good sense when doing
    it!

### How to Write Value-Parameterized Tests

To write value-parameterized tests, first you should define a fixture class. It
must be derived from both `testing::Test` and `testing::WithParamInterface<T>`
(the latter is a pure interface), where `T` is the type of your parameter
values. For convenience, you can just derive the fixture class from
`testing::TestWithParam<T>`, which itself is derived from both `testing::Test`
and `testing::WithParamInterface<T>`. `T` can be any copyable type. If it's a
raw pointer, you are responsible for managing the lifespan of the pointed
values.

{: .callout .note}
NOTE: If your test fixture defines `SetUpTestSuite()` or `TearDownTestSuite()`
they must be declared **public** rather than **protected** in order to use
`TEST_P`.

```c++
class FooTest :
    public testing::TestWithParam<const char*> {
  // You can implement all the usual fixture class members here.
  // To access the test parameter, call GetParam() from class
  // TestWithParam<T>.
};

// Or, when you want to add parameters to a pre-existing fixture class:
class BaseTest : public testing::Test {
  ...
};
class BarTest : public BaseTest,
                public testing::WithParamInterface<const char*> {
  ...
};
```

Then, use the `TEST_P` macro to define as many test patterns using this fixture
as you want. The `_P` suffix is for "parameterized" or "pattern", whichever you
prefer to think of it as.

```c++
TEST_P(FooTest, DoesBlah) {
  // Inside a test, access the test parameter with the GetParam() method
  // of the TestWithParam<T> class:
  EXPECT_TRUE(foo.Blah(GetParam()));
  ...
}

TEST_P(FooTest, HasBlahBlah) {
  ...
}
```

Finally, you can use the `INSTANTIATE_TEST_SUITE_P` macro to instantiate the
test suite with any set of parameters you want. GoogleTest defines a number of
functions for generating test parameters—see details at
[`INSTANTIATE_TEST_SUITE_P`](reference/testing.md#INSTANTIATE_TEST_SUITE_P) in
the Testing Reference.

For example, the following statement will instantiate tests from the `FooTest`
test suite each with parameter values `"meeny"`, `"miny"`, and `"moe"` using the
[`Values`](reference/testing.md#param-generators) parameter generator:

```c++
INSTANTIATE_TEST_SUITE_P(MeenyMinyMoe,
                         FooTest,
                         testing::Values("meeny", "miny", "moe"));
```

{: .callout .note}
NOTE: The code above must be placed at global or namespace scope, not at
function scope.

The first argument to `INSTANTIATE_TEST_SUITE_P` is a unique name for the
instantiation of the test suite. The next argument is the name of the test
pattern, and the last is the
[parameter generator](reference/testing.md#param-generators).

The parameter generator expression is not evaluated until GoogleTest is
initialized (via `InitGoogleTest()`). Any prior initialization done in the
`main` function will be accessible from the parameter generator, for example,
the results of flag parsing.

You can instantiate a test pattern more than once, so to distinguish different
instances of the pattern, the instantiation name is added as a prefix to the
actual test suite name. Remember to pick unique prefixes for different
instantiations. The tests from the instantiation above will have these names:

*   `MeenyMinyMoe/FooTest.DoesBlah/0` for `"meeny"`
*   `MeenyMinyMoe/FooTest.DoesBlah/1` for `"miny"`
*   `MeenyMinyMoe/FooTest.DoesBlah/2` for `"moe"`
*   `MeenyMinyMoe/FooTest.HasBlahBlah/0` for `"meeny"`
*   `MeenyMinyMoe/FooTest.HasBlahBlah/1` for `"miny"`
*   `MeenyMinyMoe/FooTest.HasBlahBlah/2` for `"moe"`

You can use these names in [`--gtest_filter`](#running-a-subset-of-the-tests).

The following statement will instantiate all tests from `FooTest` again, each
with parameter values `"cat"` and `"dog"` using the
[`ValuesIn`](reference/testing.md#param-generators) parameter generator:

```c++
const char* pets[] = {"cat", "dog"};
INSTANTIATE_TEST_SUITE_P(Pets, FooTest, testing::ValuesIn(pets));
```

The tests from the instantiation above will have these names:

*   `Pets/FooTest.DoesBlah/0` for `"cat"`
*   `Pets/FooTest.DoesBlah/1` for `"dog"`
*   `Pets/FooTest.HasBlahBlah/0` for `"cat"`
*   `Pets/FooTest.HasBlahBlah/1` for `"dog"`

Please note that `INSTANTIATE_TEST_SUITE_P` will instantiate *all* tests in the
given test suite, whether their definitions come before or *after* the
`INSTANTIATE_TEST_SUITE_P` statement.

Additionally, by default, every `TEST_P` without a corresponding
`INSTANTIATE_TEST_SUITE_P` causes a failing test in test suite
`GoogleTestVerification`. If you have a test suite where that omission is not an
error, for example it is in a library that may be linked in for other reasons or
where the list of test cases is dynamic and may be empty, then this check can be
suppressed by tagging the test suite:

```c++
GTEST_ALLOW_UNINSTANTIATED_PARAMETERIZED_TEST(FooTest);
```

You can see [sample7_unittest.cc] and [sample8_unittest.cc] for more examples.

[sample7_unittest.cc]: https://github.com/google/googletest/blob/main/googletest/samples/sample7_unittest.cc "Parameterized Test example"
[sample8_unittest.cc]: https://github.com/google/googletest/blob/main/googletest/samples/sample8_unittest.cc "Parameterized Test example with multiple parameters"

### Creating Value-Parameterized Abstract Tests

In the above, we define and instantiate `FooTest` in the *same* source file.
Sometimes you may want to define value-parameterized tests in a library and let
other people instantiate them later. This pattern is known as *abstract tests*.
As an example of its application, when you are designing an interface you can
write a standard suite of abstract tests (perhaps using a factory function as
the test parameter) that all implementations of the interface are expected to
pass. When someone implements the interface, they can instantiate your suite to
get all the interface-conformance tests for free.

To define abstract tests, you should organize your code like this:

1.  Put the definition of the parameterized test fixture class (e.g. `FooTest`)
    in a header file, say `foo_param_test.h`. Think of this as *declaring* your
    abstract tests.
2.  Put the `TEST_P` definitions in `foo_param_test.cc`, which includes
    `foo_param_test.h`. Think of this as *implementing* your abstract tests.

Once they are defined, you can instantiate them by including `foo_param_test.h`,
invoking `INSTANTIATE_TEST_SUITE_P()`, and depending on the library target that
contains `foo_param_test.cc`. You can instantiate the same abstract test suite
multiple times, possibly in different source files.

### Specifying Names for Value-Parameterized Test Parameters

The optional last argument to `INSTANTIATE_TEST_SUITE_P()` allows the user to
specify a function or functor that generates custom test name suffixes based on
the test parameters. The function should accept one argument of type
`testing::TestParamInfo<class ParamType>` and return `std::string`.

`testing::PrintToStringParamName` is a builtin test suffix generator that
returns the value of `testing::PrintToString(GetParam())`. It does not work for
`std::string` or C strings.

{: .callout .note}
NOTE: test names must be non-empty, unique, and may only contain ASCII
alphanumeric characters. In particular, they
[should not contain underscores](faq.md#why-should-test-suite-names-and-test-names-not-contain-underscore).

```c++
class MyTestSuite : public testing::TestWithParam<int> {};

TEST_P(MyTestSuite, MyTest) {
  std::cout << "Example Test Param: " << GetParam() << std::endl;
}

INSTANTIATE_TEST_SUITE_P(MyGroup, MyTestSuite, testing::Range(0, 10),
                         testing::PrintToStringParamName());
```

Providing a custom functor allows for more control over test parameter name
generation, especially for types where the automatic conversion does not
generate helpful parameter names (e.g. strings as demonstrated above). The
following example illustrates this for multiple parameters, an enumeration type
and a string, and also demonstrates how to combine generators. It uses a lambda
for conciseness:

```c++
enum class MyType { MY_FOO = 0, MY_BAR = 1 };

class MyTestSuite
    : public testing::TestWithParam<std::tuple<MyType, std::string>> {};

INSTANTIATE_TEST_SUITE_P(
    MyGroup, MyTestSuite,
    testing::Combine(
        testing::Values(MyType::MY_FOO, MyType::MY_BAR),
        testing::Values("A", "B")),
    [](const testing::TestParamInfo<MyTestSuite::ParamType>& info) {
      std::string name = absl::StrCat(
          std::get<0>(info.param) == MyType::MY_FOO ? "Foo" : "Bar",
          std::get<1>(info.param));
      absl::c_replace_if(name, [](char c) { return !std::isalnum(c); }, '_');
      return name;
    });
```

## Typed Tests

Suppose you have multiple implementations of the same interface and want to make
sure that all of them satisfy some common requirements. Or, you may have defined
several types that are supposed to conform to the same "concept" and you want to
verify it. In both cases, you want the same test logic repeated for different
types.

While you can write one `TEST` or `TEST_F` for each type you want to test (and
you may even factor the test logic into a function template that you invoke from
the `TEST`), it's tedious and doesn't scale: if you want `m` tests over `n`
types, you'll end up writing `m*n` `TEST`s.

*Typed tests* allow you to repeat the same test logic over a list of types. You
only need to write the test logic once, although you must know the type list
when writing typed tests. Here's how you do it:

First, define a fixture class template. It should be parameterized by a type.
Remember to derive it from `::testing::Test`:

```c++
template <typename T>
class FooTest : public testing::Test {
 public:
  ...
  using List = std::list<T>;
  static T shared_;
  T value_;
};
```

Next, associate a list of types with the test suite, which will be repeated for
each type in the list:

```c++
using MyTypes = ::testing::Types<char, int, unsigned int>;
TYPED_TEST_SUITE(FooTest, MyTypes);
```

The type alias (`using` or `typedef`) is necessary for the `TYPED_TEST_SUITE`
macro to parse correctly. Otherwise the compiler will think that each comma in
the type list introduces a new macro argument.

Then, use `TYPED_TEST()` instead of `TEST_F()` to define a typed test for this
test suite. You can repeat this as many times as you want:

```c++
TYPED_TEST(FooTest, DoesBlah) {
  // Inside a test, refer to the special name TypeParam to get the type
  // parameter.  Since we are inside a derived class template, C++ requires
  // us to visit the members of FooTest via 'this'.
  TypeParam n = this->value_;

  // To visit static members of the fixture, add the 'TestFixture::'
  // prefix.
  n += TestFixture::shared_;

  // To refer to typedefs in the fixture, add the 'typename TestFixture::'
  // prefix.  The 'typename' is required to satisfy the compiler.
  typename TestFixture::List values;

  values.push_back(n);
  ...
}

TYPED_TEST(FooTest, HasPropertyA) { ... }
```

You can see [sample6_unittest.cc] for a complete example.

[sample6_unittest.cc]: https://github.com/google/googletest/blob/main/googletest/samples/sample6_unittest.cc "Typed Test example"

## Type-Parameterized Tests

*Type-parameterized tests* are like typed tests, except that they don't require
you to know the list of types ahead of time. Instead, you can define the test
logic first and instantiate it with different type lists later. You can even
instantiate it more than once in the same program.

If you are designing an interface or concept, you can define a suite of
type-parameterized tests to verify properties that any valid implementation of
the interface/concept should have. Then, the author of each implementation can
just instantiate the test suite with their type to verify that it conforms to
the requirements, without having to write similar tests repeatedly. Here's an
example:

First, define a fixture class template, as we did with typed tests:

```c++
template <typename T>
class FooTest : public testing::Test {
  void DoSomethingInteresting();
  ...
};
```

Next, declare that you will define a type-parameterized test suite:

```c++
TYPED_TEST_SUITE_P(FooTest);
```

Then, use `TYPED_TEST_P()` to define a type-parameterized test. You can repeat
this as many times as you want:

```c++
TYPED_TEST_P(FooTest, DoesBlah) {
  // Inside a test, refer to TypeParam to get the type parameter.
  TypeParam n = 0;

  // You will need to use `this` explicitly to refer to fixture members.
  this->DoSomethingInteresting();
  ...
}

TYPED_TEST_P(FooTest, HasPropertyA) { ... }
```

Now the tricky part: you need to register all test patterns using the
`REGISTER_TYPED_TEST_SUITE_P` macro before you can instantiate them. The first
argument of the macro is the test suite name; the rest are the names of the
tests in this test suite:

```c++
REGISTER_TYPED_TEST_SUITE_P(FooTest,
                            DoesBlah, HasPropertyA);
```

Finally, you are free to instantiate the pattern with the types you want. If you
put the above code in a header file, you can `#include` it in multiple C++
source files and instantiate it multiple times.

```c++
using MyTypes = ::testing::Types<char, int, unsigned int>;
INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, MyTypes);
```

To distinguish different instances of the pattern, the first argument to the
`INSTANTIATE_TYPED_TEST_SUITE_P` macro is a prefix that will be added to the
actual test suite name. Remember to pick unique prefixes for different
instances.

In the special case where the type list contains only one type, you can write
that type directly without `::testing::Types<...>`, like this:

```c++
INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, int);
```

You can see [sample6_unittest.cc] for a complete example.

## Testing Private Code

If you change your software's internal implementation, your tests should not
break as long as the change is not observable by users. Therefore, **per the
black-box testing principle, most of the time you should test your code through
its public interfaces.**

**If you still find yourself needing to test internal implementation code,
consider if there's a better design.** The desire to test internal
implementation is often a sign that the class is doing too much. Consider
extracting an implementation class, and testing it. Then use that implementation
class in the original class.

If you absolutely have to test non-public interface code though, you can. There
are two cases to consider:

*   Static functions (*not* the same as static member functions!) or unnamed
    namespaces, and
*   Private or protected class members

To test them, we use the following special techniques:

*   Both static functions and definitions/declarations in an unnamed namespace
    are only visible within the same translation unit. To test them, you can
    `#include` the entire `.cc` file being tested in your `*_test.cc` file.
    (#including `.cc` files is not a good way to reuse code - you should not do
    this in production code!)

    However, a better approach is to move the private code into the
    `foo::internal` namespace, where `foo` is the namespace your project
    normally uses, and put the private declarations in a `*-internal.h` file.
    Your production `.cc` files and your tests are allowed to include this
    internal header, but your clients are not. This way, you can fully test your
    internal implementation without leaking it to your clients.

*   Private class members are only accessible from within the class or by
    friends. To access a class' private members, you can declare your test
    fixture as a friend to the class and define accessors in your fixture. Tests
    using the fixture can then access the private members of your production
    class via the accessors in the fixture. Note that even though your fixture
    is a friend to your production class, your tests are not automatically
    friends to it, as they are technically defined in sub-classes of the
    fixture.

    Another way to test private members is to refactor them into an
    implementation class, which is then declared in a `*-internal.h` file. Your
    clients aren't allowed to include this header but your tests can. This is
    called the
    [Pimpl](https://www.gamedev.net/articles/programming/general-and-gameplay-programming/the-c-pimpl-r1794/)
    (Private Implementation) idiom.

    Or, you can declare an individual test as a friend of your class by adding
    this line in the class body:

    ```c++
        FRIEND_TEST(TestSuiteName, TestName);
    ```

    For example,

    ```c++
    // foo.h
    class Foo {
      ...
     private:
      FRIEND_TEST(FooTest, BarReturnsZeroOnNull);

      int Bar(void* x);
    };

    // foo_test.cc
    ...
    TEST(FooTest, BarReturnsZeroOnNull) {
      Foo foo;
      EXPECT_EQ(foo.Bar(NULL), 0);  // Uses Foo's private member Bar().
    }
    ```

    Pay special attention when your class is defined in a namespace. If you want
    your test fixtures and tests to be friends of your class, then they must be
    defined in the exact same namespace (no anonymous or inline namespaces).

    For example, if the code to be tested looks like:

    ```c++
    namespace my_namespace {

    class Foo {
      friend class FooTest;
      FRIEND_TEST(FooTest, Bar);
      FRIEND_TEST(FooTest, Baz);
      ... definition of the class Foo ...
    };

    }  // namespace my_namespace
    ```

    Your test code should be something like:

    ```c++
    namespace my_namespace {

    class FooTest : public testing::Test {
     protected:
      ...
    };

    TEST_F(FooTest, Bar) { ... }
    TEST_F(FooTest, Baz) { ... }

    }  // namespace my_namespace
    ```

## "Catching" Failures

If you are building a testing utility on top of googletest, you'll want to test
your utility. What framework would you use to test it? googletest, of course.

The challenge is to verify that your testing utility reports failures correctly.
In frameworks that report a failure by throwing an exception, you could catch
the exception and assert on it. But googletest doesn't use exceptions, so how do
we test that a piece of code generates an expected failure?

`"gtest/gtest-spi.h"` contains some constructs to do this. After #including this
header, you can use

```c++
  EXPECT_FATAL_FAILURE(statement, substring);
```

to assert that `statement` generates a fatal (e.g. `ASSERT_*`) failure in the
current thread whose message contains the given `substring`, or use

```c++
  EXPECT_NONFATAL_FAILURE(statement, substring);
```

if you are expecting a non-fatal (e.g. `EXPECT_*`) failure.

Only failures in the current thread are checked when determining the result of
this type of expectation. If `statement` creates new threads, failures in these
threads are also ignored. If you want to catch failures in other threads as
well, use one of the following macros instead:

```c++
  EXPECT_FATAL_FAILURE_ON_ALL_THREADS(statement, substring);
  EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(statement, substring);
```

{: .callout .note}
NOTE: Assertions from multiple threads are currently not supported on Windows.

For technical reasons, there are some caveats:

1.  You cannot stream a failure message to either macro.

2.  `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot reference
    local non-static variables or non-static members of `this` object.

3.  `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot return a
    value.

## Registering tests programmatically

The `TEST` macros handle the vast majority of use cases, but there are a few
where runtime registration logic is required. For those cases, the framework
provides `::testing::RegisterTest`, which allows callers to register arbitrary
tests dynamically.

This is an advanced API only to be used when the `TEST` macros are insufficient.
The macros should be preferred when possible, as they avoid most of the
complexity of calling this function.

It provides the following signature:

```c++
template <typename Factory>
TestInfo* RegisterTest(const char* test_suite_name, const char* test_name,
                       const char* type_param, const char* value_param,
                       const char* file, int line, Factory factory);
```

The `factory` argument is a factory callable (move-constructible) object or
function pointer that creates a new instance of the Test object. Ownership of
the created object is transferred to the caller. The signature of the callable
is `Fixture*()`, where `Fixture` is the test fixture class for the test. All
tests registered with the same `test_suite_name` must return the same fixture
type. This is checked at runtime.

The framework will infer the fixture class from the factory and will call the
`SetUpTestSuite` and `TearDownTestSuite` methods for it.

`RegisterTest` must be called before `RUN_ALL_TESTS()` is invoked; otherwise the
behavior is undefined.

Use case example:

```c++
class MyFixture : public testing::Test {
 public:
  // All of these optional, just like in regular macro usage.
  static void SetUpTestSuite() { ... }
  static void TearDownTestSuite() { ... }
  void SetUp() override { ... }
  void TearDown() override { ... }
};

class MyTest : public MyFixture {
 public:
  explicit MyTest(int data) : data_(data) {}
  void TestBody() override { ... }

 private:
  int data_;
};

void RegisterMyTests(const std::vector<int>& values) {
  for (int v : values) {
    testing::RegisterTest(
        "MyFixture", ("Test" + std::to_string(v)).c_str(), nullptr,
        std::to_string(v).c_str(),
        __FILE__, __LINE__,
        // Important to use the fixture type as the return type here.
        [=]() -> MyFixture* { return new MyTest(v); });
  }
}
...
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  std::vector<int> values_to_test = LoadValuesFromConfig();
  RegisterMyTests(values_to_test);
  ...
  return RUN_ALL_TESTS();
}
```

## Getting the Current Test's Name

Sometimes a function may need to know the name of the currently running test.
For example, you may be using the `SetUp()` method of your test fixture to set
the golden file name based on which test is running. The
[`TestInfo`](reference/testing.md#TestInfo) class has this information.

To obtain a `TestInfo` object for the currently running test, call
`current_test_info()` on the [`UnitTest`](reference/testing.md#UnitTest)
singleton object:

```c++
  // Gets information about the currently running test.
  // Do NOT delete the returned object - it's managed by the UnitTest class.
  const testing::TestInfo* const test_info =
      testing::UnitTest::GetInstance()->current_test_info();

  printf("We are in test %s of test suite %s.\n",
         test_info->name(),
         test_info->test_suite_name());
```

`current_test_info()` returns a null pointer if no test is running. In
particular, you cannot find the test suite name in `SetUpTestSuite()`,
`TearDownTestSuite()` (where you know the test suite name implicitly), or
functions called from them.
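
For example, a fixture's `SetUp()` could derive a golden-file path from the
suite and test names exposed by `TestInfo`. The helper below is a plain-C++
sketch; the `GoldenFilePath` name, the `golden/` directory, and the `.golden`
extension are illustrative assumptions, not part of googletest:

```c++
#include <string>

// Builds a per-test golden-file path from the values returned by
// TestInfo::test_suite_name() and TestInfo::name().
std::string GoldenFilePath(const std::string& suite, const std::string& test) {
  return "golden/" + suite + "." + test + ".golden";
}
```

Inside a fixture, one might call
`GoldenFilePath(test_info->test_suite_name(), test_info->name())` from
`SetUp()`.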

## Extending googletest by Handling Test Events

googletest provides an **event listener API** to let you receive notifications
about the progress of a test program and test failures. The events you can
listen to include the start and end of the test program, a test suite, or a test
method, among others. You may use this API to augment or replace the standard
console output, replace the XML output, or provide a completely different form
of output, such as a GUI or a database. You can also use test events as
checkpoints to implement a resource leak checker, for example.

### Defining Event Listeners

To define an event listener, you subclass either
[`testing::TestEventListener`](reference/testing.md#TestEventListener) or
[`testing::EmptyTestEventListener`](reference/testing.md#EmptyTestEventListener).
The former is an (abstract) interface, where *each pure virtual method can be
overridden to handle a test event* (for example, when a test starts, the
`OnTestStart()` method will be called). The latter provides an empty
implementation of all methods in the interface, so that a subclass only needs
to override the methods it cares about.

When an event is fired, its context is passed to the handler function as an
argument. The following argument types are used:

*   `UnitTest` reflects the state of the entire test program,
*   `TestSuite` has information about a test suite, which can contain one or
    more tests,
*   `TestInfo` contains the state of a test, and
*   `TestPartResult` represents the result of a test assertion.

An event handler function can examine the argument it receives to find out
interesting information about the event and the test program's state.

Here's an example:

```c++
  class MinimalistPrinter : public testing::EmptyTestEventListener {
    // Called before a test starts.
    void OnTestStart(const testing::TestInfo& test_info) override {
      printf("*** Test %s.%s starting.\n",
             test_info.test_suite_name(), test_info.name());
    }

    // Called after a failed assertion or a SUCCESS().
    void OnTestPartResult(const testing::TestPartResult& test_part_result) override {
      printf("%s in %s:%d\n%s\n",
             test_part_result.failed() ? "*** Failure" : "Success",
             test_part_result.file_name(),
             test_part_result.line_number(),
             test_part_result.summary());
    }

    // Called after a test ends.
    void OnTestEnd(const testing::TestInfo& test_info) override {
      printf("*** Test %s.%s ending.\n",
             test_info.test_suite_name(), test_info.name());
    }
  };
```

### Using Event Listeners

To use the event listener you have defined, add an instance of it to the
googletest event listener list (represented by class
[`TestEventListeners`](reference/testing.md#TestEventListeners) - note the "s"
at the end of the name) in your `main()` function, before calling
`RUN_ALL_TESTS()`:

```c++
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  // Gets hold of the event listener list.
  testing::TestEventListeners& listeners =
      testing::UnitTest::GetInstance()->listeners();
  // Adds a listener to the end.  googletest takes ownership.
  listeners.Append(new MinimalistPrinter);
  return RUN_ALL_TESTS();
}
```

There's only one problem: the default test result printer is still in effect, so
its output will mingle with the output from your minimalist printer. To suppress
the default printer, just release it from the event listener list and delete it.
You can do so by adding one line:

```c++
  ...
  delete listeners.Release(listeners.default_result_printer());
  listeners.Append(new MinimalistPrinter);
  return RUN_ALL_TESTS();
```

Now, sit back and enjoy a completely different output from your tests. For more
details, see [sample9_unittest.cc].

[sample9_unittest.cc]: https://github.com/google/googletest/blob/main/googletest/samples/sample9_unittest.cc "Event listener example"

You may append more than one listener to the list. When an `On*Start()` or
`OnTestPartResult()` event is fired, the listeners will receive it in the order
they appear in the list (since new listeners are added to the end of the list,
the default text printer and the default XML generator will receive the event
first). An `On*End()` event will be received by the listeners in the *reverse*
order. This allows output by listeners added later to be framed by output from
listeners added earlier.
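
This framing can be modeled in a few lines of plain C++ (a sketch of the
dispatch order only, not googletest's actual implementation):

```c++
#include <string>
#include <vector>

// Models TestEventListeners dispatch: Start-type events go front-to-back,
// End-type events go back-to-front, so a later listener's output is nested
// inside an earlier listener's output.
std::vector<std::string> DispatchOrder(
    const std::vector<std::string>& listeners) {
  std::vector<std::string> log;
  for (const auto& l : listeners) log.push_back(l + ".OnTestStart");
  // ... the test body would run here ...
  for (auto it = listeners.rbegin(); it != listeners.rend(); ++it)
    log.push_back(*it + ".OnTestEnd");
  return log;
}
```

With listeners `{"default", "minimalist"}`, the log is `default.OnTestStart`,
`minimalist.OnTestStart`, `minimalist.OnTestEnd`, `default.OnTestEnd`.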

### Generating Failures in Listeners

You may use failure-raising macros (`EXPECT_*()`, `ASSERT_*()`, `FAIL()`, etc.)
when processing an event. There are some restrictions:

1.  You cannot generate any failure in `OnTestPartResult()` (otherwise it will
    cause `OnTestPartResult()` to be called recursively).
2.  A listener that handles `OnTestPartResult()` is not allowed to generate any
    failure.

When you add listeners to the listener list, you should put listeners that
handle `OnTestPartResult()` *before* listeners that can generate failures. This
ensures that failures generated by the latter are attributed to the right test
by the former.

See [sample10_unittest.cc] for an example of a failure-raising listener.

[sample10_unittest.cc]: https://github.com/google/googletest/blob/main/googletest/samples/sample10_unittest.cc "Failure-raising listener example"

## Running Test Programs: Advanced Options

googletest test programs are ordinary executables. Once built, you can run them
directly and affect their behavior via the following environment variables
and/or command line flags. For the flags to work, your programs must call
`::testing::InitGoogleTest()` before calling `RUN_ALL_TESTS()`.

To see a list of supported flags and their usage, please run your test program
with the `--help` flag. You can also use `-h`, `-?`, or `/?` for short.

If an option is specified both by an environment variable and by a flag, the
latter takes precedence.

### Selecting Tests

#### Listing Test Names

Sometimes it is necessary to list the available tests in a program before
running them so that a filter may be applied if needed. Including the flag
`--gtest_list_tests` overrides all other flags and lists tests in the following
format:

```none
TestSuite1.
  TestName1
  TestName2
TestSuite2.
  TestName
```

None of the tests listed are actually run if the flag is provided. There is no
corresponding environment variable for this flag.

#### Running a Subset of the Tests

By default, a googletest program runs all tests the user has defined. Sometimes,
you want to run only a subset of the tests (e.g. for debugging or quickly
verifying a change). If you set the `GTEST_FILTER` environment variable or the
`--gtest_filter` flag to a filter string, googletest will only run the tests
whose full names (in the form of `TestSuiteName.TestName`) match the filter.

The format of a filter is a '`:`'-separated list of wildcard patterns (called
the *positive patterns*) optionally followed by a '`-`' and another
'`:`'-separated pattern list (called the *negative patterns*). A test matches
the filter if and only if it matches any of the positive patterns but does not
match any of the negative patterns.

A pattern may contain `'*'` (matches any string) or `'?'` (matches any single
character). For convenience, the filter `'*-NegativePatterns'` can also be
written as `'-NegativePatterns'`.
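
The `*`/`?` matching rules can be sketched with a small recursive matcher (an
illustration only; googletest's real implementation differs in its details):

```c++
// Returns true if str matches pat, where '*' matches any string (possibly
// empty) and '?' matches exactly one character.
bool PatternMatches(const char* pat, const char* str) {
  if (*pat == '\0') return *str == '\0';
  if (*pat == '*')
    return PatternMatches(pat + 1, str) ||
           (*str != '\0' && PatternMatches(pat, str + 1));
  return *str != '\0' && (*pat == '?' || *pat == *str) &&
         PatternMatches(pat + 1, str + 1);
}
```

A full filter would then accept a test name if it matches any positive pattern
and no negative pattern.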

For example:

*   `./foo_test` Has no flag, and thus runs all its tests.
*   `./foo_test --gtest_filter=*` Also runs everything, due to the single
    match-everything `*` value.
*   `./foo_test --gtest_filter=FooTest.*` Runs everything in test suite
    `FooTest`.
*   `./foo_test --gtest_filter=*Null*:*Constructor*` Runs any test whose full
    name contains either `"Null"` or `"Constructor"`.
*   `./foo_test --gtest_filter=-*DeathTest.*` Runs all non-death tests.
*   `./foo_test --gtest_filter=FooTest.*-FooTest.Bar` Runs everything in test
    suite `FooTest` except `FooTest.Bar`.
*   `./foo_test --gtest_filter=FooTest.*:BarTest.*-FooTest.Bar:BarTest.Foo` Runs
    everything in test suite `FooTest` except `FooTest.Bar` and everything in
    test suite `BarTest` except `BarTest.Foo`.

#### Stop test execution upon first failure

By default, a googletest program runs all tests the user has defined. In some
cases (e.g. iterative test development & execution) it may be desirable to stop
test execution upon the first failure (trading improved latency for
completeness). If the `GTEST_FAIL_FAST` environment variable or the
`--gtest_fail_fast` flag is set, the test runner will stop execution as soon as
the first test failure is found.

#### Temporarily Disabling Tests

If you have a broken test that you cannot fix right away, you can add the
`DISABLED_` prefix to its name. This will exclude it from execution. This is
better than commenting out the code or using `#if 0`, as disabled tests are
still compiled (and thus won't rot).

If you need to disable all tests in a test suite, you can either add `DISABLED_`
to the front of the name of each test, or alternatively add it to the front of
the test suite name.

For example, the following tests won't be run by googletest, even though they
will still be compiled:

```c++
// Tests that Foo does Abc.
TEST(FooTest, DISABLED_DoesAbc) { ... }

class DISABLED_BarTest : public testing::Test { ... };

// Tests that Bar does Xyz.
TEST_F(DISABLED_BarTest, DoesXyz) { ... }
```

{: .callout .note}
NOTE: This feature should only be used for temporary pain-relief. You still have
to fix the disabled tests at a later date. As a reminder, googletest will print
a banner warning you if a test program contains any disabled tests.

{: .callout .tip}
TIP: You can easily count the number of disabled tests you have using `grep`.
This number can be used as a metric for improving your test quality.

#### Temporarily Enabling Disabled Tests

To include disabled tests in test execution, just invoke the test program with
the `--gtest_also_run_disabled_tests` flag or set the
`GTEST_ALSO_RUN_DISABLED_TESTS` environment variable to a value other than `0`.
You can combine this with the `--gtest_filter` flag to further select which
disabled tests to run.

### Repeating the Tests

Once in a while you'll run into a test whose result is hit-or-miss. Perhaps it
will fail only 1% of the time, making it rather hard to reproduce the bug under
a debugger. This can be a major source of frustration.

The `--gtest_repeat` flag allows you to repeat all (or selected) test methods in
a program many times. Hopefully, a flaky test will eventually fail and give you
a chance to debug. Here's how to use it:

```none
$ foo_test --gtest_repeat=1000
Repeat foo_test 1000 times and don't stop at failures.

$ foo_test --gtest_repeat=-1
A negative count means repeating forever.

$ foo_test --gtest_repeat=1000 --gtest_break_on_failure
Repeat foo_test 1000 times, stopping at the first failure.  This
is especially useful when running under a debugger: when the test
fails, it will drop into the debugger and you can then inspect
variables and stacks.

$ foo_test --gtest_repeat=1000 --gtest_filter=FooBar.*
Repeat the tests whose name matches the filter 1000 times.
```

If your test program contains
[global set-up/tear-down](#global-set-up-and-tear-down) code, it will be
repeated in each iteration as well, as the flakiness may be in it. To avoid
repeating global set-up/tear-down, specify
`--gtest_recreate_environments_when_repeating=false`{.nowrap}.

You can also specify the repeat count by setting the `GTEST_REPEAT` environment
variable.

### Shuffling the Tests

You can specify the `--gtest_shuffle` flag (or set the `GTEST_SHUFFLE`
environment variable to `1`) to run the tests in a program in a random order.
This helps to reveal bad dependencies between tests.

By default, googletest uses a random seed calculated from the current time.
Therefore you'll get a different order every time. The console output includes
the random seed value, such that you can reproduce an order-related test failure
later. To specify the random seed explicitly, use the `--gtest_random_seed=SEED`
flag (or set the `GTEST_RANDOM_SEED` environment variable), where `SEED` is an
integer in the range [0, 99999]. The seed value 0 is special: it tells
googletest to do the default behavior of calculating the seed from the current
time.

If you combine this with `--gtest_repeat=N`, googletest will pick a different
random seed and re-shuffle the tests in each iteration.
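
The reason a recorded seed makes an order reproducible can be shown with
standard C++ facilities (this is an illustration, not googletest's actual
shuffling code):

```c++
#include <algorithm>
#include <random>
#include <string>
#include <vector>

// The same seed always yields the same permutation, so re-running with the
// printed seed reproduces the failing order.
std::vector<std::string> ShuffledOrder(std::vector<std::string> tests,
                                       int seed) {
  std::mt19937 rng(seed);
  std::shuffle(tests.begin(), tests.end(), rng);
  return tests;
}
```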

### Distributing Test Functions to Multiple Machines

If you have more than one machine you can use to run a test program, you might
want to run the test functions in parallel and get the result faster. We call
this technique *sharding*, where each machine is called a *shard*.

GoogleTest is compatible with test sharding. To take advantage of this feature,
your test runner (not part of GoogleTest) needs to do the following:

1.  Allocate a number of machines (shards) to run the tests.
1.  On each shard, set the `GTEST_TOTAL_SHARDS` environment variable to the
    total number of shards. It must be the same for all shards.
1.  On each shard, set the `GTEST_SHARD_INDEX` environment variable to the index
    of the shard. Different shards must be assigned different indices, which
    must be in the range `[0, GTEST_TOTAL_SHARDS - 1]`.
1.  Run the same test program on all shards. When GoogleTest sees the above two
    environment variables, it will select a subset of the test functions to run.
    Across all shards, each test function in the program will be run exactly
    once.
1.  Wait for all shards to finish, then collect and report the results.

Your project may have tests that were written without GoogleTest and thus don't
understand this protocol. In order for your test runner to figure out which
tests support sharding, it can set the environment variable
`GTEST_SHARD_STATUS_FILE` to a non-existent file path. If a test program
supports sharding, it will create this file to acknowledge that fact; otherwise
it will not create it. The actual contents of the file are not important at this
time, although we may put some useful information in it in the future.

Here's an example to make it clear. Suppose you have a test program `foo_test`
that contains the following 5 test functions:

```
TEST(A, V)
TEST(A, W)
TEST(B, X)
TEST(B, Y)
TEST(B, Z)
```

Suppose you have 3 machines at your disposal. To run the test functions in
parallel, you would set `GTEST_TOTAL_SHARDS` to 3 on all machines, and set
`GTEST_SHARD_INDEX` to 0, 1, and 2 on the machines respectively. Then you would
run the same `foo_test` on each machine.

GoogleTest reserves the right to change how the work is distributed across the
shards, but here's one possible scenario:

*   Machine #0 runs `A.V` and `B.X`.
*   Machine #1 runs `A.W` and `B.Y`.
*   Machine #2 runs `B.Z`.
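
One deterministic way a runner-side model can assign tests to shards is simple
round-robin by test index (a sketch only; GoogleTest's actual assignment may
differ):

```c++
// Test i runs on the shard whose GTEST_SHARD_INDEX equals i modulo
// GTEST_TOTAL_SHARDS, so every test runs on exactly one shard.
bool RunsOnShard(int test_index, int shard_index, int total_shards) {
  return test_index % total_shards == shard_index;
}
```

Under this scheme, with 5 tests and 3 shards, shard 0 gets tests 0 and 3,
shard 1 gets tests 1 and 4, and shard 2 gets test 2.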

### Controlling Test Output

#### Colored Terminal Output

googletest can use colors in its terminal output to make it easier to spot the
important information:

<pre>...
<font color="green">[----------]</font> 1 test from FooTest
<font color="green">[ RUN      ]</font> FooTest.DoesAbc
<font color="green">[       OK ]</font> FooTest.DoesAbc
<font color="green">[----------]</font> 2 tests from BarTest
<font color="green">[ RUN      ]</font> BarTest.HasXyzProperty
<font color="green">[       OK ]</font> BarTest.HasXyzProperty
<font color="green">[ RUN      ]</font> BarTest.ReturnsTrueOnSuccess
... some error messages ...
<font color="red">[   FAILED ]</font> BarTest.ReturnsTrueOnSuccess
...
<font color="green">[==========]</font> 30 tests from 14 test suites ran.
<font color="green">[   PASSED ]</font> 28 tests.
<font color="red">[   FAILED ]</font> 2 tests, listed below:
<font color="red">[   FAILED ]</font> BarTest.ReturnsTrueOnSuccess
<font color="red">[   FAILED ]</font> AnotherTest.DoesXyz

 2 FAILED TESTS
</pre>

You can set the `GTEST_COLOR` environment variable or the `--gtest_color`
command line flag to `yes`, `no`, or `auto` (the default) to enable colors,
disable colors, or let googletest decide. When the value is `auto`, googletest
will use colors if and only if the output goes to a terminal and (on non-Windows
platforms) the `TERM` environment variable is set to `xterm` or `xterm-color`.

#### Suppressing test passes

By default, googletest prints 1 line of output for each test, indicating whether
it passed or failed. To show only test failures, run the test program with
`--gtest_brief=1`, or set the `GTEST_BRIEF` environment variable to `1`.

#### Suppressing the Elapsed Time

By default, googletest prints the time it takes to run each test. To disable
that, run the test program with the `--gtest_print_time=0` command line flag, or
set the `GTEST_PRINT_TIME` environment variable to `0`.

#### Suppressing UTF-8 Text Output

In case of assertion failures, googletest prints expected and actual values of
type `string` both as hex-encoded strings and as readable UTF-8 text if they
contain valid non-ASCII UTF-8 characters. If you want to suppress the UTF-8
text because, for example, you don't have a UTF-8 compatible output medium, run
the test program with `--gtest_print_utf8=0` or set the `GTEST_PRINT_UTF8`
environment variable to `0`.

#### Generating an XML Report

googletest can emit a detailed XML report to a file in addition to its normal
textual output. The report contains the duration of each test, and thus can help
you identify slow tests.

To generate the XML report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"xml:path_to_output_file"`, which will
create the file at the given location. You can also just use the string `"xml"`,
in which case the output can be found in the `test_detail.xml` file in the
current directory.

If you specify a directory (for example, `"xml:output/directory/"` on Linux or
`"xml:output\directory\"` on Windows), googletest will create the XML file in
that directory, named after the test executable (e.g. `foo_test.xml` for test
program `foo_test` or `foo_test.exe`). If the file already exists (perhaps left
over from a previous run), googletest will pick a different name (e.g.
`foo_test_1.xml`) to avoid overwriting it.

The report is based on the `junitreport` Ant task. Since that format was
originally intended for Java, a little interpretation is required to make it
apply to googletest tests, as shown here:

```xml
<testsuites name="AllTests" ...>
  <testsuite name="test_case_name" ...>
    <testcase    name="test_name" ...>
      <failure message="..."/>
      <failure message="..."/>
      <failure message="..."/>
    </testcase>
  </testsuite>
</testsuites>
```

*   The root `<testsuites>` element corresponds to the entire test program.
*   `<testsuite>` elements correspond to googletest test suites.
*   `<testcase>` elements correspond to googletest test functions.

For instance, the following program

```c++
TEST(MathTest, Addition) { ... }
TEST(MathTest, Subtraction) { ... }
TEST(LogicTest, NonContradiction) { ... }
```

could generate this report:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="3" failures="1" errors="0" time="0.035" timestamp="2011-10-31T18:52:42" name="AllTests">
  <testsuite name="MathTest" tests="2" failures="1" errors="0" time="0.015">
    <testcase name="Addition" file="test.cpp" line="1" status="run" time="0.007" classname="">
      <failure message="Value of: add(1, 1)&#x0A;  Actual: 3&#x0A;Expected: 2" type="">...</failure>
      <failure message="Value of: add(1, -1)&#x0A;  Actual: 1&#x0A;Expected: 0" type="">...</failure>
    </testcase>
    <testcase name="Subtraction" file="test.cpp" line="2" status="run" time="0.005" classname="">
    </testcase>
  </testsuite>
  <testsuite name="LogicTest" tests="1" failures="0" errors="0" time="0.005">
    <testcase name="NonContradiction" file="test.cpp" line="3" status="run" time="0.005" classname="">
    </testcase>
  </testsuite>
</testsuites>
```

Things to note:

*   The `tests` attribute of a `<testsuites>` or `<testsuite>` element tells how
    many test functions the googletest program or test suite contains, while the
    `failures` attribute tells how many of them failed.

*   The `time` attribute expresses the duration of the test, test suite, or
    entire test program in seconds.

*   The `timestamp` attribute records the local date and time of the test
    execution.

*   The `file` and `line` attributes record the source file location where the
    test was defined.

*   Each `<failure>` element corresponds to a single failed googletest
    assertion.

#### Generating a JSON Report

googletest can also emit a JSON report as an alternative format to XML. To
generate the JSON report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"json:path_to_output_file"`, which will
create the file at the given location. You can also just use the string
`"json"`, in which case the output can be found in the `test_detail.json` file
in the current directory.

The report format conforms to the following JSON Schema:

```json
{
  "$schema": "http://json-schema.org/schema#",
  "type": "object",
  "definitions": {
    "TestCase": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "tests": { "type": "integer" },
        "failures": { "type": "integer" },
        "disabled": { "type": "integer" },
        "time": { "type": "string" },
        "testsuite": {
          "type": "array",
          "items": {
            "$ref": "#/definitions/TestInfo"
          }
        }
      }
    },
    "TestInfo": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "file": { "type": "string" },
        "line": { "type": "integer" },
        "status": {
          "type": "string",
          "enum": ["RUN", "NOTRUN"]
        },
        "time": { "type": "string" },
        "classname": { "type": "string" },
        "failures": {
          "type": "array",
          "items": {
            "$ref": "#/definitions/Failure"
          }
        }
      }
    },
    "Failure": {
      "type": "object",
      "properties": {
        "failures": { "type": "string" },
        "type": { "type": "string" }
      }
    }
  },
  "properties": {
    "tests": { "type": "integer" },
    "failures": { "type": "integer" },
    "disabled": { "type": "integer" },
    "errors": { "type": "integer" },
    "timestamp": {
      "type": "string",
      "format": "date-time"
    },
    "time": { "type": "string" },
    "name": { "type": "string" },
    "testsuites": {
      "type": "array",
      "items": {
        "$ref": "#/definitions/TestCase"
      }
    }
  }
}
```

The report format also conforms to the following Proto3 definition, using the
[JSON encoding](https://developers.google.com/protocol-buffers/docs/proto3#json):

```proto
syntax = "proto3";

package googletest;

import "google/protobuf/timestamp.proto";
import "google/protobuf/duration.proto";

message UnitTest {
  int32 tests = 1;
  int32 failures = 2;
  int32 disabled = 3;
  int32 errors = 4;
  google.protobuf.Timestamp timestamp = 5;
  google.protobuf.Duration time = 6;
  string name = 7;
  repeated TestCase testsuites = 8;
}

message TestCase {
  string name = 1;
  int32 tests = 2;
  int32 failures = 3;
  int32 disabled = 4;
  int32 errors = 5;
  google.protobuf.Duration time = 6;
  repeated TestInfo testsuite = 7;
}

message TestInfo {
  string name = 1;
  string file = 6;
  int32 line = 7;
  enum Status {
    RUN = 0;
    NOTRUN = 1;
  }
  Status status = 2;
  google.protobuf.Duration time = 3;
  string classname = 4;
  message Failure {
    string failures = 1;
    string type = 2;
  }
  repeated Failure failures = 5;
}
```

For instance, the following program

```c++
TEST(MathTest, Addition) { ... }
TEST(MathTest, Subtraction) { ... }
TEST(LogicTest, NonContradiction) { ... }
```

could generate this report:

```json
{
  "tests": 3,
  "failures": 1,
  "errors": 0,
  "time": "0.035s",
  "timestamp": "2011-10-31T18:52:42Z",
  "name": "AllTests",
  "testsuites": [
    {
      "name": "MathTest",
      "tests": 2,
      "failures": 1,
      "errors": 0,
      "time": "0.015s",
      "testsuite": [
        {
          "name": "Addition",
          "file": "test.cpp",
          "line": 1,
          "status": "RUN",
          "time": "0.007s",
          "classname": "",
          "failures": [
            {
              "message": "Value of: add(1, 1)\n  Actual: 3\nExpected: 2",
              "type": ""
            },
            {
              "message": "Value of: add(1, -1)\n  Actual: 1\nExpected: 0",
              "type": ""
            }
          ]
        },
        {
          "name": "Subtraction",
          "file": "test.cpp",
          "line": 2,
          "status": "RUN",
          "time": "0.005s",
          "classname": ""
        }
      ]
    },
    {
      "name": "LogicTest",
      "tests": 1,
      "failures": 0,
      "errors": 0,
      "time": "0.005s",
      "testsuite": [
        {
          "name": "NonContradiction",
          "file": "test.cpp",
          "line": 3,
          "status": "RUN",
          "time": "0.005s",
          "classname": ""
        }
      ]
    }
  ]
}
```

{: .callout .important}
IMPORTANT: The exact format of the JSON document is subject to change.

### Controlling How Failures Are Reported

#### Detecting Test Premature Exit

Google Test implements the _premature-exit-file_ protocol for test runners to
catch any kind of unexpected exit of a test program. Upon start, Google Test
creates a file that is automatically deleted after all work has finished. The
test runner can then check whether this file exists: if it remains undeleted,
the inspected test program has exited prematurely.

This feature is enabled only if the `TEST_PREMATURE_EXIT_FILE` environment
variable has been set.
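
Runner-side, the check reduces to testing whether the file still exists after
the test program terminates. A minimal sketch (the `ExitedPrematurely` name is
hypothetical, not part of googletest):

```c++
#include <fstream>
#include <string>

// Returns true if the premature-exit marker file still exists, i.e. the
// test program did not get a chance to delete it on clean shutdown.
bool ExitedPrematurely(const std::string& premature_exit_file) {
  return std::ifstream(premature_exit_file).good();
}
```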

#### Turning Assertion Failures into Break-Points

When running test programs under a debugger, it's very convenient if the
debugger can catch an assertion failure and automatically drop into interactive
mode. googletest's *break-on-failure* mode supports this behavior.

To enable it, set the `GTEST_BREAK_ON_FAILURE` environment variable to a value
other than `0`. Alternatively, you can use the `--gtest_break_on_failure`
command line flag.

#### Disabling Catching Test-Thrown Exceptions

googletest can be used either with or without exceptions enabled. If a test
throws a C++ exception or (on Windows) a structured exception (SEH), by default
googletest catches it, reports it as a test failure, and continues with the next
test method. This maximizes the coverage of a test run. Also, on Windows an
uncaught exception will cause a pop-up window, so catching the exceptions allows
you to run the tests automatically.

When debugging the test failures, however, you may instead want the exceptions
to be handled by the debugger, such that you can examine the call stack when an
exception is thrown. To achieve that, set the `GTEST_CATCH_EXCEPTIONS`
environment variable to `0`, or use the `--gtest_catch_exceptions=0` flag when
running the tests.

### Sanitizer Integration

The
[Undefined Behavior Sanitizer](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html),
[Address Sanitizer](https://github.com/google/sanitizers/wiki/AddressSanitizer),
and
[Thread Sanitizer](https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual)
all provide weak functions that you can override to trigger explicit failures
when they detect sanitizer errors, such as creating a reference from `nullptr`.
To override these functions, place definitions for them in a source file that
you compile as part of your main binary:

```c++
extern "C" {
void __ubsan_on_report() {
  FAIL() << "Encountered an undefined behavior sanitizer error";
}
void __asan_on_error() {
  FAIL() << "Encountered an address sanitizer error";
}
void __tsan_on_report() {
  FAIL() << "Encountered a thread sanitizer error";
}
}  // extern "C"
```

After compiling your project with one of the sanitizers enabled, if a particular
test triggers a sanitizer error, googletest will report that it failed.
