//! Write your own tests and benchmarks that look and behave like built-in tests!
//!
//! This is a simple and small test harness that mimics the original `libtest`
//! (used by `cargo test`/`rustc --test`). That means: all output looks pretty
//! much like `cargo test` and most CLI arguments are understood and used. With
//! that plumbing work out of the way, your test runner can focus on the actual
//! testing.
//!
//! For a small real-world example, see [`examples/tidy.rs`][1].
//!
//! [1]: https://github.com/LukasKalbertodt/libtest-mimic/blob/master/examples/tidy.rs
//!
//! # Usage
//!
//! To use this, you most likely want to add a manual `[[test]]` section to
//! `Cargo.toml` and set `harness = false`. For example:
//!
//! ```toml
//! [[test]]
//! name = "mytest"
//! path = "tests/mytest.rs"
//! harness = false
//! ```
//!
//! And in `tests/mytest.rs` you would call [`run`] in the `main` function:
//!
//! ```no_run
//! use libtest_mimic::{Arguments, Trial};
//!
//! // Parse command line arguments
//! let args = Arguments::from_args();
//!
//! // Create a list of tests and/or benchmarks (in this case: two dummy tests).
//! let tests = vec![
//!     Trial::test("succeeding_test", move || Ok(())),
//!     Trial::test("failing_test", move || Err("Woops".into())),
//! ];
//!
//! // Run all tests and exit the application appropriately.
//! libtest_mimic::run(&args, tests).exit();
//! ```
//!
//! Instead of returning `Ok` or `Err` directly, you will of course want to
//! actually perform your tests. See [`Trial::test`] for more information on
//! how to define a test. You can list all your tests manually, but in many
//! cases it is useful to generate one test per file in a directory, for
//! example.
//!
//! You can then run `cargo test --test mytest` to run it. To see the CLI
//! arguments supported by this crate, run `cargo test --test mytest -- -h`.
//!
//!
//! # Known limitations and differences to the official test harness
//!
//! `libtest-mimic` works on a best-effort basis: it tries to be as close to
//! `libtest` as possible, but there are differences for a variety of reasons.
//! For example, some rarely used features might not be implemented, some
//! features are extremely difficult to implement, and removing minor,
//! unimportant differences is just not worth the hassle.
//!
//! Some of the notable differences:
//!
//! - Output capture and `--nocapture`: simply not supported. The official
//!   `libtest` uses internal `std` functions to temporarily redirect output.
//!   `libtest-mimic` cannot use those. See [this issue][capture] for more
//!   information.
//! - `--format=json|junit`: not supported.
//!
//! [capture]: https://github.com/LukasKalbertodt/libtest-mimic/issues/9

#![forbid(unsafe_code)]

use std::{fmt, process, sync::mpsc, time::Instant};

mod args;
mod printer;

use printer::Printer;
use threadpool::ThreadPool;

pub use crate::args::{Arguments, ColorSetting, FormatSetting};


/// A single test or benchmark.
///
/// The original `libtest` often calls benchmarks "tests", which is a bit
/// confusing. So in this library, both are referred to as "trials".
///
/// A trial is created via [`Trial::test`] or [`Trial::bench`]. The trial's
/// `name` is printed and used for filtering. The `runner` is called when the
/// test/benchmark is executed to determine its outcome. If `runner` panics,
/// the trial is considered "failed". If you need the behavior of
/// `#[should_panic]`, you have to catch the panic yourself. You likely want to
/// compare the panic payload to an expected value anyway.
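///
/// # Example
///
/// A sketch of approximating `#[should_panic]` behavior with
/// [`std::panic::catch_unwind`] (the test body and the expected payload
/// `"boom"` are hypothetical):
///
/// ```no_run
/// use libtest_mimic::{Trial, Failed};
///
/// let trial = Trial::test("panics_with_message", || {
///     // Run the code that should panic and catch the panic ourselves.
///     let result = std::panic::catch_unwind(|| panic!("boom"));
///     match result {
///         // Panicked with the expected payload: the test passes.
///         Err(payload) if payload.downcast_ref::<&str>() == Some(&"boom") => Ok(()),
///         Err(_) => Err(Failed::from("panicked with unexpected payload")),
///         Ok(()) => Err(Failed::from("expected a panic, but none occurred")),
///     }
/// });
/// ```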
pub struct Trial {
    runner: Box<dyn FnOnce(bool) -> Outcome + Send>,
    info: TestInfo,
}

impl Trial {
    /// Creates a (non-benchmark) test with the given name and runner.
    ///
    /// The runner returning `Ok(())` is interpreted as the test passing. If the
    /// runner returns `Err(_)`, the test is considered failed.
    pub fn test<R>(name: impl Into<String>, runner: R) -> Self
    where
        R: FnOnce() -> Result<(), Failed> + Send + 'static,
    {
        Self {
            runner: Box::new(move |_test_mode| match runner() {
                Ok(()) => Outcome::Passed,
                Err(failed) => Outcome::Failed(failed),
            }),
            info: TestInfo {
                name: name.into(),
                kind: String::new(),
                is_ignored: false,
                is_bench: false,
            },
        }
    }

    /// Creates a benchmark with the given name and runner.
    ///
    /// If the runner's parameter `test_mode` is `true`, the runner function
    /// should run all code just once, without measuring, just to make sure it
    /// does not panic. If the parameter is `false`, it should perform the
    /// actual benchmark. If `test_mode` is `true` you may return `Ok(None)`,
    /// but if it's `false`, you have to return a `Measurement`, or else the
    /// benchmark is considered a failure.
    ///
    /// `test_mode` is `true` if neither `--bench` nor `--test` are set, and
    /// `false` when `--bench` is set. If `--test` is set, benchmarks are not
    /// run at all; both flags cannot be set at the same time.
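    ///
    /// # Example
    ///
    /// A rough sketch (the workload and the timing loop are hypothetical;
    /// real benchmarks need a more careful measurement strategy):
    ///
    /// ```no_run
    /// use libtest_mimic::{Trial, Measurement};
    ///
    /// let bench = Trial::bench("push_1000", |test_mode| {
    ///     let work = || {
    ///         let mut v = Vec::new();
    ///         for i in 0..1000 {
    ///             v.push(i);
    ///         }
    ///         v
    ///     };
    ///
    ///     if test_mode {
    ///         // Just check that the code runs without panicking.
    ///         work();
    ///         Ok(None)
    ///     } else {
    ///         // Very crude measurement: average over a fixed number of runs.
    ///         let iters: u64 = 100;
    ///         let start = std::time::Instant::now();
    ///         for _ in 0..iters {
    ///             work();
    ///         }
    ///         let avg = start.elapsed().as_nanos() as u64 / iters;
    ///         Ok(Some(Measurement { avg, variance: 0 }))
    ///     }
    /// });
    /// ```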
    pub fn bench<R>(name: impl Into<String>, runner: R) -> Self
    where
        R: FnOnce(bool) -> Result<Option<Measurement>, Failed> + Send + 'static,
    {
        Self {
            runner: Box::new(move |test_mode| match runner(test_mode) {
                Err(failed) => Outcome::Failed(failed),
                Ok(_) if test_mode => Outcome::Passed,
                Ok(Some(measurement)) => Outcome::Measured(measurement),
                Ok(None) => {
                    Outcome::Failed("bench runner returned `Ok(None)` in bench mode".into())
                }
            }),
            info: TestInfo {
                name: name.into(),
                kind: String::new(),
                is_ignored: false,
                is_bench: true,
            },
        }
    }

    /// Sets the "kind" of this test/benchmark. If this string is not
    /// empty, it is printed in brackets before the test name (e.g.
    /// `test [my-kind] test_name`). (Default: *empty*)
    ///
    /// This is the only extension to the original libtest.
    pub fn with_kind(self, kind: impl Into<String>) -> Self {
        Self {
            info: TestInfo {
                kind: kind.into(),
                ..self.info
            },
            ..self
        }
    }

    /// Sets whether or not this test is considered "ignored". (Default: `false`)
    ///
    /// With the built-in test suite, you can annotate `#[ignore]` on tests to
    /// not execute them by default (for example because they take a long time
    /// or require a special environment). If the `--ignored` flag is set,
    /// ignored tests are executed, too.
    pub fn with_ignored_flag(self, is_ignored: bool) -> Self {
        Self {
            info: TestInfo {
                is_ignored,
                ..self.info
            },
            ..self
        }
    }

    /// Returns the name of this trial.
    pub fn name(&self) -> &str {
        &self.info.name
    }

    /// Returns the kind of this trial. If you have not set a kind, this is an
    /// empty string.
    pub fn kind(&self) -> &str {
        &self.info.kind
    }

    /// Returns whether this trial has been marked as *ignored*.
    pub fn has_ignored_flag(&self) -> bool {
        self.info.is_ignored
    }

    /// Returns `true` iff this trial is a test (as opposed to a benchmark).
    pub fn is_test(&self) -> bool {
        !self.info.is_bench
    }

    /// Returns `true` iff this trial is a benchmark (as opposed to a test).
    pub fn is_bench(&self) -> bool {
        self.info.is_bench
    }
}

impl fmt::Debug for Trial {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        struct OpaqueRunner;
        impl fmt::Debug for OpaqueRunner {
            fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
                f.write_str("<runner>")
            }
        }

        f.debug_struct("Test")
            .field("runner", &OpaqueRunner)
            .field("name", &self.info.name)
            .field("kind", &self.info.kind)
            .field("is_ignored", &self.info.is_ignored)
            .field("is_bench", &self.info.is_bench)
            .finish()
    }
}

#[derive(Debug)]
struct TestInfo {
    name: String,
    kind: String,
    is_ignored: bool,
    is_bench: bool,
}

/// Output of a benchmark.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct Measurement {
    /// Average time in ns.
    pub avg: u64,

    /// Variance in ns.
    pub variance: u64,
}
/// Indicates that a test/benchmark has failed. Optionally carries a message.
///
/// You usually want to use the `From` impl of this type, which allows you to
/// convert any `T: fmt::Display` (e.g. `String`, `&str`, ...) into `Failed`.
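///
/// For example:
///
/// ```no_run
/// use libtest_mimic::Failed;
///
/// let failed: Failed = "something went wrong".into();
/// assert_eq!(failed.message(), Some("something went wrong"));
/// ```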
#[derive(Debug, Clone)]
pub struct Failed {
    msg: Option<String>,
}

impl Failed {
    /// Creates an instance without message.
    pub fn without_message() -> Self {
        Self { msg: None }
    }

    /// Returns the message of this instance.
    pub fn message(&self) -> Option<&str> {
        self.msg.as_deref()
    }
}

impl<M: std::fmt::Display> From<M> for Failed {
    fn from(msg: M) -> Self {
        Self {
            msg: Some(msg.to_string()),
        }
    }
}


/// The outcome of performing a test/benchmark.
#[derive(Debug, Clone)]
enum Outcome {
    /// The test passed.
    Passed,

    /// The test or benchmark failed.
    Failed(Failed),

    /// The test or benchmark was ignored.
    Ignored,

    /// The benchmark was successfully run.
    Measured(Measurement),
}
/// Contains information about the entire test run. Returned by [`run`].
///
/// This type is marked as `#[must_use]`. Usually, you just call
/// [`exit()`][Conclusion::exit] on the result of `run` to exit the application
/// with the correct exit code. But you can also store this value and inspect
/// its data.
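///
/// For example, instead of calling [`exit()`][Conclusion::exit] directly
/// (a sketch; the test list is hypothetical):
///
/// ```no_run
/// use libtest_mimic::{Arguments, Trial};
///
/// let args = Arguments::from_args();
/// let tests = vec![Trial::test("example", || Ok(()))];
///
/// let conclusion = libtest_mimic::run(&args, tests);
/// println!("{} passed, {} failed", conclusion.num_passed, conclusion.num_failed);
/// conclusion.exit_if_failed();
/// ```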
#[derive(Clone, Debug, PartialEq, Eq)]
#[must_use = "Call `exit()` or `exit_if_failed()` to set the correct return code"]
pub struct Conclusion {
    /// Number of tests and benchmarks that were filtered out (either by the
    /// filter-in pattern or by `--skip` arguments).
    pub num_filtered_out: u64,

    /// Number of passed tests.
    pub num_passed: u64,

    /// Number of failed tests and benchmarks.
    pub num_failed: u64,

    /// Number of ignored tests and benchmarks.
    pub num_ignored: u64,

    /// Number of benchmarks that successfully ran.
    pub num_measured: u64,
}

impl Conclusion {
    /// Exits the application with an appropriate error code (0 if all tests
    /// have passed, 101 if there have been failures).
    pub fn exit(&self) -> ! {
        self.exit_if_failed();
        process::exit(0);
    }

    /// Exits the application with error code 101 if there were any failures.
    /// Otherwise, returns normally.
    pub fn exit_if_failed(&self) {
        if self.has_failed() {
            process::exit(101)
        }
    }

    /// Returns whether there have been any failures.
    pub fn has_failed(&self) -> bool {
        self.num_failed > 0
    }

    fn empty() -> Self {
        Self {
            num_filtered_out: 0,
            num_passed: 0,
            num_failed: 0,
            num_ignored: 0,
            num_measured: 0,
        }
    }
}

impl Arguments {
    /// Returns `true` if the given test should be ignored.
    fn is_ignored(&self, test: &Trial) -> bool {
        (test.info.is_ignored && !self.ignored && !self.include_ignored)
            || (test.info.is_bench && self.test)
            || (!test.info.is_bench && self.bench)
    }

    fn is_filtered_out(&self, test: &Trial) -> bool {
        let test_name = &test.info.name;

        // If a filter was specified, apply it.
        if let Some(filter) = &self.filter {
            match self.exact {
                true if test_name != filter => return true,
                false if !test_name.contains(filter) => return true,
                _ => {}
            };
        }

        // If any skip patterns were specified, test against all of them.
        for skip_filter in &self.skip {
            match self.exact {
                true if test_name == skip_filter => return true,
                false if test_name.contains(skip_filter) => return true,
                _ => {}
            }
        }

        if self.ignored && !test.info.is_ignored {
            return true;
        }

        false
    }
}

/// Runs all given trials (tests & benchmarks).
///
/// This is the central function of this crate. It provides the framework for
/// the testing harness. It does all the printing and housekeeping.
///
/// The returned value contains some useful information; see [`Conclusion`]
/// for details. If `--list` was specified, a list is printed and a dummy
/// `Conclusion` is returned.
pub fn run(args: &Arguments, mut tests: Vec<Trial>) -> Conclusion {
    let start_instant = Instant::now();
    let mut conclusion = Conclusion::empty();

    // Apply filtering.
    if args.filter.is_some() || !args.skip.is_empty() || args.ignored {
        let len_before = tests.len() as u64;
        tests.retain(|test| !args.is_filtered_out(test));
        conclusion.num_filtered_out = len_before - tests.len() as u64;
    }
    let tests = tests;

    // Create printer which is used for all output.
    let mut printer = printer::Printer::new(args, &tests);

    // If `--list` is specified, just print the list and return.
    if args.list {
        printer.print_list(&tests, args.ignored);
        return Conclusion::empty();
    }

    // Print number of tests.
    printer.print_title(tests.len() as u64);

    let mut failed_tests = Vec::new();
    let mut handle_outcome = |outcome: Outcome, test: TestInfo, printer: &mut Printer| {
        printer.print_single_outcome(&outcome);

        // Handle outcome.
        match outcome {
            Outcome::Passed => conclusion.num_passed += 1,
            Outcome::Failed(failed) => {
                failed_tests.push((test, failed.msg));
                conclusion.num_failed += 1;
            },
            Outcome::Ignored => conclusion.num_ignored += 1,
            Outcome::Measured(_) => conclusion.num_measured += 1,
        }
    };

    // Execute all tests.
    let test_mode = !args.bench;
    if args.test_threads == Some(1) {
        // Run tests sequentially in the main thread.
        for test in tests {
            // Print `test foo    ...`, run the test, then print the outcome in
            // the same line.
            printer.print_test(&test.info);
            let outcome = if args.is_ignored(&test) {
                Outcome::Ignored
            } else {
                run_single(test.runner, test_mode)
            };
            handle_outcome(outcome, test.info, &mut printer);
        }
    } else {
        // Run tests in a thread pool.
        let pool = match args.test_threads {
            Some(num_threads) => ThreadPool::new(num_threads),
            None => ThreadPool::default(),
        };
        let (sender, receiver) = mpsc::channel();

        let num_tests = tests.len();
        for test in tests {
            if args.is_ignored(&test) {
                sender.send((Outcome::Ignored, test.info)).unwrap();
            } else {
                let sender = sender.clone();
                pool.execute(move || {
                    // It's fine to ignore the result of sending. If the
                    // receiver has hung up, everything will wind down soon
                    // anyway.
                    let outcome = run_single(test.runner, test_mode);
                    let _ = sender.send((outcome, test.info));
                });
            }
        }

        for (outcome, test_info) in receiver.iter().take(num_tests) {
            // In multithreaded mode, we only print the start of the line
            // after the test ran, as otherwise it would lead to terribly
            // interleaved output.
            printer.print_test(&test_info);
            handle_outcome(outcome, test_info, &mut printer);
        }
    }

    // Print failures if there were any, and the final summary.
    if !failed_tests.is_empty() {
        printer.print_failures(&failed_tests);
    }

    printer.print_summary(&conclusion, start_instant.elapsed());

    conclusion
}

/// Runs the given runner, catching any panics and treating them as a failed test.
fn run_single(runner: Box<dyn FnOnce(bool) -> Outcome + Send>, test_mode: bool) -> Outcome {
    use std::panic::{catch_unwind, AssertUnwindSafe};

    catch_unwind(AssertUnwindSafe(move || runner(test_mode))).unwrap_or_else(|e| {
        // The `panic` information is just an `Any` object representing the
        // value the panic was invoked with. For most panics (which use
        // `panic!` like `println!`), this is either `&str` or `String`.
        let payload = e.downcast_ref::<String>()
            .map(|s| s.as_str())
            .or(e.downcast_ref::<&str>().map(|s| *s));

        let msg = match payload {
            Some(payload) => format!("test panicked: {payload}"),
            None => "test panicked".to_string(),
        };
        Outcome::Failed(msg.into())
    })
}