# Analysis Tests

Analysis tests are the typical way to test rule behavior. They let you observe
behavior of a rule that isn't visible to a regular test, and they can modify
Bazel's configuration state to test rule behavior under, for example, different
platforms.

If you've ever wanted to verify...

 * A certain combination of flags
 * Building for another OS
 * That certain providers are returned
 * That aspects behaved a certain way

...or other observable information, then an analysis test can do that.

## Quick start

For a quick copy/paste start, create a `.bzl` file with your test code and a
`BUILD.bazel` file to load your tests and declare them. Here's a skeleton:

```
# BUILD
load(":my_tests.bzl", "my_test_suite")

my_test_suite(name = "my_test_suite")
```

```
# my_tests.bzl

load("@rules_testing//lib:analysis_test.bzl", "analysis_test", "test_suite")
load("@rules_testing//lib:util.bzl", "util")

def _test_hello(name):
    util.helper_target(
        native.filegroup,
        name = name + "_subject",
        srcs = ["hello_world.txt"],
    )
    analysis_test(
        name = name,
        impl = _test_hello_impl,
        target = name + "_subject",
    )

def _test_hello_impl(env, target):
    env.expect.that_target(target).default_outputs().contains(
        "hello_world.txt"
    )

def my_test_suite(name):
    test_suite(
        name = name,
        tests = [
            _test_hello,
        ],
    )
```

## Arranging the test

The arrange part of a test defines a target using the rule under test and sets
up its dependencies. This is done by writing a macro, run during the loading
phase, that instantiates the target under test and its dependencies. All
targets taking part in the arrangement should be tagged with `manual` so that
they are ignored by common build patterns (e.g. `//...` or `foo:all`).

Example:

```python
load("@rules_proto//proto:defs.bzl", "proto_library")


def _test_basic(name):
    """Verifies basic behavior of a proto_library rule."""
    # (1) Arrange
    proto_library(name = name + "_foo", srcs = ["foo.proto"], deps = [name + "_bar"], tags = ["manual"])
    proto_library(name = name + "_bar", srcs = ["bar.proto"], tags = ["manual"])

    # (2) Act
    ...
```

TIP: Source files aren't required to exist. This is because the analysis phase
only records the path to source files; they aren't read until after the
analysis phase.

The macro function should be named after the behavior being tested
(e.g. `_test_frob_compiler_passed_qux_flag`). The setup targets should follow
the [macro naming conventions](https://bazel.build/rules/macros#conventions),
that is, all targets should include the `name` argument as a prefix -- this
helps tests avoid creating conflicting names.
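If you'd rather not repeat `tags = ["manual"]` on every setup target, the
`util.helper_target` wrapper from the quick start can apply that housekeeping
for you. A minimal sketch of the same arrangement, assuming `helper_target`
simply forwards its keyword arguments to the wrapped rule:

```python
load("@rules_proto//proto:defs.bzl", "proto_library")
load("@rules_testing//lib:util.bzl", "util")

def _test_basic(name):
    # (1) Arrange: helper_target instantiates proto_library with the given
    # arguments and takes care of tagging the setup targets (e.g. manual).
    util.helper_target(
        proto_library,
        name = name + "_foo",
        srcs = ["foo.proto"],
        deps = [name + "_bar"],
    )
    util.helper_target(
        proto_library,
        name = name + "_bar",
        srcs = ["bar.proto"],
    )

    # (2) Act
    ...
```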
<!-- TODO(ilist): Mocking implicit dependencies -->

### Limitations

Bazel limits the number of transitive dependencies that can be used in the
setup. The limit is controlled by the
[`--analysis_testing_deps_limit`](https://bazel.build/reference/command-line-reference#flag--analysis_testing_deps_limit)
flag.

Mocking toolchains (adding a toolchain used only in the test) is not possible
at the moment.

## Running the analysis phase

The act part runs the analysis phase for a specific target and calls a
user-supplied function. All of the work is done by Bazel and the framework. Use
the `analysis_test` macro to pass in the target to analyze and a function that
will be called with the analysis results:

```python
load("@rules_testing//lib:analysis_test.bzl", "analysis_test")


def _test_basic(name):
    ...

    # (2) Act
    analysis_test(name = name, target = name + "_foo", impl = _test_basic_impl)
```

<!-- TODO(ilist): Setting configuration flags -->
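On the configuration side, `analysis_test` can analyze the target under a
modified configuration. Assuming your version of rules_testing accepts a
`config_settings` argument (check the `analysis_test` API docs; the argument
name and the option key below are assumptions), a sketch of testing under a
different compilation mode looks like this:

```python
load("@rules_testing//lib:analysis_test.bzl", "analysis_test")

def _test_opt_build(name):
    ...  # (1) Arrange setup targets as usual.

    # (2) Act
    analysis_test(
        name = name,
        target = name + "_subject",
        impl = _test_opt_build_impl,
        # Assumed API: a dict of command-line option label -> value that is
        # applied to the configuration of the target under test.
        config_settings = {
            "//command_line_option:compilation_mode": "opt",
        },
    )

def _test_opt_build_impl(env, target):
    ...  # (3) Assert on the opt-mode analysis results.
```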
## Assertions

The assert function (`_test_basic_impl` in the example) gets `env` and `target`
as parameters, where...

 * `env` is information about the overall build and test
 * `target` is the target under test (as specified in the `target` attribute
   during the arrange step).

The `env.expect` attribute provides a `truth.Expect` object, which allows
writing fluent asserts:

```python
def _test_basic_impl(env, target):
    env.expect.that_target(target).runfiles().contains_at_least(["foo.txt"])
    env.expect.that_target(target).action_generating("foo.txt").contains_flag_values("--a")
```

Note that you aren't _required_ to use `env.expect`. If you want to perform
asserts another way, call `env.fail()` to register any failures.

<!-- TODO(ilist): ### Assertions on providers -->
<!-- TODO(ilist): ### Assertions on actions -->
<!-- TODO(ilist): ## testing aspects -->


## Collecting the tests together

Use the `test_suite` function to collect all tests together:

```python
load("@rules_testing//lib:analysis_test.bzl", "test_suite")


def proto_library_test_suite(name):
    test_suite(
        name = name,
        tests = [
            _test_basic,
            _test_advanced,
        ],
    )
```

In your `BUILD` file, instantiate the suite:

```
load("//path/to/your/package:proto_library_tests.bzl", "proto_library_test_suite")

proto_library_test_suite(name = "proto_library_test_suite")
```

The function instantiates all the test macros and wraps them into a single
target. This removes the need to load and call each test separately in the
`BUILD` file.

### Advanced test collection, reuse, and parameterizing

If you have many tests and rules and need to reuse tests between rules, a
couple of tricks make it easy:

* Tests aren't required to all be in the same file. So long as you can load the
  arrange function and pass it to `test_suite`, you can split tests into
  multiple files for reuse.
* Similarly, arrange functions themselves aren't required to take only a `name`
  argument -- only the functions passed to `test_suite.tests` require this.

By using lists and lambdas, we can define collections of tests and have
multiple rules reuse them:

```
# base_tests.bzl

_base_tests = []

def _test_common(name, rule_under_test):
    rule_under_test(...)
    analysis_test(...)

def _test_common_impl(env, target):
    env.expect.that_target(target).contains(...)

_base_tests.append(_test_common)

def create_base_tests(rule_under_test):
    return [
        lambda name: test(name = name, rule_under_test = rule_under_test)
        for test in _base_tests
    ]

# my_binary_tests.bzl
load("//my:my_binary.bzl", "my_binary")
load(":base_tests.bzl", "create_base_tests")
load("@rules_testing//lib:analysis_test.bzl", "test_suite")

def my_binary_suite(name):
    test_suite(
        name = name,
        tests = create_base_tests(my_binary),
    )

# my_test_tests.bzl
load("//my:my_test.bzl", "my_test")
load(":base_tests.bzl", "create_base_tests")
load("@rules_testing//lib:analysis_test.bzl", "test_suite")

def my_test_suite(name):
    test_suite(
        name = name,
        tests = create_base_tests(my_test),
    )
```

## Tips and best practices

* Use private names for your tests, e.g. `def _test_foo`. This allows
  buildifier to detect when you've forgotten to put a test in the `tests`
  attribute. The framework strips leading underscores from the test name.
* Tag the arranged inputs of your tests with `tags = ["manual"]`; the
  `util.helper_target` function helps with this. This prevents common build
  patterns (e.g. `bazel test //...` or `bazel test :all`) from trying to build
  them.
* Put each rule's tests into their own directory with their own `BUILD` file.
  This gives better isolation between the rules' test suites in several ways:
  * When reusing tests, target names are less likely to collide.
  * During the edit-run cycle, changes made to verify one rule that would
    break another rule's tests can be ignored until you're ready to test the
    other rule.
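Putting these tips together, a per-rule test package typically ends up looking
something like the following sketch (the rule, file, and target names here are
hypothetical):

```python
# tests/my_rule/my_rule_tests.bzl
load("@rules_testing//lib:analysis_test.bzl", "analysis_test", "test_suite")
load("@rules_testing//lib:util.bzl", "util")
load("//rules:my_rule.bzl", "my_rule")  # hypothetical rule under test

# Private name so buildifier can flag tests missing from `tests` below.
def _test_default_outputs(name):
    # Arrange: helper_target tags the setup target as manual for us.
    util.helper_target(
        my_rule,
        name = name + "_subject",
        srcs = ["some_file.txt"],
    )
    # Act
    analysis_test(
        name = name,
        impl = _test_default_outputs_impl,
        target = name + "_subject",
    )

# Assert
def _test_default_outputs_impl(env, target):
    env.expect.that_target(target).default_outputs().contains("some_file.txt")

def my_rule_test_suite(name):
    test_suite(
        name = name,
        tests = [_test_default_outputs],
    )
```

The matching `BUILD.bazel` in `tests/my_rule/` then just loads
`my_rule_test_suite` and instantiates it, as in the quick start.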