# Autotest for Chromium OS developers

[TOC]

## Useful documents

[Autotest documentation on GitHub](https://github.com/autotest/autotest/wiki/AutotestApi):
This would be a good read if you want to familiarize yourself with the basic
Autotest concepts.

[Gentoo Portage ebuild/eclass Information](http://www.gentoo.org/proj/en/devrel/handbook/handbook.xml?part=2):
Getting to know the package build system we use.

[ChromiumOS specific Portage FAQ](http://www.chromium.org/chromium-os/how-tos-and-troubleshooting/portage-build-faq):
Learning something about the way we use portage.

## Autotest and ebuild workflow

To familiarize yourself with Autotest concepts, you should start with the
upstream Autotest documentation at:
https://github.com/autotest/autotest/wiki/AutotestApi

The rest of this document uses some of those terms and explains them only
briefly.
### Overview

At a high level, tests are organized into test cases, each being either
server or client, with one main .py file named after the test case, and
one or more control files. In order to be able to perform all tasks on a given
test, autotest expects tests to be placed in a monolithic file structure
of:

-   `/client/tests/`
-   `/client/site_tests/`
-   `/server/tests/`
-   `/server/site_tests/`

Each test directory has to have at least a control file, but typically also has
a main job module (named after the test case). Furthermore, any additional
files it needs checked in are typically placed in a `files/` directory, and
separately buildable projects (with a Makefile) in a `src/` directory.
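
For illustration, a hypothetical client test `platform_FooBar` might be laid
out as follows (only the control file is strictly required):

```
client/site_tests/platform_FooBar/
    control               # at least one control file (required)
    platform_FooBar.py    # main job module, named after the test case
    files/                # optional additional checked-in files
    src/                  # optional sources built via a Makefile
```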

Due to structural limitations in Chromium OS, it is not possible to store all
test cases in this structure in a single large source repository, as upstream
autotest source would (placed at `third_party/autotest/files/` in Chromium OS).
In particular, the following has been required in the past:

-   Having confidential (publicly inaccessible) tests, or generally per-test
    ACLs, for sharing with a particular partner only.
-   Storing test cases along with the project they wrap around, because the
    test requires binaries built as a by-product of the project’s own build
    system (e.g. chrome or tpm tests).

Furthermore, it has been desirable to build everything that is not strongly
ordered in parallel, significantly decreasing build times. That, however,
requires a proper dependency tree declaration and being able to specify which
test cases require which dependencies, in addition to being able to process
different "independent" parts of a single source repository in parallel.

This leads to the ebuild workflow, which generally allows compositing any
number of sources in any format into a single monolithic tree, whose contents
depend on build parameters.

![ebuild workflow](./atest-diagram.png)

This allows using the standard autotest workflow without any change; however,
unlike upstream, the tests aren’t run directly from the source repository, but
rather from a read-only staging install location. This leads to certain
differences in workflow:

-   Source may live in an arbitrary location or can be generated on the fly.
    Anything that can be created as an ebuild (shell script) can be a test
    source. (cros-workon may be utilised, introducing a fairly standard
    Chromium OS project workflow.)
-   The staging location (`/build/${board}/usr/local/autotest/`) may not be
    modified; if you want to modify it, you have to find the source to it
    (using other tools; see FAQ).
-   Propagating source changes requires an emerge step, as shown below.
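
For example, propagating a local change to a test living in the main autotest
ebuild might look like this (the package name here is just the common case; see
the FAQ for finding the right one):

```
$ cros_workon --board=${board} start chromeos-base/autotest-tests
$ emerge-${board} chromeos-base/autotest-tests
```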

### Ebuild setup, autotest eclass

**NOTE**: This assumes some basic knowledge of how ebuilds in Chromium OS work.
Further documentation is available at
http://www.chromium.org/chromium-os/how-tos-and-troubleshooting/portage-build-faq

An **autotest ebuild** is an ebuild that produces test cases and installs them
into the staging area. It has three general tasks:

-   Obtain the source - This is generally (but not necessarily) provided by the
    ‘cros-workon’ eclass. It could also work with the more standard tarball
    SRC_URI pathway, or generally any shell code executed in `src_unpack()`.
-   Prepare test cases - This includes, but is not limited to, preprocessing
    any source, and copying source files or intermediate binaries into the
    expected locations, where they will be taken over by autotest code,
    specifically the `setup()` function of the appropriate test. Typically,
    this is not needed.
-   Call autotest to "build" all sources and subsequently install them - This
    should be done exclusively by inheriting the **autotest eclass**, which
    bundles up all the necessary code to install into the intermediate
    location.

The **autotest eclass** is inherited by all autotest ebuilds; it only requires
a number of variables to be specified and otherwise works by itself. Most
variables describe the locations and listings of work that needs to be done:

-   Location variables define the paths to directories containing the test
    files:

    -   `AUTOTEST_{CLIENT,SERVER}_{TESTS,SITE_TESTS}`
    -   `AUTOTEST_{DEPS,PROFILERS,CONFIG}`

    These typically only need to be specified if they differ from the defaults
    (which follow the upstream directory structure).

-   List variables (`AUTOTEST_*_LIST`) define the list of deps, profilers, and
    configs that should be handled by this ebuild.
-   The IUSE test list specification, TESTS=, is a USE_EXPANDed specification
    of tests managed by the given ebuild. By virtue of being an IUSE variable,
    all of the options are visible as USE flag toggles while building the
    ebuild, unlike the list variables, which are a given, so the ebuild has to
    be modified for those to change.
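
Put together, a minimal sketch of an autotest ebuild might look like the
following (project, package, and test names are hypothetical, and real ebuilds
carry more metadata):

```
# Sketch of an autotest ebuild; assumes the cros-workon and autotest eclasses.
CROS_WORKON_PROJECT="chromiumos/third_party/autotest"
CROS_WORKON_LOCALNAME="third_party/autotest/files"

inherit cros-workon autotest

DESCRIPTION="Hypothetical bundle of client site_tests"

# Tests managed by this ebuild, exposed as USE_EXPANDed TESTS= flags;
# a leading + enables the test by default.
IUSE_TESTS="
    +tests_platform_FooBar
    +tests_platform_BazQux
"
IUSE="${IUSE_TESTS}"
```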

Each ebuild usually operates on a single source repository. That does not
always have to hold true, however; in the case of autotest, many ebuilds check
out the sources of the same source repository (*autotest.git*). Invariably,
this means that they have to be careful not to install the same files,
splitting the sources between themselves to avoid file install collisions.
If more than one autotest ebuild operates on the same source repository, they
**have to** use the above variables to define mutually exclusive slices in
order not to collide during installation. Generally, if we have a source
repository with client site_tests A and B, you can have either:

-   one ebuild with IUSE_TESTS="+tests_A +tests_B"
-   two different ebuilds, one with IUSE_TESTS="+tests_A", the other with
    IUSE_TESTS="+tests_B"

As soon as an overlap between ebuilds happens, either an outside mechanism has
to ensure the overlapping tests are never enabled at the same time, or file
collisions happen.


## Building tests

Fundamentally, a test has two main phases:

-   `run_*()` - This is the main part that performs all testing and is
    invoked by the control files, once or repeatedly.
-   `setup()` - This function, present in the test case’s main .py file, is
    supposed to prepare the test for running. This includes building any
    binaries, initializing data, etc.

When building with emerge, autotest calls the `setup()` function of all test
cases/deps involved. This is supposed to prepare everything. Typically, this
will invoke make on a Makefile present in the test’s src/ directory, but it can
involve any other transformation of sources (or be empty if there’s nothing to
build).
**Note**, however, that `setup()` is also implicitly called many times as part
of test initialization during the `run_*()` step, so it should be a noop on
reentry that merely verifies everything is in order (for example, by checking
whether the binaries it would build already exist).

Unlike the `run_*()` functions, `setup()` also gets called during the prepare
phase, which happens on the **host and target alike**. This creates a problem
with code that is depended on or directly executed during `setup()`. Python
modules that are imported in any pathway leading to `setup()` are needed both
in the host chroot and on the target board to properly support the test. Any
binaries would need to be compiled using the host compiler and either skipped
on the target (incremental `setup()` runs) or cross-compiled again and
dynamically chosen while running on the target.

**More importantly**, in the Chromium OS scenario, doing any write operations
inside the `setup()` function will lead to **access denied failures**, because
tests are run from the intermediate read-only location.

Given the above, building is as easy as **emerge**-ing the autotest ebuild that
contains our test.
```
$ emerge-${board} ${test_ebuild}
```

*Currently, tests are organized within these notable ebuilds* (see the
[FAQ](#Q1_What-autotest-ebuilds-are-out-there_) for the full list):

-   chromeos-base/autotest-tests - The main ebuild, handling most of the
    autotest.git repository and its client and server tests.
-   chromeos-base/autotest-tests-* - Various ebuilds that build other parts of
    autotest.git.
-   chromeos-base/chromeos-chrome - chrome tests; the tests that are part of
    chrome

### Building tests selectively

Test cases built by ebuilds generally come in large bundles. Sometimes only a
subset, or generally a different set, of the tests provided by a given ebuild
is desired. That is achieved using a
[USE_EXPANDed](http://devmanual.gentoo.org/general-concepts/use-flags/index.html)
flag called TESTS.

All USE flags (and therefore tests) have a default state, either enabled (+) or
disabled (-), specified directly in the ebuild, that can be manually overridden
from the command line. There are two ways to do that.

-   Non-incremental - Simply override the default selection with an entirely
    new selection, ignoring the defaults. This is useful if you develop a
    single test and don’t want to waste time building the others.

        $ TESTS="test1 test2" emerge-${board} ${ebuild}

-   Incremental - All USE_EXPAND flags are also accessible as USE flags, with
    the appropriate prefix, and can be used incrementally to selectively
    enable/disable tests in addition to the defaults. This can be useful if you
    aim to enable a test that is disabled by default and want to test it
    locally.

        $ USE="test_to_be_enabled -test_to_be_disabled" emerge-${board} \
          ${ebuild}

For operations across all tests, portage supports the incremental USE wildcard
"tests_*", which selects all tests at once (prefix it with - to deselect them).
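
For instance, to enable or disable everything a given ebuild provides at once
(assuming a portage recent enough to support the wildcard):

```
$ USE="tests_*" emerge-${board} ${ebuild}     # build all tests
$ USE="-tests_*" emerge-${board} ${ebuild}    # build no tests
```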

**NOTE**: Both incremental and non-incremental methods can be set/overridden by
(in this order): the ebuild (default values), make.profile, make.conf,
/etc/portage, the command line (see above). That means that any settings
provided on the emerge command line override everything else.

## Running tests

**NOTE**: In order to run tests on your device, it needs to have a
[test-enabled image](#W4_Create-and-run-a-test-enabled-image-on-your-device).

When running tests, fundamentally, you want to either:

-   Run sets of tests manually - Use case: Developing test cases

    Take your local test sources, modify them, and then attempt to run them on
    a target machine using autotest. You are generally responsible for making
    sure that the machine is imaged to a test image, and that the image
    contains all the dependencies needed to support your tests.

-   Verify a given image - Use case: Developing the projects subject to testing

    Take an image, re-image the target device, and run a test suite on it. This
    requires either the use of build-time autotest artifacts or their
    reproduction by not modifying or re-syncing your sources after an image has
    been built.

### Running tests on a machine

Autotests are run with a tool called
[test_that](https://chromium.googlesource.com/chromiumos/third_party/autotest/+/refs/heads/master/docs/test-that.md).
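
A typical invocation, with placeholders for your board, device address, and
test, looks like:

```
$ test_that --board=${board} ${DUT_IP} ${TEST}
```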

### Running tests in a VM - cros_run_vm_test

VM tests are conveniently wrapped into a script `cros_run_vm_test` that sets up
the VM using a given image and then calls `test_that`. This is run by builders
to test using the Smoke suite.

If you want to run your tests in a VM (see
[here](https://www.chromium.org/chromium-os/how-tos-and-troubleshooting/running-chromeos-image-under-virtual-machines)
for basic instructions for setting up KVM with cros images), be aware of the
following:

-   `cros_run_vm_test` starts up a VM and runs autotests using the port
    specified (defaults to 9222). As an example:

        $ ./bin/cros_run_vm_test --test_case=suite_Smoke \
          --image_path=<my_image_to_start or don't set to use most recent build> \
          --board=x86-generic

-   The emulator command line redirects localhost port 9222 to the emulated
    machine's port 22 to allow you to ssh into the emulator. For Chromium OS to
    actually listen on this port you must append the `--test_image` parameter
    when you run the `./image_to_vm.sh` script, or perhaps run the
    `mod_image_for_test.sh` script instead.
-   You can then run tests on the correct ssh port with something like

        $ test_that --board=x86-generic localhost:9222 'f:.*platform_BootPerf/control'

-   To set the sudo password, run set_shared_user_password. Then within the
    emulator you can press Ctrl-Alt-T to get a terminal, and sudo using this
    password. This will also allow you to ssh into the unit with, e.g.

        $ ssh -p 9222 root@localhost

-   Warning: After
    [crbug/710629](https://bugs.chromium.org/p/chromium/issues/detail?id=710629),
    'betty' is the only board regularly run through pre-CQ and CQ VMTest and so
    is the most likely to work at ToT. 'betty' is based on 'amd64-generic',
    though, so 'amd64-generic' is likely to also work for most (non-ARC) tests.


## Result log layout structure

For information regarding the layout structure, please refer to the following:
[autotest-results-logs](https://www.chromium.org/chromium-os/testing/test-code-labs/autotest-client-tests/autotest-results-logs)

### Interpreting test results

Running autotest will produce a lot of output, which is probably not too
informative if you have not used autotest before. At the end of the `test_that`
run, you will see a summary of pass/failure status, along with performance
results:

```
22:44:30 INFO | Using installation dir /home/autotest
22:44:30 ERROR| Could not install autotest from repos
22:44:32 INFO | Installation of autotest completed
22:44:32 INFO | GOOD  ----  Autotest.install timestamp=1263509072 localtime=Jan 14 22:44:32
22:44:33 INFO | Executing /home/autotest/bin/autotest /home/autotest/control phase 0
22:44:36 INFO | START  ---- ----  timestamp=1263509075 localtime=Jan 14 14:44:35
22:44:36 INFO |  START   sleeptest sleeptest timestamp=1263509076 localtime=Jan 14 14:44:36
22:44:36 INFO | Bundling /usr/local/autotest/client/tests/sleeptest into test-sleeptest.tar.bz2
22:44:40 INFO |   GOOD  sleeptest  sleeptest  timestamp=1263509079 localtime=Jan 14 14:44:39 completed successfully
22:44:40 INFO |   END GOOD  sleeptest sleeptest  timestamp=1263509079 localtime=Jan 14 14:44:39
22:44:42 INFO | END GOOD ---- ---- timestamp=1263509082 localtime=Jan 14 14:44:42
22:44:44 INFO | Client complete
22:44:45 INFO | Finished processing control file
```

`test_that` will leave around a temp directory populated with diagnostic
information:

```
Finished running tests. Results can be found in /tmp/test_that_results_j8GoWH or /tmp/test_that_latest
```

This directory will contain a directory per test run. Each directory contains
the logs pertaining to that test run.

In that directory, some interesting files are:

-   `${TEST}/debug/client.DEBUG` - the most detailed output from running the
    client-side test
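
Since the exact per-run directory names vary, a quick way to locate that log
after a run is:

```
$ find /tmp/test_that_latest -name client.DEBUG
```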

### Running tests automatically, Suites

Suites provide a mechanism to group tests together into test groups. They also
serve as hooks for automated runs of tests verifying various builds. Most
importantly, these are the BVT (build verification tests) and Smoke (a subset
of BVT that can run in a VM).

Please refer to the [suites documentation](https://www.chromium.org/chromium-os/testing/test-suites).

## Writing and developing tests

### Writing a test

For understanding and writing the actual python code for autotest, please refer
to the [Developer FAQ](http://www.chromium.org/chromium-os/testing/autotest-developer-faq#TOC-Writing-Autotests)

Currently, all code should be placed in a standard layout inside the
autotest.git repository, unless technical reasons require otherwise. That said,
the following text applies no matter which repository the code is placed in.

For a test to be fully functional in Chromium OS, it has to be associated with
an ebuild. It is generally possible to run tests without an ebuild using
`test_that`, but this is discouraged, as the same will not function with other
parts of the system.

### Making a new test work with ebuilds

The choice of ebuild depends on the location of its sources. Structuring tests
into more, smaller ebuilds (as opposed to one ebuild per source repository)
serves several purposes:

-   Categorisation - Grouping similar tests together, possibly with deps they
    use exclusively.
-   Parallelisation - Multiple independent ebuilds can build entirely in
    parallel.
-   Dependency tracking - Larger bundles of tests depend on more system
    packages, without proper resolution of which dependency belongs to which
    test. This also increases parallelism.

The current ebuild structure is largely a result of breaking off the biggest
blockers for parallelism, i.e. tests depending on chrome or similar packages,
and as such, using any of the current ebuilds should be sufficient. (See the
FAQ for a listing of ebuilds.)

After choosing the proper ebuild to add your test into, the test (in the form
“+tests_<testname>”) needs to be added to the IUSE_TESTS list that all autotest
ebuilds have. Failing to do so will simply make ebuilds ignore your test
entirely. As with all USE flags, prepending it with + means the test will be
enabled by default; that should be your default choice, unless you want to keep
the test experimental for your own use, or turn the USE flag on explicitly by
other means, e.g. in a config for a particular board only.

Should a **new ebuild** be started, it should be added to the
**chromeos-base/autotest-all** package, which is a meta-ebuild depending on all
autotest ebuild packages that can be built. autotest-all is used by the build
system to automatically build all tests that we have and therefore keep them
from randomly breaking.
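
The corresponding autotest-all change might look like this sketch (the new
package name is hypothetical):

```
# chromeos-base/autotest-all-0.0.1.ebuild (sketch)
RDEPEND="${RDEPEND}
    chromeos-base/autotest-tests-mynewtests
"
```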

### Deps

Autotest uses deps to provide de-facto dependencies within its ecosystem. A dep
is a directory in ‘**client/deps**’ with a structure similar to a test case
without a control file. A test case that depends on a dep will invoke the dep’s
`setup()` function in its own `setup()` function and will be able to access the
files provided by the dep. Note that autotest deps have nothing to do with
system dependencies.

As the calls to a dep are internal autotest code, it is not possible to
automatically detect them and turn them into inter-package dependencies on the
ebuild level. For that reason, deps should either be
[provided](#Ebuild-setup_autotest-eclass) by the same ebuild that builds the
tests that consume them, or ebuild dependencies need to be declared manually
between the dep ebuild and the test ebuild that uses it. An
**autotest-deponly** eclass exists to provide a solution for ebuilds that build
only deps and no tests. A number of deponly ebuilds already exist.
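
Declaring such a manual dependency in the consuming test ebuild might look like
this sketch (the dep package name is hypothetical):

```
# In the test ebuild: ensure the dep is built and staged first.
DEPEND="chromeos-base/autotest-deps-foo"
RDEPEND="${DEPEND}"
```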

Common deps are:

-   chrome_test - Intending to use any of the test binaries produced by chrome.
-   pyauto_dep - Using pyauto for your code.

### Test naming conventions

Generally, the naming convention runs like this:

\<component\>_\<TestName\>

That convention names the directory containing the test code. It also names
the .py file containing the test code, and the class of the Autotest test.

If there's only one control file, it's named control. The test's NAME in the
control file is \<component\>_\<TestName\>, like the directory and .py
file.

If there are multiple control files for a test, they are named
control.\<testcase\>. These tests' NAMEs are then
\<component\>_\<TestName\>.\<testcase\>.
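
For instance, a hypothetical test `network_WiFiSmoke` with two control files
would look like:

```
client/site_tests/network_WiFiSmoke/
    network_WiFiSmoke.py   # defines class network_WiFiSmoke
    control.wep            # NAME = 'network_WiFiSmoke.wep'
    control.wpa            # NAME = 'network_WiFiSmoke.wpa'
```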

## Common workflows

### W1. Develop and iterate on a test

1.  Set up the environment.

        $ cd ~/trunk/src/third_party/autotest/files/
        $ export TESTS="<the test cases to iterate on>"
        $ EBUILD=<the ebuild that contains TEST>
        $ board=<the board on which to develop>

2.  Ensure cros_workon is started

        $ cros_workon --board=${board} start ${EBUILD}
        $ repo sync # Necessary only if you use minilayout.

3.  Make modifications (on the first run, you may want to just do steps 4 and 5
    to verify everything works before you touch it & break it)

        $ ...

4.  Build the test (TESTS= is not necessary if you exported it before)

        $ emerge-$board $EBUILD

5.  Run the test to make sure it works before you touch it

        $ test_that <machine IP> ${TESTS}

6.  Go to step 3 to iterate
7.  Clean up the environment

        $ cros_workon --board=${board} stop ${EBUILD}
        $ unset TESTS

### W2. Creating a test - steps and checklist

When creating a test, the following steps should be done/verified.

1.  Create the actual test directory, the main test files/sources, and at
    least one control file
2.  Find the appropriate ebuild package and start working on it:

        $ cros_workon --board=${board} start <package>

3.  Add the new test into the IUSE_TESTS list of the 9999 ebuild
4.  Try building (make sure it’s the 9999 version being built):

        $ TESTS=<test> emerge-$board <package>

5.  Try running:

        $ test_that <IP> <test>

6.  Iterate on 4 and 5, modifying the source, until happy with the initial
    version.
7.  Commit the test source first; when it is safely in, commit the 9999 ebuild
    version change.
8.  Cleanup

        $ cros_workon --board=${board} stop <package>

### W3. Splitting an autotest ebuild into two

Removing a test from one ebuild and adding it to another in the same revision
causes portage file collisions unless counter-measures are taken. Generally,
some things routinely go wrong in this process, so this checklist should serve
to help with that.

1.  We have ebuild **foo-0.0.1-r100** with **test** and would like to split
    that test off into ebuild **bar-0.0.1-r1**.
    Assume that:
    -   both ebuilds are using cros-workon (because it’s likely the case).
    -   foo is used globally (e.g. autotest-all depends on it), rather than
        being just some personal ebuild
2.  Remove **test** from foo-{0.0.1-r100,9999}; uprev foo-0.0.1-r100 (to -r101)
3.  Create bar-9999 (making a copy of foo and replacing IUSE_TESTS may be a
    good start), with IUSE_TESTS containing just the entry for **test**
4.  Verify the package dependencies of test. Make bar-9999 only depend on what
    is needed for test, and remove those dependencies from foo-9999, unless
    they are needed by tests that remained.
5.  Add a blocker. Since bar installs files owned by foo-0.0.1-r100 and
    earlier, the blocker’s format will be:

        RDEPEND="!<=foo-0.0.1-r100"

6.  Add a dependency on the new version of bar to
    chromeos-base/autotest-all-0.0.1

        RDEPEND="bar"

7.  Change the dependency on foo in chromeos-base/autotest-all-0.0.1 to be
    version locked to the new rev:

        RDEPEND=">foo-0.0.1-r100"

8.  Uprev (move) the autotest-all-0.0.1-rX symlink by one.
9.  Publish all as the same change list, have it reviewed, and push.

### W4. Create and run a test-enabled image on your device

1.  Choose which board you want to build for (we'll refer to this as ${BOARD},
    which is for example "x86-generic").
2.  Set up a proper portage build chroot setup. Go through the normal process
    of setup_board if you haven't already.

        $ ./build_packages --board=${BOARD}

3.  Build a test image.

        $ ./build_image --board=${BOARD} test

4.  Install the Chromium OS testing image to your target machine. This is
    through the standard mechanisms: either USB, or by reimaging a device
    currently running a previously built Chromium OS image modded for test, or
    by entering the shell on the machine and forcing an auto update to your
    machine when it's running a dev server. For clarity we'll walk through two
    common ways below, but if you already know about this, just do what you
    normally do.

    -   If you choose to use a USB boot, you first put the image on USB and run
        this from outside the chroot.

            $ ./image_to_usb.sh --to /dev/sdX --board=${BOARD} \
              --image_name=chromiumos_test_image.bin

    -   Alternatively, if you happen to already have a machine running an image
        modified for test and you know its IP address (${REMOTE_IP}), you can
        avoid using a USB key and reimage it with a freshly built image by
        running this from outside the chroot:

            $ ./image_to_live.sh --remote=${REMOTE_IP} \
              --image=`./get_latest_image.sh \
              --board=${BOARD}`/chromiumos_test_image.bin

This will automatically start a dev server, ssh to your machine, cause it to
update from that dev server using memento_updater, reboot, wait for the reboot,
print out the new version updated to, and shut down your dev server.

## Troubleshooting/FAQ

### Q1: What autotest ebuilds are out there?

Note that the list of ebuilds may differ per board, as each board has a
potentially different list of overlays. To find all autotest ebuilds for board
foo, you can run:
```
$ board=foo
$ for dir in $(portageq-${board} envvar PORTDIR_OVERLAY); do
     find "${dir}" -name '*.ebuild' | xargs grep "inherit.*autotest" | \
     grep "9999" | \
     cut -f1 -d: | \
     sed -e 's/.*\/\([^/]*\)\/\([^/]*\)\/.*\.ebuild/\1\/\2/'
   done
```
(Note that newer versions of portage deprecate `portageq envvar
PORTDIR_OVERLAY` in favour of `portageq repositories_configuration`.)

### Q2: I see a test of the name ‘greattests_TestsEverything’ in build output/logs/whatever! How do I find which ebuild builds it?

All ebuilds have lists of tests exported as **USE_EXPANDed** lists called
**TESTS**. An expanded USE flag can be searched for in the same way as other
USE flags, but with the appropriate prefix; in this case, you would search for
**tests_greattests_TestsEverything**:
```
$ use_search=tests_greattests_TestsEverything
$ equery-$board hasuse $use_search
 * Searching for USE flag tests_greattests_TestsEverything ...
 * [I-O] [  ] some_ebuild_package_name:0
```

This will, however, only work on ebuilds which are **already installed**, i.e.
on their potentially outdated versions.
**Alternatively**, you can run a pretended emerge (emerge -p) of all autotest
ebuilds and scan the output.
```
$ emerge -p ${all_ebuilds_from_Q1} | grep -C 10 "${use_search}"
```

### Q3: I have an ebuild ‘foo’, where are its sources?

Generally speaking, one has to look at the ebuild source to figure that
question out (and it shouldn’t be hard). However, all present autotest ebuilds
(at the time of this writing) are also ‘cros-workon’, and for those, this
should always work:
```
$ ebuild_search=foo
$ ebuild $(equery-$board which $ebuild_search) info
CROS_WORKON_SRCDIR="/home/you/trunk/src/third_party/foo"
CROS_WORKON_PROJECT="chromiumos/third_party/foo"
```

### Q4: I have an ebuild, what tests does it build?

You can run a pretended emerge on the ebuild and observe the ‘TESTS=’
statement:
```
$ ebuild_name=foo
$ emerge-$board -pv ${ebuild_name}
These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild   R   ] foo-foo_version to /build/$board/ USE="autox hardened tpmtools
xset -buildcheck -opengles" TESTS="enabled_test1 enabled_test2 ... enabled_testN
-disabled_test1 ... -disabled_testN" 0 kB [1]
```

Alternately, you can use equery, which will list tests with the USE_EXPAND
prefix:
```
$ equery-$board uses ${ebuild_name}
[ Legend : U - final flag setting for installation]
[        : I - package is installed with flag     ]
[ Colors : set, unset                             ]
 * Found these USE flags for chromeos-base/autotest-tests-9999:
 U I
 + + autotest                                    : <unknown>
 + + autox                                       : <unknown>
 + + buildcheck                                  : <unknown>
 + + hardened                                    : activate default security enhancements for toolchain (gcc, glibc, binutils)
 - - opengles                                    : <unknown>
 + + tests_enabled_test                          : <unknown>
 - - tests_disabled_test                         : <unknown>
```

### Q5: I’m working on some test sources, how do I know which ebuilds to cros_workon start in order to properly propagate?

You should always cros_workon start all ebuilds that have files that you
touched. If you’re interested in a particular file/directory that is installed
in `/build/$board/usr/local/autotest/` and would like to know which package has
provided that file, you can use equery:

```
$ equery-$board belongs /build/${board}/usr/local/autotest/client/site_tests/foo_bar/foo_bar.py
 * Searching for <filename> ...
chromeos-base/autotest-tests-9999 (<filename>)
```

DON’T forget to use equery-$board; plain equery will also work, but will never
return anything useful.

As a rule of thumb, if you work on anything from the core autotest framework or
shared libraries (anything besides
{server,client}/{tests,site_tests,deps,profilers,config}), it belongs to
chromeos-base/autotest. Individual test cases will each belong to a particular
ebuild; see Q2.

It is important to cros_workon start every ebuild involved.
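
To double-check what you currently have started (assuming your cros_workon
provides the `list` subcommand):

```
$ cros_workon --board=${board} list
```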

### Q6: I created a test, added it into an ebuild, emerged it, and I’m getting access denied failures. What did I do wrong?

Your test’s `setup()` function (which runs on the host before being uploaded)
is probably trying to write into the read-only intermediate location. See the
[explanation](#Building-tests).