Correctness Testing
===================

Skia correctness testing is primarily served by a tool named DM.
This is a quickstart to building and running DM.

~~~
$ ./gyp_skia
$ ninja -C out/Debug dm
$ out/Debug/dm -v -w dm_output
~~~

When you run this, you may notice your CPU peg to 100% for a while, then taper
off to 1 or 2 active cores as the run finishes.  This is intentional.  DM is
very multithreaded, but some of the work, particularly GPU-backed work, is
still forced to run on a single thread.  You can use `--threads N` to limit DM
to N threads if you like.  This can sometimes be helpful on machines that have
many CPU cores but relatively little RAM.
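
For example, to keep such a machine responsive you might cap DM at a handful of
threads; the count of 4 here is only an illustration:
~~~
$ out/Debug/dm -v -w dm_output --threads 4
~~~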

As DM runs, you ought to see a giant spew of output that looks something like this.
~~~
Skipping nonrendering: Don't understand 'nonrendering'.
Skipping angle: Don't understand 'angle'.
Skipping nvprmsaa4: Could not create a surface.
492 srcs * 3 sinks + 382 tests == 1858 tasks

(  25MB  1857) 1.36ms   8888 image mandrill_132x132_12x12.astc-5-subsets
(  25MB  1856) 1.41ms   8888 image mandrill_132x132_6x6.astc-5-subsets
(  25MB  1855) 1.35ms   8888 image mandrill_132x130_6x5.astc-5-subsets
(  25MB  1854) 1.41ms   8888 image mandrill_132x130_12x10.astc-5-subsets
(  25MB  1853) 151µs    8888 image mandrill_130x132_10x6.astc-5-subsets
(  25MB  1852) 154µs    8888 image mandrill_130x130_5x5.astc-5-subsets
                                  ...
( 748MB     5) 9.43ms   unit test GLInterfaceValidation
( 748MB     4) 30.3ms   unit test HalfFloatTextureTest
( 748MB     3) 31.2ms   unit test FloatingPointTextureTest
( 748MB     2) 32.9ms   unit test DeferredCanvas_GPU
( 748MB     1) 49.4ms   unit test ClipCache
( 748MB     0) 37.2ms   unit test Blur
~~~
Do not panic.

As you become more familiar with DM, this spew may be a bit annoying. If you
remove -v from the command line, DM will spin its progress on a single line
rather than print a new line for each status update.
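
For example, the same run as before, minus -v, with the quieter single-line
progress display:
~~~
$ out/Debug/dm -w dm_output
~~~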

Don't worry about the "Skipping something: Here's why." lines at startup.  DM
supports many test configurations, which are not all appropriate for all
machines.  These lines are a sort of FYI, mostly in case DM can't run some
configuration you might be expecting it to run.

The next line is an overview of the work DM is about to do.
~~~
492 srcs * 3 sinks + 382 tests == 1858 tasks
~~~

DM has found 382 unit tests (code linked in from tests/), and 492 other drawing
sources.  These drawing sources may be GM integration tests (code linked in
from gm/), image files (from `--images`, which defaults to "resources") or .skp
files (from `--skps`, which defaults to "skps").  You can control the types of
sources DM will use with `--src` (default, "tests gm image skp").
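
For example, to run only the image sources from a directory of your own (the
path below is just a placeholder), something like this should work:
~~~
$ out/Debug/dm --src image --images path/to/my/images -w dm_output
~~~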

DM has found 3 usable ways to draw those 492 sources.  This is controlled by
`--config`, which today defaults to "565 8888 gpu nonrendering angle nvprmsaa4".
DM has skipped nonrendering, angle, and nvprmsaa4, leaving three usable configs:
565, 8888, and gpu.  These three name different ways to draw using Skia:

  -    565:  draw using the software backend into a 16-bit RGB bitmap
  -    8888: draw using the software backend into a 32-bit RGBA bitmap
  -    gpu:  draw using the GPU backend (Ganesh) into a 32-bit RGBA bitmap

Sometimes DM calls these configs, sometimes sinks.  Sorry.  There are many
possible configs but generally we pay most attention to 8888 and gpu.
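
For instance, to restrict a run to just those two, you can pass them to
`--config` the same way the default list is written; treat the exact invocation
as a sketch:
~~~
$ out/Debug/dm --config 8888 gpu -w dm_output
~~~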

DM always tries to draw all sources into all sinks, which is why we multiply
492 by 3.  The unit tests don't really fit into this source-sink model, so they
stand alone.  A couple thousand tasks is pretty normal.  Let's look at the
status line for one of those tasks.
~~~
(  25MB  1857) 1.36ms   8888 image mandrill_132x132_12x12.astc-5-subsets
~~~

This status line tells us several things.

First, it tells us that at the time we wrote the status line, the maximum
amount of memory DM had ever used was 25MB.  Note this is a high water mark,
not the current memory usage.  This is mostly useful for us to track on our
buildbots, some of which run perilously close to the system memory limit.

Next, the status line tells us that there are 1857 unfinished tasks, either
currently running or waiting to run.  We generally run one task per hardware
thread available, so on a typical laptop there are probably 4 or 8 running at
once.  Sometimes the counts appear to show up out of order, particularly at DM
startup; it's harmless, and doesn't affect the correctness of the run.

Next, we see this task took 1.36 milliseconds to run.  Generally, the precision
of this timer is around 1 microsecond.  The time is purely there for
informational purposes, to make it easier for us to find slow tests.

Finally we see the configuration and name of the test we ran.  We drew the test
"mandrill_132x132_12x12.astc-5-subsets", which is an "image" source, into an
"8888" sink.
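
If you ever want to re-run just that one task, one way is to combine `--config`
and `--src` with the `--match` flag shown near the end of this page; the
pattern here is only an example:
~~~
$ out/Debug/dm --config 8888 --src image --match mandrill_132x132_12x12 -w dm_output
~~~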

When DM finishes running, you should find a directory containing a file named
dm.json and some nested directories filled with lots of images.
~~~
$ ls dm_output
565     8888    dm.json gpu

$ find dm_output -name '*.png'
dm_output/565/gm/3x3bitmaprect.png
dm_output/565/gm/aaclip.png
dm_output/565/gm/aarectmodes.png
dm_output/565/gm/alphagradients.png
dm_output/565/gm/arcofzorro.png
dm_output/565/gm/arithmode.png
dm_output/565/gm/astcbitmap.png
dm_output/565/gm/bezier_conic_effects.png
dm_output/565/gm/bezier_cubic_effects.png
dm_output/565/gm/bezier_quad_effects.png
                ...
~~~

The directories are nested first by sink type (`--config`), then by source type (`--src`).
The image from the task we just looked at, "8888 image mandrill_132x132_12x12.astc-5-subsets",
can be found at dm_output/8888/image/mandrill_132x132_12x12.astc-5-subsets.png.

dm.json is used by our automated testing system, so you can ignore it if you
like.  It contains a listing of each test run and a checksum of the image
generated for that run.  (Boring technical detail: it is not a checksum of the
.png file, but rather a checksum of the raw pixels used to create that .png.)
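
If you're curious, dm.json is ordinary JSON, so any pretty-printer will let you
skim it; its exact fields are an implementation detail of the testing system
and may change:
~~~
$ python -m json.tool dm_output/dm.json | head -20
~~~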

Unit tests don't generally output anything but a status update when they pass.
If a test fails, DM will print out its assertion failures, both at the time
they happen and then again all together after everything is done running.
These failures are also included in the dm.json file.

DM has a simple facility to compare against the results of a previous run:
~~~
$ ./gyp_skia
$ ninja -C out/Debug dm
$ out/Debug/dm -w good

   (do some work)

$ ./gyp_skia
$ ninja -C out/Debug dm
$ out/Debug/dm -r good -w bad
~~~
When using `-r`, DM will display a failure for any test that didn't produce the
same image as the `good` run.

For anything fancier, I suggest using skdiff:
~~~
$ ./gyp_skia
$ ninja -C out/Debug dm
$ out/Debug/dm -w good

   (do some work)

$ ./gyp_skia
$ ninja -C out/Debug dm
$ out/Debug/dm -w bad

$ ninja -C out/Debug skdiff
$ mkdir diff
$ out/Debug/skdiff good bad diff

  (open diff/index.html in your web browser)
~~~

That's the basics of DM.  DM supports many other modes and flags.  Here are a
few examples you might find handy.
~~~
$ out/Debug/dm --help        # Print all flags, their defaults, and a brief explanation of each.
$ out/Debug/dm --src tests   # Run only unit tests.
$ out/Debug/dm --nocpu       # Test only GPU-backed work.
$ out/Debug/dm --nogpu       # Test only CPU-backed work.
$ out/Debug/dm --match blur  # Run only work with "blur" in its name.
$ out/Debug/dm --dryRun      # Don't really do anything, just print out what we'd do.
~~~
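
These flags compose, so you can narrow a run quite precisely.  As a sketch,
checking only GPU-backed GM tests whose names mention blur might look like:
~~~
$ out/Debug/dm --src gm --config gpu --match blur -w dm_output
~~~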