# ANGLE Performance Tests

`angle_perftests` is a standalone microbenchmark suite that contains tests
for the OpenGL API. `angle_trace_tests` is a suite that runs captured traces
for correctness and performance. Because the traces contain confidential
data, they are not publicly available. For more details on ANGLE's tracer,
please see the [docs](../restricted_traces/README.md).

The tests currently run on the Chromium ANGLE infrastructure and report
results to the [Chromium perf dashboard](https://chromeperf.appspot.com/report).
Please refer to the [public dashboard docs][DashboardDocs] for help.

[DashboardDocs]: https://chromium.googlesource.com/catapult/+/HEAD/dashboard/README.md

## Running the Tests

You can follow the usual instructions to [check out and build ANGLE](../../../doc/DevSetup.md).
Build the `angle_perftests` or `angle_trace_tests` targets. Note that all
test scores are higher-is-better. You should also ensure `is_debug=false` in
your build. Running with `angle_assert_always_on` or debug validation enabled
is not recommended.

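For reference, a benchmarking build's `args.gn` might look like the following minimal sketch; adapt it to your platform and checkout:

```gn
# Sketch of args.gn for performance testing (adjust as needed).
is_debug = false
# Leave angle_assert_always_on unset (false) and debug validation off
# so assertion overhead doesn't skew the results.
```
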
Variance can be a problem when benchmarking. We have a test harness to run
tests repeatedly to find a lower-variance measurement. See
[`src/tests/run_perf_tests.py`][RunPerfTests].

To use the script, first build `angle_perftests` or `angle_trace_tests`, set
your working directory to your build directory, and invoke the
`run_perf_tests.py` script. Use `--test-suite` to specify your test suite,
and `--filter` to specify a test filter.

[RunPerfTests]: https://chromium.googlesource.com/angle/angle/+/main/scripts/perf_test_runner.py

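For example, an invocation might look like this (the paths and the filter pattern here are illustrative; adjust them to your checkout and output directory):

```shell
cd out/Release
python3 ../../src/tests/run_perf_tests.py \
    --test-suite=angle_perftests \
    --filter='DrawCallPerfBenchmark*'
```
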
### Choosing the Test to Run

You can choose individual tests to run with `--gtest_filter=*TestName*`. To
select a particular ANGLE back-end, add the name of the back-end to the test
filter. For example: `DrawCallPerfBenchmark.Run/gl` or
`DrawCallPerfBenchmark.Run/d3d11`. Many tests have sub-tests that run
slightly different code paths. You might need to experiment to find the right
sub-test and its name.

### Null/No-op Configurations

ANGLE implements a no-op driver for OpenGL, D3D11, and Vulkan. To run on
these configurations, use the `gl_null`, `d3d11_null`, or `vulkan_null` test
configurations. These null drivers do no GPU work and skip the driver
entirely, which makes them useful for diagnosing performance overhead in
ANGLE code.

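For example, to measure ANGLE's CPU overhead in a draw-call microbenchmark without touching the GPU or driver (the binary path is illustrative):

```shell
./angle_perftests --gtest_filter=DrawCallPerfBenchmark.Run/vulkan_null
```
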
### Command-line Arguments

Several command-line arguments control how the tests run:

* `--run-to-key-frame`: If the trace specifies a key frame, run to that frame and stop. Traces without a `KeyFrames` entry in their JSON will default to frame 1. This is primarily to save cycles on our bots that do screenshot quality comparison.
* `--enable-trace`: Write a JSON event log that can be loaded in Chrome.
* `--trace-file file`: Name of the JSON event log for `--enable-trace`.
* `--calibration`: Prints the number of steps a test runs in a fixed time. Used by `perf_test_runner.py`.
* `--steps-per-trial x`: Fixed number of steps to run for each test trial.
* `--max-steps-performed x`: Upper bound on the total number of steps for the entire test run. For a quick smoke test, you can specify 1.
* `--render-test-output-dir=dir`: Directory to store test artifacts, including screenshots. Unlike `--screenshot-dir`, `dir` here is always a local directory regardless of platform, and `--save-screenshots` isn't implied.
* `--verbose`: Print extra timing information.
* `--warmup-trials x`: Number of times to warm up the test before starting timing. Defaults to 3.
* `--warmup-steps x`: Maximum number of steps for the warmup loops. Defaults to unlimited. Set to -1 to use a trace's frame count.
* `--no-warmup`: Skip warming up the tests. Equivalent to `--warmup-steps 0`.
* `--calibration-time`: Run each test calibration step in a fixed time. Defaults to 1 second.
* `--trial-time x` or `--max-trial-time x`: Run each test trial under this max time. Defaults to 10 seconds.
* `--fixed-test-time x`: Run the tests until this much time has elapsed.
* `--fixed-test-time-with-warmup x`: Start with a warmup trial, then run the tests until this much time has elapsed.
* `--trials`: Number of times to repeat testing. Defaults to 3.
* `--no-finish`: Don't call glFinish after each test trial.
* `--validation`: Enable serialization validation in the trace tests. Normally used with SwiftShader and retracing.
* `--perf-counters`: Additional performance counters to include in the result output. Separate multiple entries with colons: ':'.

The command-line argument implementations are located in [`ANGLEPerfTestArgs.cpp`](ANGLEPerfTestArgs.cpp).

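The warmup, trial, step, and time limits above interact roughly as sketched below. This is illustrative Python, not ANGLE's actual harness code; it ignores calibration and the fixed-test-time modes, and the function and parameter names are hypothetical:

```python
def run_test(step_cost_s, warmup_trials=3, trials=3,
             steps_per_trial=None, max_trial_time_s=10.0):
    """Simulate the perf-test loop: warmup trials first, then timed trials.

    A trial ends when it hits its step limit (--steps-per-trial) or its
    time limit (--max-trial-time), whichever comes first.
    """
    results = []
    for trial in range(warmup_trials + trials):
        elapsed = 0.0
        steps = 0
        while ((steps_per_trial is None or steps < steps_per_trial)
               and elapsed < max_trial_time_s):
            elapsed += step_cost_s  # one "step" of the benchmark
            steps += 1
        if trial >= warmup_trials:  # warmup trials are discarded
            results.append(elapsed / steps)  # per-step wall time
    return results

# Three recorded trials remain after the warmup trials are discarded.
per_step = run_test(step_cost_s=0.001, steps_per_trial=100)
```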
## Test Breakdown

### Microbenchmarks

* [`DrawCallPerfBenchmark`](DrawCallPerf.cpp): Runs a tight loop around DrawArrays calls.
  * `validation_only`: Skips all rendering.
  * `render_to_texture`: Renders to a user Framebuffer instead of the default FBO.
  * `vbo_change`: Applies a Vertex Array change between each draw.
  * `tex_change`: Applies a Texture change between each draw.
* [`UniformsBenchmark`](UniformsPerf.cpp): Tests the performance of updating various numbers of uniforms followed by a DrawArrays call.
  * `vec4`: Tests `vec4` uniforms.
  * `matrix`: Tests Matrix uniforms instead of `vec4`.
  * `multiprogram`: Tests switching Programs between updates and draws.
  * `repeating`: Skips the update of uniforms before each draw call.
* [`DrawElementsPerfBenchmark`](DrawElementsPerf.cpp): Similar to `DrawCallPerfBenchmark` but for indexed DrawElements calls.
* [`BindingsBenchmark`](BindingPerf.cpp): Tests Buffer binding performance. Does no draw call operations.
  * `100_objects_allocated_every_iteration`: Tests repeated glBindBuffer with new buffers allocated each iteration.
  * `100_objects_allocated_at_initialization`: Tests repeated glBindBuffer with the same objects each iteration.
* [`TexSubImageBenchmark`](TexSubImage.cpp): Tests `glTexSubImage` update performance.
* [`BufferSubDataBenchmark`](BufferSubData.cpp): Tests `glBufferSubData` update performance.
* [`TextureSamplingBenchmark`](TextureSampling.cpp): Tests Texture sampling performance.
* [`TextureBenchmark`](TexturesPerf.cpp): Tests Texture state change performance.
* [`LinkProgramBenchmark`](LinkProgramPerfTest.cpp): Tests the performance of `glLinkProgram`.
* [`glmark2`](glmark2.cpp): Runs the glmark2 benchmark.

Many other tests exist; see the documentation in their test classes.

### Trace Tests

* [`TracePerfTest`](TracePerfTest.cpp): Runs replays of restricted traces, not
  available publicly. To enable them, read more in [`RestrictedTraceTests`](../restricted_traces/README.md).

Trace tests take command-line arguments that pick the run configuration:

* `--use-gl=native`: Runs the tests against the default system GLES implementation instead of your local ANGLE.
* `--use-angle=backend`: Picks an ANGLE back-end, e.g. vulkan, d3d11, d3d9, gl, gles, metal, or swiftshader. Vulkan is the default.
* `--offscreen`: Run with an offscreen surface instead of swapping every frame.
* `--vsync`: Run with vsync enabled, and measure CPU and GPU work instead of wall clock time.
* `--minimize-gpu-work`: Modify API calls so that GPU work is reduced to a minimum.
* `--screenshot-dir dir`: Directory to store test screenshots. Implies `--save-screenshots`. On Android this directory is on the device, not local (see also `--render-test-output-dir`). Only implemented in `TracePerfTest`.
* `--save-screenshots`: Save screenshots. Only implemented in `TracePerfTest`.
* `--screenshot-frame <frame>`: Which frame to capture a screenshot of. Defaults to the first frame (1). Only implemented in `TracePerfTest`.

For example, for an endless run with no warmup on SwiftShader, run:

`angle_trace_tests --gtest_filter=TraceTest.trex_200 --use-angle=swiftshader --steps 1000000 --no-warmup`

## Understanding the Metrics

* `cpu_time`: Amount of CPU time consumed by an iteration of the test. This is backed by
  `GetProcessTimes` on Windows, `getrusage` on Linux/Android, and `zx_object_get_info` on Fuchsia.
  * This value may sometimes be larger than `wall_time` because we sum the time spent
    on all CPU threads for the test.
* `wall_time`: Wall time taken to run a single iteration, calculated by dividing the total wall
  clock time by the number of test iterations.
  * For trace tests, each rendered frame is an iteration.
* `gpu_time`: Estimated GPU elapsed time per test iteration. We compute the estimate using GLES
  [timestamp queries](https://www.khronos.org/registry/OpenGL/extensions/EXT/EXT_disjoint_timer_query.txt)
  at the beginning and end of each test loop.
  * For trace tests, this metric is only enabled in `vsync` mode.
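
The relationship between these metrics can be illustrated with a small sketch. The numbers below are hypothetical and this is not harness code; it just restates the definitions above:

```python
def wall_time_per_iter(total_wall_s, iterations):
    # wall_time: total wall-clock time divided by test iterations (frames).
    return total_wall_s / iterations

def cpu_time_per_iter(per_thread_cpu_s, iterations):
    # cpu_time: CPU time summed across ALL threads, divided by iterations.
    # Because it sums threads, it can exceed wall_time on multi-threaded runs.
    return sum(per_thread_cpu_s) / iterations

wall = wall_time_per_iter(5.0, 1000)          # 0.005 s per frame
cpu = cpu_time_per_iter([4.0, 3.0], 1000)     # 0.007 s per frame: > wall_time
```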
135