-------------------------------------------------------------------------
drawElements Quality Program Test Specification
-----------------------------------------------

Copyright 2014 The Android Open Source Project

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-------------------------------------------------------------------------
    Performance tests

This test specification describes general techniques and methodologies used
in performance test cases.


Measuring performance:

Performance test cases must satisfy the following conditions:

 + The result should represent the performance of the tested feature or
   use case.
   - Use of any other features should be kept to a minimum.
   - Architecture-specific optimizations should not affect the result unless
     they target the feature being tested.
   - Measurement overhead should be minimized.

 + Hardware must be utilized to the maximum.
   - In most cases total throughput is more important than latency.
   - Latency is measured where it matters.
   - Test cases should behave like other graphics-intensive applications on
     that platform. This usually means that test cases should render multiple
     frames and use platform-defined standard mechanisms to display each
     frame on screen.

 + The test result should be stable across test runs.
   - The final result should be a function of the results from multiple
     iterations if performance is expected to vary between iterations (due
     to nondeterministic scheduling, for example).
   - A simple average may not always be the right function; from a user
     experience perspective, stability of performance is often more
     important.

 + Test result values should be meaningful.
   - The result should be meaningful without knowledge of the exact
     implementation details of the test case.
   - Good example: millions of pixels or vertices with shader X per second
   - Bad example: frames per second

 + Results can be compared across different implementations and
   configurations.
   - If possible, configuration changes (viewport size, for example) should
     not affect the test result.
   - Where configuration may affect the result, the test log should include
     the configuration details.


Test output:

Test cases will log at least the following variables, if available:

 * Viewport size
 * Color, depth, stencil and multisample bits
 * Number of draw calls
 * Shader program source
 * Automatic calibration values
 * Individual iteration times (if the iteration count is reasonable)
 * Minimum and maximum iteration times
 * Average iteration time
 * Minimum and maximum computed performance
 * Average computed performance
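
As an illustration of how the logged statistics relate to each other, the
following self-contained C++ sketch derives the minimum, maximum and average
values from individual iteration times. It is a hypothetical example, not
part of the actual test framework; the function name and the workload figure
are assumptions made for the sketch, and the averages here are computed from
the average time:

    #include <algorithm>
    #include <cstdio>
    #include <numeric>
    #include <vector>

    // Hypothetical helper: derive the logged statistics from per-iteration
    // times (in seconds) and the per-iteration workload in megapixels.
    void logIterationStatistics (const std::vector<double>& iterationTimes,
                                 double megaPixelsPerIteration)
    {
        const double minTime = *std::min_element(iterationTimes.begin(), iterationTimes.end());
        const double maxTime = *std::max_element(iterationTimes.begin(), iterationTimes.end());
        const double avgTime = std::accumulate(iterationTimes.begin(), iterationTimes.end(), 0.0)
                               / (double)iterationTimes.size();

        std::printf("Min iteration time: %.6f s\n", minTime);
        std::printf("Max iteration time: %.6f s\n", maxTime);
        std::printf("Avg iteration time: %.6f s\n", avgTime);

        // Performance is inversely proportional to iteration time, so the
        // minimum time yields the maximum computed performance and vice versa.
        std::printf("Max performance: %.2f MPix/s\n", megaPixelsPerIteration / minTime);
        std::printf("Min performance: %.2f MPix/s\n", megaPixelsPerIteration / maxTime);
        std::printf("Avg performance: %.2f MPix/s\n", megaPixelsPerIteration / avgTime);
    }

    int main (void)
    {
        // Example data: four iterations, one 1920x1080 fill (~2.07 MPix) each.
        const std::vector<double> times = { 0.016, 0.017, 0.015, 0.016 };
        logIterationStatistics(times, 2.0736);
        return 0;
    }
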
Shader execution tests:

Shader execution tests measure the performance of a single pair of vertex
and fragment shaders, optionally combined with fixed-function per-fragment
operations such as blending.

Each iteration (frame) renders N grids of quads. Each grid, drawn with a
single draw call, fills the screen entirely without any overlap between
quads. The number of quads depends on the targeted vertex load; test cases
that target only the fragment shader use a single quad. N (the number of
times the screen is filled) is either chosen automatically to target a
certain iteration time or specified explicitly by the test case.

Test cases that measure fragment-side performance must make sure that
fragments are not discarded early due to Z-culling or any other
optimization. The preferred way to achieve this is to enable simple additive
blending; the extra cost is one framebuffer read and a per-channel
saturating addition for each fragment.

The result is MPix/s, MVert/s, or a weighted sum of both, depending on the
test case type.
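
For illustration, the sketch below shows one possible way to set up the
additive blending described above and to convert a measured iteration time
into an MPix/s figure. It is a minimal example under stated assumptions, not
the actual test implementation: the function names are hypothetical and a
current GLES2 rendering context is assumed.

    #include <GLES2/gl2.h>

    // Make the framebuffer contents depend on every fragment so the
    // implementation cannot discard fragments early (Z-culling etc.).
    static void setupAdditiveBlend (void)
    {
        glDisable(GL_DEPTH_TEST);
        glEnable(GL_BLEND);
        glBlendEquation(GL_FUNC_ADD);
        glBlendFunc(GL_ONE, GL_ONE); // dst' = src + dst, saturating per channel
    }

    // Millions of shaded pixels per second: the viewport is filled
    // numScreenFills (N) times per iteration without quad overlap.
    static double computeMPixPerSec (int viewportW, int viewportH,
                                     int numScreenFills, double iterationTimeSec)
    {
        const double pixels = (double)viewportW * (double)viewportH
                            * (double)numScreenFills;
        return pixels / (iterationTimeSec * 1000000.0);
    }

With GL_ONE/GL_ONE blending each rendered fragment is added to the existing
framebuffer value, so the final image depends on all fragments and early
depth or occlusion optimizations cannot skip the fragment shader work.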