page.title=Graphics
@jd:body

<!--
    Copyright 2015 The Android Open Source Project

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
-->

<div id="qv-wrapper">
  <div id="qv">
    <h2>In this document</h2>
    <ol id="auto-toc">
    </ol>
  </div>
</div>

<img style="float: right; margin: 0px 15px 15px 15px;"
src="images/ape_fwk_hal_graphics.png" alt="Android Graphics HAL icon"/>

<p>The Android framework offers a variety of graphics rendering APIs for 2D and
3D that interact with manufacturer implementations of graphics drivers, so it
is important to have a good understanding of how those APIs work at a higher
level. This page introduces the graphics hardware abstraction layer (HAL) upon
which those drivers are built.</p>

<p>Application developers draw images to the screen in two ways: with Canvas or
OpenGL. See <a
href="{@docRoot}devices/graphics/architecture.html">System-level graphics
architecture</a> for a detailed description of Android graphics
components.</p>

<p><a
href="http://developer.android.com/reference/android/graphics/Canvas.html">android.graphics.Canvas</a>
is a 2D graphics API and the most popular graphics API among developers.
Canvas operations draw all the stock and custom <a
href="http://developer.android.com/reference/android/view/View.html">android.view.View</a>s
in Android. Hardware acceleration for Canvas APIs is accomplished
with a drawing library called OpenGLRenderer that translates Canvas operations
to OpenGL operations so they can execute on the GPU.</p>

<p>Beginning in Android 4.0, hardware-accelerated Canvas is enabled by default.
Consequently, a hardware GPU that supports OpenGL ES 2.0 is mandatory for
Android 4.0 and later devices. See the
<a href="https://developer.android.com/guide/topics/graphics/hardware-accel.html">Hardware
Acceleration guide</a> for an explanation of how the hardware-accelerated
drawing path works and the differences in its behavior from that of the
software drawing path.</p>
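<p>As a minimal sketch of the Canvas path, the custom view below (the class
name and drawing content are illustrative only) overrides onDraw() and issues
ordinary Canvas calls; on Android 4.0 and later, those calls travel through the
hardware-accelerated drawing path described above.</p>

<pre>
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;

// Hypothetical example view; any View subclass draws the same way.
public class CircleView extends View {
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public CircleView(Context context) {
        super(context);
        paint.setColor(Color.BLUE);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // With hardware acceleration enabled, these Canvas operations are
        // recorded and translated into OpenGL operations that run on the GPU.
        float cx = getWidth() / 2f;
        float cy = getHeight() / 2f;
        canvas.drawColor(Color.WHITE);
        canvas.drawCircle(cx, cy, Math.min(cx, cy) * 0.8f, paint);
    }
}
</pre>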
<p>In addition to Canvas, the other main way that developers render graphics is
by using OpenGL ES to directly render to a surface. Android provides OpenGL ES
interfaces in the
<a href="http://developer.android.com/reference/android/opengl/package-summary.html">android.opengl</a>
package that developers can use to call into their GL implementations with the
SDK or with native APIs provided in the <a
href="https://developer.android.com/tools/sdk/ndk/index.html">Android
NDK</a>.</p>
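<p>A minimal sketch of the SDK-side OpenGL ES path is shown below (the activity
name and clear color are illustrative): a GLSurfaceView requests an OpenGL ES
2.0 context and renders each frame directly to the view's surface.</p>

<pre>
import android.app.Activity;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.os.Bundle;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

// Hypothetical example activity that clears the screen with GL every frame.
public class GlClearActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        GLSurfaceView view = new GLSurfaceView(this);
        view.setEGLContextClientVersion(2);  // request an OpenGL ES 2.0 context
        view.setRenderer(new GLSurfaceView.Renderer() {
            @Override
            public void onSurfaceCreated(GL10 gl, EGLConfig config) {}

            @Override
            public void onSurfaceChanged(GL10 gl, int width, int height) {
                GLES20.glViewport(0, 0, width, height);
            }

            @Override
            public void onDrawFrame(GL10 gl) {
                // Each completed frame is queued to the view's surface, where
                // SurfaceFlinger picks it up for composition.
                GLES20.glClearColor(0.0f, 0.2f, 0.4f, 1.0f);
                GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
            }
        });
        setContentView(view);
    }
}
</pre>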
<p>Android implementers can test OpenGL ES functionality using the
<a href="testing.html">drawElements Quality Program</a>, also known as deqp.</p>

<h2 id="android_graphics_components">Android graphics components</h2>

<p>No matter what rendering API developers use, everything is rendered onto a
"surface." The surface represents the producer side of a buffer queue that is
often consumed by SurfaceFlinger. Every window that is created on the Android
platform is backed by a surface. All of the visible surfaces rendered are
composited onto the display by SurfaceFlinger.</p>

<p>The following diagram shows how the key components work together:</p>

<img src="images/ape_fwk_graphics.png" alt="image-rendering components">

<p class="img-caption"><strong>Figure 1.</strong> How surfaces are rendered</p>

<p>The main components are described below:</p>

<h3 id="image_stream_producers">Image Stream Producers</h3>

<p>An image stream producer can be anything that produces graphic buffers for
consumption. Examples include OpenGL ES, Canvas 2D, and mediaserver video
decoders.</p>

<h3 id="image_stream_consumers">Image Stream Consumers</h3>

<p>The most common consumer of image streams is SurfaceFlinger, the system
service that consumes the currently visible surfaces and composites them onto
the display using information provided by the Window Manager. SurfaceFlinger is
the only service that can modify the content of the display. SurfaceFlinger
uses OpenGL and the Hardware Composer to compose a group of surfaces.</p>

<p>Other OpenGL ES apps can consume image streams as well, such as the camera
app consuming a camera preview image stream. Non-GL applications can be
consumers too, for example the ImageReader class.</p>
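<p>As a sketch of a non-GL consumer (the class name and buffer parameters are
illustrative), an app can create an ImageReader, hand its Surface to any image
stream producer, and then acquire and release the resulting buffers:</p>

<pre>
import android.graphics.PixelFormat;
import android.media.Image;
import android.media.ImageReader;
import android.os.Handler;
import android.view.Surface;

// Hypothetical consumer that receives buffers from a producer such as the
// camera or an OpenGL ES renderer.
public class ImageReaderConsumer {
    private ImageReader reader;

    // Returns the producer end of the underlying buffer queue.
    public Surface start(Handler handler) {
        reader = ImageReader.newInstance(1280, 720, PixelFormat.RGBA_8888,
                /* maxImages= */ 3);
        reader.setOnImageAvailableListener(r -> {
            Image image = r.acquireLatestImage();
            if (image != null) {
                // Inspect image.getPlanes() as needed, then close the image to
                // return its buffer to the queue for reuse by the producer.
                image.close();
            }
        }, handler);
        return reader.getSurface();
    }
}
</pre>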
<h3 id="window_manager">Window Manager</h3>

<p>The Android system service that controls a window, which is a container for
views. A window is always backed by a surface. This service oversees
lifecycles, input and focus events, screen orientation, transitions,
animations, position, transforms, z-order, and many other aspects of a window.
The Window Manager sends all of the window metadata to SurfaceFlinger so
SurfaceFlinger can use that data to composite surfaces on the display.</p>

<h3 id="hardware_composer">Hardware Composer</h3>

<p>The hardware abstraction for the display subsystem. SurfaceFlinger can
delegate certain composition work to the Hardware Composer to offload work from
OpenGL and the GPU. SurfaceFlinger acts as just another OpenGL ES client, so
when SurfaceFlinger is actively compositing one or two buffers into a third,
for instance, it is using OpenGL ES. Compositing this way uses less power than
having the GPU conduct all of the computation.</p>

<p>The <a href="{@docRoot}devices/graphics/architecture.html#hwcomposer">Hardware
Composer HAL</a> conducts the other half of the work and is the central point
for all Android graphics rendering. The Hardware Composer must support events,
one of which is VSYNC (another is hotplug for plug-and-play HDMI support).</p>

<h3 id="gralloc">Gralloc</h3>

<p>The graphics memory allocator (Gralloc) is needed to allocate memory
requested by image producers. For details, see <a
href="{@docRoot}devices/graphics/architecture.html#gralloc_HAL">Gralloc HAL</a>.
</p>

<h2 id="data_flow">Data flow</h2>

<p>See the following diagram for a depiction of the Android graphics
pipeline:</p>

<img src="images/graphics_pipeline.png" alt="graphics data flow">

<p class="img-caption"><strong>Figure 2.</strong> Graphic data flow through
Android</p>

<p>The objects on the left are renderers producing graphics buffers, such as
the home screen, status bar, and system UI. SurfaceFlinger is the compositor
and Hardware Composer is the composer.</p>

<h3 id="bufferqueue">BufferQueue</h3>

<p>BufferQueues provide the glue between the Android graphics components. These
are a pair of queues that mediate the constant cycle of buffers from the
producer to the consumer. Once the producers hand off their buffers,
SurfaceFlinger is responsible for compositing everything onto the display.</p>

<p>See the following diagram for the BufferQueue communication process.</p>

<img src="images/bufferqueue.png"
alt="BufferQueue communication process">

<p class="img-caption"><strong>Figure 3.</strong> BufferQueue communication
process</p>

<p>BufferQueue contains the logic that ties image stream producers and image
stream consumers together. Some examples of image producers are the camera
previews produced by the camera HAL or OpenGL ES games. Some examples of image
consumers are SurfaceFlinger or another app that displays an OpenGL ES stream,
such as the camera app displaying the camera viewfinder.</p>

<p>BufferQueue is a data structure that combines a buffer pool with a queue and
uses Binder IPC to pass buffers between processes. The producer interface, or
what you pass to somebody who wants to generate graphic buffers, is
IGraphicBufferProducer (part of <a
href="http://developer.android.com/reference/android/graphics/SurfaceTexture.html">SurfaceTexture</a>).
BufferQueue is often used to render to a Surface and consume with a GL
Consumer, among other tasks.</p>

<p>BufferQueue can operate in three different modes:</p>

<p><em>Synchronous-like mode</em> - By default, BufferQueue operates in a
synchronous-like mode, in which every buffer that comes in from the producer
goes out at the consumer. No buffer is ever discarded in this mode. If the
producer is too fast and creates buffers faster than they are being drained, it
blocks and waits for free buffers.</p>

<p><em>Non-blocking mode</em> - BufferQueue can also operate in a non-blocking
mode where it generates an error rather than waiting for a buffer in those
cases. No buffer is ever discarded in this mode either. This is useful for
avoiding potential deadlocks in application software that may not understand
the complex dependencies of the graphics framework.</p>

<p><em>Discard mode</em> - Finally, BufferQueue may be configured to discard
old buffers rather than generate errors or wait. For instance, if conducting GL
rendering to a texture view and drawing as quickly as possible, buffers must be
dropped.</p>

<p>To conduct most of this work, SurfaceFlinger acts as just another OpenGL ES
client. So when SurfaceFlinger is actively compositing one or two buffers into
a third, for instance, it is using OpenGL ES.</p>

<p>The Hardware Composer HAL conducts the other half of the work. This HAL acts
as the central point for all Android graphics rendering.</p>

<h3 id="synchronization_framework">Synchronization framework</h3>

<p>Since Android graphics offer no explicit parallelism, vendors have long
implemented their own implicit synchronization within their own drivers. This
is no longer required with the Android graphics synchronization framework. See
the
<a href="{@docRoot}devices/graphics/implement.html#explicit_synchronization">Explicit
synchronization</a> section for implementation instructions.</p>

<p>The synchronization framework explicitly describes dependencies between
different asynchronous operations in the system. The framework provides a
simple API that lets components signal when buffers are released. It also
allows synchronization primitives to be passed between drivers from the kernel
to userspace and between userspace processes themselves.</p>

<p>For example, an application may queue up work to be carried out on the GPU.
The GPU then starts drawing that image. Although the image hasn't been drawn
into memory yet, the buffer pointer can still be passed to the window
compositor along with a fence that indicates when the GPU work will be
finished. The window compositor may then start processing ahead of time and
hand off the work to the display controller. In this manner, the CPU work can
be done ahead of time. Once the GPU finishes, the display controller can
immediately display the image.</p>

<p>The synchronization framework also allows implementers to leverage
synchronization resources in their own hardware components. Finally, the
framework provides visibility into the graphics pipeline to aid in
debugging.</p>
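<p>The kernel-level fences used by the synchronization framework are not
exposed directly to SDK code, but OpenGL ES 3.0 sync objects, available through
android.opengl.GLES30, follow the same pattern of fence-based signaling and can
give a feel for how it works. The sketch below is a loose, app-level analogy
only (the method names are illustrative, and a current OpenGL ES 3.0 context is
assumed): it inserts a fence after queuing GPU work and later waits on it.</p>

<pre>
import android.opengl.GLES30;

// Hypothetical helpers illustrating the fence pattern with GL ES 3.0 sync
// objects; both must be called on a thread with a current ES 3.0 context.
public class FrameFence {

    // Insert a fence into the GL command stream after submitting a frame's
    // rendering commands. The fence signals once the GPU has executed
    // everything queued before it.
    public static long insertFence() {
        return GLES30.glFenceSync(GLES30.GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    }

    // Block the calling thread (up to timeoutNs) until the GPU work guarded
    // by the fence completes, then release the sync object.
    public static boolean awaitFence(long sync, long timeoutNs) {
        int status = GLES30.glClientWaitSync(
                sync, GLES30.GL_SYNC_FLUSH_COMMANDS_BIT, timeoutNs);
        GLES30.glDeleteSync(sync);
        return status == GLES30.GL_ALREADY_SIGNALED
                || status == GLES30.GL_CONDITION_SATISFIED;
    }
}
</pre>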