Buffer Sharing and Synchronization
==================================
The dma-buf subsystem provides the framework for sharing buffers for
hardware (DMA) access across multiple device drivers and subsystems, and
for synchronizing asynchronous hardware access.

This is used, for example, by drm "prime" multi-GPU support, but is of
course not limited to GPU use cases.
The three main components of this are: (1) dma-buf, representing a
sg_table and exposed to userspace as a file descriptor to allow passing
between devices, (2) fence, which provides a mechanism to signal when
one device has finished access, and (3) reservation, which manages the
shared or exclusive fence(s) associated with the buffer.
Shared DMA Buffers
------------------
This document serves as a guide to device-driver writers on what the
dma-buf buffer sharing API is and how to use it for exporting and using
shared buffers.
Any device driver which wishes to be a part of DMA buffer sharing can do so as
either the 'exporter' of buffers, or the 'user' or 'importer' of buffers.

Say a driver A wants to use buffers created by driver B, then we call B the
exporter, and A the buffer-user/importer.
The exporter

- implements and manages operations in :c:type:`struct dma_buf_ops
  <dma_buf_ops>` for the buffer,
- allows other users to share the buffer by using dma_buf sharing APIs,
- manages the details of buffer allocation, wrapped in a :c:type:`struct
  dma_buf <dma_buf>`,
- decides about the actual backing storage where this allocation happens,
- and takes care of any migration of scatterlist - for all (shared) users of
  this buffer.
The buffer-user

- is one of (many) sharing users of the buffer.
- doesn't need to worry about how the buffer is allocated, or where.
- and needs a mechanism to get access to the scatterlist that makes up this
  buffer in memory, mapped into its own address space, so it can access the
  same area of memory. This interface is provided by :c:type:`struct
  dma_buf_attachment <dma_buf_attachment>`.
Any exporters or users of the dma-buf buffer sharing framework must have a
'select DMA_SHARED_BUFFER' in their respective Kconfigs.
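As an importer-side sketch, the typical attach/map flow looks roughly as
follows. The function name my_device_import() and its bare-bones error handling
are illustrative only, not part of any API; the dma_buf_*() calls are the real
interfaces documented in the kernel-doc sections below. This is kernel code and
is not compilable standalone.

```
/* Hypothetical importer sketch - only the dma_buf_*() calls are real
 * kernel APIs; everything else is illustrative. Real drivers may also
 * need to support dynamic attachments and resource-revocation. */
#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>

static int my_device_import(struct device *dev, int fd)
{
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;
	int ret = 0;

	dmabuf = dma_buf_get(fd);		/* take a reference from the FD */
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	attach = dma_buf_attach(dmabuf, dev);	/* register this device as a user */
	if (IS_ERR(attach)) {
		ret = PTR_ERR(attach);
		goto err_put;
	}

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		ret = PTR_ERR(sgt);
		goto err_detach;
	}

	/* ... program the device with the addresses in sgt ... */

	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
err_detach:
	dma_buf_detach(dmabuf, attach);
err_put:
	dma_buf_put(dmabuf);
	return ret;
}
```

The exporter side mirrors this by filling in a struct dma_buf_ops and calling
dma_buf_export() followed by dma_buf_fd() to hand the buffer to userspace.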
Userspace Interface Notes
~~~~~~~~~~~~~~~~~~~~~~~~~

Mostly a DMA buffer file descriptor is simply an opaque object for userspace,
and hence the generic interface exposed is very minimal. There are a few things
to consider though:
- Since kernel 3.12 the dma-buf FD supports the llseek system call, but only
  with offset=0 and whence=SEEK_END|SEEK_SET. SEEK_SET is supported to allow
  the usual size discovery pattern size = SEEK_END(0); SEEK_SET(0). Every other
  llseek operation will report -EINVAL.

  If llseek on dma-buf FDs isn't supported the kernel will report -ESPIPE for
  all cases. Userspace can use this to detect support for discovering the
  dma-buf size using llseek.
- In order to avoid fd leaks on exec, the FD_CLOEXEC flag must be set
  on the file descriptor. This is not just a resource leak, but a
  potential security hole. It could give the newly exec'd application
  access to buffers, via the leaked fd, to which it should otherwise
  not be allowed access.

  The problem with doing this via a separate fcntl() call, versus doing it
  atomically when the fd is created, is that this is inherently racy in a
  multi-threaded app[3]. The issue is made worse when it is library code
  opening/creating the file descriptor, as the application may not even be
  aware of the fds.

  To avoid this problem, userspace must have a way to request that the O_CLOEXEC
  flag be set when the dma-buf fd is created. So any API provided by the
  exporting driver to create a dmabuf fd must provide a way to let
  userspace control setting of the O_CLOEXEC flag passed in to dma_buf_fd().
- Memory mapping the contents of the DMA buffer is also supported. See the
  discussion below on `CPU Access to DMA Buffer Objects`_ for the full details.

- The DMA buffer FD is also pollable, see `Implicit Fence Poll Support`_ below
  for details.
Basic Operation and Device DMA Access
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/dma-buf/dma-buf.c
   :doc: dma buf device access
CPU Access to DMA Buffer Objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/dma-buf/dma-buf.c
   :doc: cpu access
Implicit Fence Poll Support
~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. kernel-doc:: drivers/dma-buf/dma-buf.c
   :doc: implicit fence polling
DMA-BUF statistics
~~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/dma-buf/dma-buf-sysfs-stats.c
   :doc: overview
Kernel Functions and Structures Reference
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. kernel-doc:: drivers/dma-buf/dma-buf.c
   :export:

.. kernel-doc:: include/linux/dma-buf.h
   :internal:
Reservation Objects
-------------------
.. kernel-doc:: drivers/dma-buf/dma-resv.c
   :doc: Reservation Object Overview

.. kernel-doc:: drivers/dma-buf/dma-resv.c
   :export:

.. kernel-doc:: include/linux/dma-resv.h
   :internal:
DMA Fences
----------

.. kernel-doc:: drivers/dma-buf/dma-fence.c
   :doc: DMA fences overview
DMA Fence Cross-Driver Contract
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/dma-buf/dma-fence.c
   :doc: fence cross-driver contract
DMA Fence Signalling Annotations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. kernel-doc:: drivers/dma-buf/dma-fence.c
   :doc: fence signalling annotation
DMA Fences Functions Reference
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. kernel-doc:: drivers/dma-buf/dma-fence.c
   :export:

.. kernel-doc:: include/linux/dma-fence.h
   :internal:
Seqno Hardware Fences
~~~~~~~~~~~~~~~~~~~~~

.. kernel-doc:: include/linux/seqno-fence.h
   :internal:
DMA Fence Array
~~~~~~~~~~~~~~~

.. kernel-doc:: drivers/dma-buf/dma-fence-array.c
   :export:

.. kernel-doc:: include/linux/dma-fence-array.h
   :internal:
DMA Fence uABI/Sync File
~~~~~~~~~~~~~~~~~~~~~~~~

.. kernel-doc:: drivers/dma-buf/sync_file.c
   :doc: sync files

.. kernel-doc:: include/linux/sync_file.h
   :internal:
Indefinite DMA Fences
~~~~~~~~~~~~~~~~~~~~~

At various times struct dma_fence with an indefinite time until dma_fence_wait()
finishes have been proposed. Examples include:

* Future fences, used in HWC1 to signal when a buffer isn't used by the display
  any longer, and created with the screen update that makes the buffer visible.
  The time this fence completes is entirely under userspace's control.

* Proxy fences, proposed to handle &drm_syncobj for which the fence has not yet
  been set. Used to asynchronously delay command submission.
* Userspace fences or gpu futexes, fine-grained locking within a command buffer
  that userspace uses for synchronization across engines or with the CPU, which
  are then imported as a DMA fence for integration into existing winsys
  protocols.
* Long-running compute command buffers, while still using traditional end of
  batch DMA fences for memory management instead of context preemption DMA
  fences which get reattached when the compute job is rescheduled.
Common to all these schemes is that userspace controls the dependencies of these
fences and controls when they fire. Mixing indefinite fences with normal
in-kernel DMA fences does not work, even when a fallback timeout is included to
protect against malicious userspace:
* Only the kernel knows about all DMA fence dependencies, userspace is not aware
  of dependencies injected due to memory management or scheduler decisions.
* Only userspace knows about all dependencies in indefinite fences and when
  exactly they will complete, the kernel has no visibility.
Furthermore the kernel has to be able to hold up userspace command submission
for memory management needs, which means we must support indefinite fences being
dependent upon DMA fences. If the kernel also supports indefinite fences in the
kernel like a DMA fence, as any of the above proposals would, there is the
potential for deadlocks.
.. kernel-render:: DOT
   :alt: Indefinite Fencing Dependency Cycle
   :caption: Indefinite Fencing Dependency Cycle

   digraph "Fencing Cycle" {
      node [shape=box bgcolor=grey style=filled]
      kernel [label="Kernel DMA Fences"]
      userspace [label="userspace controlled fences"]
      kernel -> userspace [label="memory management"]
      userspace -> kernel [label="Future fence, fence proxy, ..."]
   }
This means the kernel might accidentally create deadlocks through memory
management dependencies which userspace is unaware of, which randomly hang
workloads until the timeout kicks in - workloads which, from userspace's
perspective, do not contain a deadlock. In such a mixed fencing
architecture there is no single entity with knowledge of all dependencies,
therefore preventing such deadlocks from within the kernel is not possible.

The only solution to avoid dependency loops is by not allowing indefinite
fences in the kernel. This means:
* No future fences, proxy fences or userspace fences imported as DMA fences,
  with or without a timeout.
* No DMA fences that signal end of batchbuffer for command submission where
  userspace is allowed to use userspace fencing or long running compute
  workloads. This also means no implicit fencing for shared buffers in these
  cases.