7 This document attempts to describe what's needed to get VM_BIND locking right,
42 of the backing store resident and making sure the gpu_vma's
58 dma_fence representing the GPU command's activity with all affected
60 it's worth mentioning that an exec function may also be the
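The excerpted lines above (42-60) come from the description of the exec function: revalidating evicted backing store, rebinding, and annotating all affected reservation objects with the job's dma_fence. A hedged sketch in this document's pseudocode style; the for_each_* and validate/rebind helpers are illustrative, not a real API::

   dma_resv_lock(gpu_vm->resv, NULL);

   // Make the backing store of evicted local objects resident again
   // and queue their gpu_vmas for rebinding.
   for_each_gpu_vm_bo_on_evict_list(gpu_vm, &gpu_vm_bo)
           validate_gem_bo(&gpu_vm_bo->obj);

   for_each_gpu_vma_on_rebind_list(gpu_vm, &gpu_vma)
           rebind_gpu_vma(&gpu_vma);

   job_dma_fence = gpu_submit(&gpu_job);

   // Annotate all affected reservation objects with the dma_fence
   // representing the GPU command's activity.
   add_dma_fence(job_dma_fence, gpu_vm->resv);
   dma_resv_unlock(gpu_vm->resv);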
64 single VM. Local GEM objects share the gpu_vm's dma_resv.
72 One of the benefits of VM_BIND is that local GEM objects share the gpu_vm's
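As a sketch of what the line above refers to: at creation time a driver can point a local GEM object's reservation pointer at the gpu_vm's, so a single dma_resv_lock() covers the whole local VM. The names below are illustrative and driver-specific, not a fixed API::

   struct gem_object *local_obj_create(struct gpu_vm *gpu_vm, size_t size)
   {
           struct gem_object *obj = gem_object_alloc(size);

           // Local GEM objects share the gpu_vm's dma_resv;
           // locking gpu_vm->resv hence locks all of them at once.
           obj->resv = gpu_vm->resv;
           return obj;
   }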
79 * The ``gpu_vm->lock`` (optionally an rwsem). Protects the gpu_vm's
81 gpu_vm's list of userptr gpu_vmas. With a CPU mm analogy this would
86 userptr gpu_vma on the gpu_vm's userptr list, and in write mode during mmu
96 * The ``gpu_vm->resv`` lock. Protects the gpu_vm's list of gpu_vmas needing
97 rebinding, as well as the residency state of all the gpu_vm's local
99 Furthermore, it typically protects the gpu_vm's list of evicted and
104 * The ``gem_object->gpuva_lock``. This lock protects the GEM object's
106 object's dma_resv, but some drivers protect this list differently,
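The lock descriptions excerpted in lines 79-106 could be summarized, purely as an illustrative sketch, by the following annotated structure (field names hypothetical)::

   struct gpu_vm {
           struct rw_semaphore lock;    // gpu_vm->lock: gpu_vma metadata and
                                        // the list of userptr gpu_vmas.
           struct dma_resv *resv;       // gpu_vm->resv: rebind list, local
                                        // object residency, evicted and
                                        // external object lists.
           struct list_head userptr_list;
           struct list_head rebind_list;
           struct list_head evict_list;
           struct list_head extobj_list;
   };

   // The gem_object->gpuva_lock (often the object's dma_resv itself)
   // protects the object's list of gpu_vm_bos.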
120 The GEM object's list of gpu_vm_bos, and the gpu_vm_bo's list of gpu_vmas
122 same as the GEM object's dma_resv, but if the driver
136 gpu_vm_bo. When iterating over the GEM object's list of gpu_vm_bos and
137 over the gpu_vm_bo's list of gpu_vmas, the ``gem_object->gpuva_lock`` must
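Iterating those two lists, as the lines above require, might look like this, assuming the object's dma_resv doubles as the ``gem_object->gpuva_lock``; the iteration helpers are hypothetical::

   dma_resv_lock(obj->resv, NULL);

   for_each_gpu_vm_bo_of_obj(obj, &gpu_vm_bo)
           for_each_gpu_vma_of_gpu_vm_bo(gpu_vm_bo, &gpu_vma)
                   invalidate_gpu_vma(&gpu_vma);

   dma_resv_unlock(obj->resv);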
145 reference counting, cleanup of the gpu_vm's gpu_vmas must not be done from the
146 gpu_vm's destructor. Drivers typically implement a gpu_vm close
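A sketch of such a close function, which tears the gpu_vmas down while the needed locks can still be taken rather than from the final-put destructor; all names here are hypothetical::

   static void gpu_vm_close(struct gpu_vm *gpu_vm)
   {
           down_write(&gpu_vm->lock);
           dma_resv_lock(gpu_vm->resv, NULL);

           // Unbind and destroy all gpu_vmas here, while the locks
           // protecting the various lists can still be taken; the
           // destructor may run from a context where they can't.
           for_each_gpu_vma(gpu_vm, &gpu_vma)
                   gpu_vma_destroy(&gpu_vma);

           dma_resv_unlock(gpu_vm->resv);
           up_write(&gpu_vm->lock);
   }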
172 // The following list iteration needs the GEM object's
173 // dma_resv to be held (it protects the gpu_vm_bo's list of
174 // gpu_vmas, but since local GEM objects share the gpu_vm's
217 Note that since the object is local to the gpu_vm, it will share the gpu_vm's
219 The gpu_vm_bos marked for eviction are put on the gpu_vm's evict list,
222 the gpu_vm's dma_resv protecting the gpu_vm's evict list is locked.
230 code holding the object's dma_resv while revalidating will ensure a
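Lines 217-230 excerpt the local-object eviction discussion; the matching revalidation step, with the shared ``gpu_vm->resv`` held, might be sketched as follows (helper names hypothetical)::

   // gpu_vm->resv is held, which for local objects is also obj->resv,
   // so validating here is safe against concurrent eviction.
   for_each_gpu_vm_bo_on_evict_list(gpu_vm, &gpu_vm_bo) {
           validate_gem_bo(&gpu_vm_bo->obj);
           gpu_vm_bo_remove_from_evict_list(gpu_vm_bo);
           // Queue the gpu_vm_bo's gpu_vmas for rebinding.
           mark_gpu_vmas_for_rebind(gpu_vm_bo);
   }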
241 Since external buffer objects may be shared by multiple gpu_vms, they
245 per-gpu_vm list which is protected by the gpu_vm's dma_resv lock or
247 the gpu_vm's reservation object is locked, it is safe to traverse the
253 object is bound to need to be put on their gpu_vm's evict list.
256 the object's private dma_resv can be guaranteed to be held. If there
263 both the gpu_vm's dma_resv and the object's dma_resv are held, and the
264 gpu_vm_bo marked evicted can then be added to the gpu_vm's list of
266 object's dma_resv.
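Lines 253-266 describe deferring the evict-list insertion for a shared object until both reservation objects are held. A hedged sketch; the ``evicted`` flag, list names and iterators are hypothetical::

   // Eviction path: only the object's private dma_resv is held, so
   // just flag the gpu_vm_bos; don't touch the gpu_vm's evict list.
   for_each_gpu_vm_bo_of_obj(obj, &gpu_vm_bo)
           gpu_vm_bo->evicted = true;

   // Exec path: both the gpu_vm's dma_resv and (via drm_exec) the
   // external object's dma_resv are now held, so the move to the
   // evict list is safe.
   for_each_external_gpu_vm_bo(gpu_vm, &gpu_vm_bo)
           if (gpu_vm_bo->evicted)
                   list_move_tail(&gpu_vm_bo->evict_link,
                                  &gpu_vm->evict_list);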
323 Accessing the gpu_vm's lists without the dma_resv lock held
326 Some drivers will hold the gpu_vm's dma_resv lock when accessing the
327 gpu_vm's evict list and external objects list. However, there are
364 avoid accessing the gpu_vm's list outside of the dma_resv lock
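One way to meet the need hinted at in lines 323-364 is an inner spinlock plus a reference held across the section where the lock is dropped, keeping the iteration position on a local list. A minimal sketch with hypothetical names::

   LIST_HEAD(still_in_list);

   spin_lock(&gpu_vm->list_lock);
   while ((gpu_vm_bo = list_first_entry_or_null(&gpu_vm->evict_list,
                                                typeof(*gpu_vm_bo),
                                                evict_link))) {
           // Keep both the item and our iteration position alive
           // while the spinlock is dropped for the sleeping
           // validate step.
           list_move_tail(&gpu_vm_bo->evict_link, &still_in_list);
           gpu_vm_bo_get(gpu_vm_bo);
           spin_unlock(&gpu_vm->list_lock);

           validate_gem_bo(&gpu_vm_bo->obj);

           spin_lock(&gpu_vm->list_lock);
           gpu_vm_bo_put(gpu_vm_bo);
   }
   list_splice_tail(&still_in_list, &gpu_vm->evict_list);
   spin_unlock(&gpu_vm->list_lock);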
420 the gpu_vm's list of userptr gpu_vmas needs to be protected by an
448 // submission dma_fence is added to the gpu_vm's resv, from the POV
472 what we call the ``userptr_seqlock``. In reality, the gpu_vm's userptr
511 If the gpu_vm's list of userptr gpu_vmas becomes large, it's
513 exec function to check whether each userptr gpu_vma's saved
521 list, it's not possible to take any outer locks like the
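For the per-vma sequence number check discussed in lines 511-521, the exec path might do something like the following before publishing the job's dma_fence. The ``userptr_notifier_lock`` name follows the surrounding text and mmu_interval_read_retry() is the real mmu-notifier helper; the remaining field and iterator names are hypothetical::

   down_read(&gpu_vm->userptr_notifier_lock);

   for_each_userptr_gpu_vma(gpu_vm, &gpu_vma) {
           if (mmu_interval_read_retry(&gpu_vma->notifier,
                                       gpu_vma->notifier_seq)) {
                   // An invalidation ran since the pages were pinned;
                   // unwind and retry from the pinning step.
                   up_read(&gpu_vm->userptr_notifier_lock);
                   goto retry;
           }
   }

   // Safe to submit and publish the job's dma_fence here, before
   // releasing the notifier lock.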
534 gpu_vm_bo in turn needs to be added to the GEM object's
535 gpu_vm_bo list, and possibly to the gpu_vm's external object
540 the ``gpu_vm->resv`` or the GEM object's dma_resv, that the gpu_vmas
542 userptr gpu_vmas, it's similarly required that during vma destroy, the
568 the GEM object's dma_resv and ensuring that the dma_resv is held also
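The closing lines (534-568) note that gpu_vma creation and destruction must happen under the lock that protects the gpu_vm_bo lists. A hedged sketch of the destroy side, with hypothetical helper names::

   // The object's dma_resv doubles as the gem_object->gpuva_lock here.
   dma_resv_lock(obj->resv, NULL);

   gpu_vma_unlink(gpu_vma);          // Drop from the gpu_vm_bo's vma list.
   gpu_vm_bo_put(gpu_vma->vm_bo);    // May remove the gpu_vm_bo from the
                                     // object's gpu_vm_bo list.
   dma_resv_unlock(obj->resv);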