Lines Matching full:list
37 a VM. The GEM object maintains a list of gpu_vm_bos, where each gpu_vm_bo
38 maintains a list of gpu_vmas.
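A minimal sketch of how these lists might hang together; the type and member names (``gpu_vm_bo``, ``gpu_vmas``, ``list_gem`` and so on) are assumptions chosen for illustration, not any particular driver's API:

.. code-block:: C

   struct gem_object {
           struct dma_resv *resv;
           /* One gpu_vm_bo per (gem_object, gpu_vm) pair. */
           struct list_head gpu_vm_bos;
   };

   struct gpu_vm_bo {
           struct gpu_vm *vm;
           struct gem_object *obj;
           struct list_head list_gem;   /* Link on obj->gpu_vm_bos. */
           struct list_head gpu_vmas;   /* All gpu_vmas of this vm / object pair. */
   };

   struct gpu_vma {
           struct gpu_vm_bo *vm_bo;
           struct list_head vm_bo_link; /* Link on vm_bo->gpu_vmas. */
           u64 va_start, va_end;
   };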
50 gpu_vm or a GEM object. The dma_resv contains an array / list
81 gpu_vm's list of userptr gpu_vmas. With a CPU mm analogy this would
86 userptr gpu_vma on the gpu_vm's userptr list, and in write mode during mmu
96 * The ``gpu_vm->resv`` lock. Protects the gpu_vm's list of gpu_vmas needing
99 Furthermore, it typically protects the gpu_vm's list of evicted and
105 list of gpu_vm_bos. This is usually the same lock as the GEM
106 object's dma_resv, but some drivers protect this list differently,
108 * The ``gpu_vm list spinlocks``. With some implementations they are needed
110 list. For those implementations, the spinlocks are grabbed when the
120 The GEM object's list of gpu_vm_bos, and the gpu_vm_bo's list of gpu_vmas
136 gpu_vm_bo. When iterating over the GEM object's list of gpu_vm_bos and
137 over the gpu_vm_bo's list of gpu_vmas, the ``gem_object->gpuva_lock`` must
172 // The following list iteration needs the Gem object's
173 // dma_resv to be held (it protects the gpu_vm_bo's list of
192 The reason for having a separate gpu_vm rebind list is that there
219 The gpu_vm_bos marked for eviction are put on the gpu_vm's evict list,
222 the gpu_vm's dma_resv protecting the gpu_vm's evict list is locked.
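As a sketch, the per-gpu_vm bookkeeping involved here could look as follows; the member names, the ``evict_link`` field and the helper are placeholders for illustration only:

.. code-block:: C

   struct gpu_vm {
           struct dma_resv *resv;     /* Protects the evict list (among others). */
           spinlock_t list_lock;      /* Optional list spinlock, see below. */
           struct list_head evicted;  /* gpu_vm_bos needing revalidation. */
           struct list_head extobj;   /* gpu_vm_bos of external GEM objects. */
           struct list_head userptr;  /* userptr gpu_vmas. */
   };

   /* Putting a gpu_vm_bo on the evict list requires the gpu_vm's dma_resv. */
   static void gpu_vm_bo_mark_evicted(struct gpu_vm_bo *vm_bo)
   {
           struct gpu_vm *gpu_vm = vm_bo->vm;

           dma_resv_assert_held(gpu_vm->resv);
           list_move_tail(&vm_bo->evict_link, &gpu_vm->evicted);
   }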
245 per-gpu_vm list which is protected by the gpu_vm's dma_resv lock or
246 one of the :ref:`gpu_vm list spinlocks <Spinlock iteration>`. Once
248 external object list and lock the dma_resvs of all external
249 objects. However, if instead a list spinlock is used, a more elaborate
253 object is bound to need to be put on their gpu_vm's evict list.
261 corresponding gpu_vm evicted list needs to be traversed. For example, when
262 traversing the list of external objects and locking them. At that time,
264 gpu_vm_bo marked evicted, can then be added to the gpu_vm's list of
274 // External object list is protected by the gpu_vm->resv lock.
327 gpu_vm's evict list and external objects lists. However, there are
332 sleeping locks need to be taken for each list item while iterating
334 temporarily moved to a private list and the spinlock released
345 struct list_head *entry = list_first_entry_or_null(&gpu_vm->list, head);
360 list_splice_tail(&still_in_list, &gpu_vm->list);
364 avoid accessing the gpu_vm's list outside of the dma_resv lock
366 driver anticipates a large number of list items. For lists where the
367 anticipated number of list items is small, where list iteration doesn't
371 if this scheme is used, it is necessary to make sure this list
372 iteration is protected by an outer level lock or semaphore, since list
373 items are temporarily pulled off the list while iterating, and it is
374 also worth mentioning that the local list ``still_in_list`` should
376 thus possible that items can also be removed from the local list
377 concurrently with list iteration.
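For reference, a self-contained sketch of the restart pattern above, using the three-argument form of ``list_first_entry_or_null()`` from ``<linux/list.h>``; ``struct item``, ``process()`` and the ``gpu_vm->list`` / ``list_lock`` members are placeholders:

.. code-block:: C

   struct item {
           struct list_head head;
   };

   static void for_each_item_restartable(struct gpu_vm *gpu_vm)
   {
           struct list_head still_in_list;

           INIT_LIST_HEAD(&still_in_list);

           spin_lock(&gpu_vm->list_lock);
           for (;;) {
                   struct item *item =
                           list_first_entry_or_null(&gpu_vm->list, struct item, head);

                   if (!item)
                           break;

                   /* Keep the item reachable while the spinlock is dropped. */
                   list_move_tail(&item->head, &still_in_list);
                   spin_unlock(&gpu_vm->list_lock);

                   process(item);  /* May take sleeping locks. */

                   spin_lock(&gpu_vm->list_lock);
           }
           /* Restore the processed items to the original list. */
           list_splice_tail(&still_in_list, &gpu_vm->list);
           spin_unlock(&gpu_vm->list_lock);
   }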
420 the gpu_vm's list of userptr gpu_vmas needs to be protected by an
473 gpu_vma list is looped through, and the check is done for *all* of its
511 If the gpu_vm's list of userptr gpu_vmas becomes large, it's
515 *invalidated* userptr gpu_vmas on a separate gpu_vm list and
516 only check the gpu_vmas present on this list on each exec
517 function. This list will then lend itself very well to the spinlock
521 list, it's not possible to take any outer locks like the
523 ``gpu_vm->lock`` still needs to be taken while iterating to ensure the list is
526 If using an invalidated userptr list like this, the retry check in the
527 exec function trivially becomes a check for invalidated list empty.
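A sketch of that trivial check, assuming the invalidated list is protected by a spinlock; ``userptr_inval_lock`` and ``userptr_invalidated`` are placeholder names:

.. code-block:: C

   /* Called with the outer gpu_vm->lock held, just before submission. */
   static bool gpu_vm_userptr_rerun_needed(struct gpu_vm *gpu_vm)
   {
           bool rerun;

           spin_lock(&gpu_vm->userptr_inval_lock);
           rerun = !list_empty(&gpu_vm->userptr_invalidated);
           spin_unlock(&gpu_vm->userptr_inval_lock);

           return rerun;
   }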
535 gpu_vm_bo list, and possibly to the gpu_vm's external object
536 list. This is referred to as *linking* the gpu_vma, and typically
544 the invalidated userptr list as described in the previous section,