Lines Matching +full:hardware +full:- +full:wise
28 #include <linux/dma-fence.h>
36 * DRM_SCHED_FENCE_DONT_PIPELINE - Prevent dependency pipelining
45 * DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT - A fence deadline hint has been set
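Both flags live in the dma_fence flags word of the scheduler fence (they are defined relative to DMA_FENCE_FLAG_USER_BITS in this header). A minimal sketch of how a driver might opt a fence out of dependency pipelining; my_driver_block_pipelining() is a hypothetical helper, not from the header:

```c
#include <linux/dma-fence.h>
#include <drm/gpu_scheduler.h>

/*
 * Hypothetical helper: force the scheduler to wait for this fence to
 * fully signal instead of pipelining dependent jobs onto the same
 * hardware ring.
 */
static void my_driver_block_pipelining(struct dma_fence *fence)
{
	set_bit(DRM_SCHED_FENCE_DONT_PIPELINE, &fence->flags);
}
```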
81 * struct drm_sched_entity - A wrapper around a job queue (typically
84 * Entities will emit jobs in order to their corresponding hardware
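A minimal entity-setup sketch, assuming a single already-initialized scheduler; my_driver_entity_setup() is hypothetical, while drm_sched_entity_init() with a priority, a scheduler list, and an optional guilty pointer is the real entry point:

```c
#include <linux/kernel.h>
#include <drm/gpu_scheduler.h>

static int my_driver_entity_setup(struct drm_sched_entity *entity,
				  struct drm_gpu_scheduler *sched)
{
	struct drm_gpu_scheduler *sched_list[] = { sched };

	/*
	 * Bind the entity to one scheduler (one hardware ring) at normal
	 * priority; jobs pushed through this entity are emitted in order.
	 */
	return drm_sched_entity_init(entity, DRM_SCHED_PRIORITY_NORMAL,
				     sched_list, ARRAY_SIZE(sched_list),
				     NULL /* no shared guilty counter */);
}
```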
244 * struct drm_sched_rq - queue of entities to be scheduled.
265 * struct drm_sched_fence - fences corresponding to the scheduling of a job.
294 * when scheduling the job on hardware. We signal the
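A drm_sched_fence embeds two dma_fences: "scheduled", which signals when the job is picked up for the hardware queue, and "finished", which signals when the hardware is done with it. A sketch of waiting for full completion; my_wait_for_job() is a hypothetical helper:

```c
#include <linux/dma-fence.h>
#include <drm/gpu_scheduler.h>

static int my_wait_for_job(struct drm_sched_fence *s_fence)
{
	/* Interruptible wait until the hardware completed the job. */
	long ret = dma_fence_wait(&s_fence->finished, true);

	return ret < 0 ? ret : 0;
}
```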
316 * struct drm_sched_job - A job to be run by an entity.
379 return s_job && atomic_inc_return(&s_job->karma) > threshold; in drm_sched_invalidate_job()
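The karma counter checked above drives hang detection: a job's karma is bumped when it is implicated in a timeout, and once it exceeds the scheduler's hang_limit the job is considered guilty. Job submission itself follows an init/arm/push pattern. A hedged sketch, assuming the three-argument drm_sched_job_init() (newer kernels add a credits argument) and the one-argument drm_sched_entity_push_job() (older kernels also took the entity); struct my_job and my_driver_submit() are hypothetical:

```c
#include <linux/err.h>
#include <linux/dma-fence.h>
#include <drm/gpu_scheduler.h>

/* Hypothetical driver job embedding the scheduler job. */
struct my_job {
	struct drm_sched_job base;
	/* driver-specific payload ... */
};

static struct dma_fence *my_driver_submit(struct my_job *job,
					  struct drm_sched_entity *entity)
{
	struct dma_fence *fence;
	int ret;

	ret = drm_sched_job_init(&job->base, entity, NULL /* owner */);
	if (ret)
		return ERR_PTR(ret);

	drm_sched_job_arm(&job->base);

	/* Hold the "finished" fence for the caller before handing off. */
	fence = dma_fence_get(&job->base.s_fence->finished);

	drm_sched_entity_push_job(&job->base);
	return fence;
}
```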
389 * struct drm_sched_backend_ops - Define the backend operations
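The long-standing core hooks are run_job, timedout_job, and free_job (the exact set varies by kernel version). A rough wiring sketch; my_ring_emit() and my_job_cleanup() are hypothetical driver helpers, and my_timedout_job is filled in by the reset sketch after the workflow steps below:

```c
#include <drm/gpu_scheduler.h>

static enum drm_gpu_sched_stat my_timedout_job(struct drm_sched_job *sched_job);

static struct dma_fence *my_run_job(struct drm_sched_job *sched_job)
{
	/* Emit the job to the hardware ring, return its hardware fence. */
	return my_ring_emit(sched_job);
}

static void my_free_job(struct drm_sched_job *sched_job)
{
	drm_sched_job_cleanup(sched_job);
	my_job_cleanup(sched_job);
}

static const struct drm_sched_backend_ops my_sched_ops = {
	.run_job      = my_run_job,
	.timedout_job = my_timedout_job,
	.free_job     = my_free_job,
};
```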
427 * nothing is queued while we reset the hardware queue
428 * 2. Try to gracefully stop non-faulty jobs (optional)
429 * 3. Issue a GPU reset (driver-specific)
430 * 4. Re-submit jobs using drm_sched_resubmit_jobs()
434 * Note that some GPUs have distinct hardware queues but need to reset
444 * 2. Try to gracefully stop non-faulty jobs on all queues impacted by
446 * 3. Issue a GPU reset on all faulty queues (driver-specific)
447 * 4. Re-submit jobs on all schedulers impacted by the reset using
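A hedged sketch of a single-queue timedout_job following this workflow; drm_sched_stop(), drm_sched_resubmit_jobs(), and drm_sched_start() are the scheduler's real entry points for steps 1, 4, and the final restart, while my_gpu_reset() stands in for the driver-specific steps 2 and 3. The drm_sched_start() signature and the timedout_job return type have varied across kernel versions:

```c
static enum drm_gpu_sched_stat
my_timedout_job(struct drm_sched_job *sched_job)
{
	struct drm_gpu_scheduler *sched = sched_job->sched;

	/* 1. Park the scheduler: nothing new reaches the hardware queue. */
	drm_sched_stop(sched, sched_job);

	/* 2. + 3. Driver-specific: drain non-faulty jobs, reset the GPU. */
	my_gpu_reset(sched);

	/* 4. Put the pending jobs back onto the hardware queue. */
	drm_sched_resubmit_jobs(sched);

	/* 5. Restart the scheduler with full recovery. */
	drm_sched_start(sched, true);

	return DRM_GPU_SCHED_STAT_NOMINAL;
}
```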
468 * struct drm_gpu_scheduler - scheduler instance-specific data
471 * @hw_submission_limit: the max size of the hardware queue.
474 * @sched_rq: priority wise array of run queues.
480 * @hw_rq_count: the number of jobs currently in the hardware queue.
496 * One scheduler is implemented for each hardware ring.
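Since one drm_gpu_scheduler is created per hardware ring, drm_sched_init() is typically called once per ring at driver load, fixing the hw_submission_limit and hang_limit fields listed above. A sketch using the 6.1-era signature (it has changed several times across kernel versions); MY_HW_QUEUE_DEPTH and my_driver_sched_init() are hypothetical:

```c
#include <linux/jiffies.h>
#include <drm/gpu_scheduler.h>

#define MY_HW_QUEUE_DEPTH 64	/* hypothetical hardware ring depth */

static int my_driver_sched_init(struct drm_gpu_scheduler *sched,
				struct device *dev)
{
	return drm_sched_init(sched, &my_sched_ops,
			      MY_HW_QUEUE_DEPTH,	/* hw_submission_limit */
			      3,			/* hang_limit (karma threshold) */
			      msecs_to_jiffies(500),	/* job timeout */
			      NULL,			/* timeout_wq: system wq */
			      NULL,			/* score */
			      "my-ring", dev);
}
```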