/kernel/linux/linux-5.10/net/netfilter/ipvs/
D | ip_vs_sched.c
    29 * IPVS scheduler list
    38 * Bind a service with a scheduler
    41 struct ip_vs_scheduler *scheduler) in ip_vs_bind_scheduler() argument
    45 if (scheduler->init_service) { in ip_vs_bind_scheduler()
    46 ret = scheduler->init_service(svc); in ip_vs_bind_scheduler()
    52 rcu_assign_pointer(svc->scheduler, scheduler); in ip_vs_bind_scheduler()
    58 * Unbind a service with its scheduler
    65 cur_sched = rcu_dereference_protected(svc->scheduler, 1); in ip_vs_unbind_scheduler()
    72 /* svc->scheduler can be set to NULL only by caller */ in ip_vs_unbind_scheduler()
    77 * Get scheduler in the scheduler list by name
    [all …]
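The matches above trace an RCU publish pattern: the scheduler's optional init_service() hook runs first, and the pointer becomes visible to lockless readers only after initialization succeeds. A minimal sketch of that ordering, reusing the names from the snippets (a simplification, not the verbatim kernel function):

    static int example_bind_scheduler(struct ip_vs_service *svc,
                                      struct ip_vs_scheduler *scheduler)
    {
            int ret;

            if (scheduler->init_service) {
                    ret = scheduler->init_service(svc);
                    if (ret)
                            return ret;     /* nothing is published on failure */
            }
            /* Publish last: any reader that sees the pointer also sees a
             * fully initialized scheduler. */
            rcu_assign_pointer(svc->scheduler, scheduler);
            return 0;
    }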
|
/kernel/linux/linux-5.10/drivers/gpu/drm/i915/gvt/
D | sched_policy.c
    134 struct intel_gvt_workload_scheduler *scheduler = &gvt->scheduler; in try_to_schedule_next_vgpu() local
    141 * let scheduler chose next_vgpu again by setting it to NULL. in try_to_schedule_next_vgpu()
    143 if (scheduler->next_vgpu == scheduler->current_vgpu) { in try_to_schedule_next_vgpu()
    144 scheduler->next_vgpu = NULL; in try_to_schedule_next_vgpu()
    152 scheduler->need_reschedule = true; in try_to_schedule_next_vgpu()
    156 if (scheduler->current_workload[engine->id]) in try_to_schedule_next_vgpu()
    161 vgpu_update_timeslice(scheduler->current_vgpu, cur_time); in try_to_schedule_next_vgpu()
    162 vgpu_data = scheduler->next_vgpu->sched_data; in try_to_schedule_next_vgpu()
    166 scheduler->current_vgpu = scheduler->next_vgpu; in try_to_schedule_next_vgpu()
    167 scheduler->next_vgpu = NULL; in try_to_schedule_next_vgpu()
    [all …]
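From these fragments, try_to_schedule_next_vgpu() appears to follow a three-step handover: bail out when next equals current, defer while any engine still runs a workload, then charge the outgoing vGPU's timeslice and swap the pointers. A hedged reconstruction with simplified control flow (the helper any_engine_has_current_workload() is hypothetical):

    static void vgpu_handover_sketch(struct intel_gvt_workload_scheduler *s,
                                     ktime_t cur_time)
    {
            if (s->next_vgpu == s->current_vgpu) {
                    s->next_vgpu = NULL;    /* let the policy choose again */
                    return;
            }
            s->need_reschedule = true;
            if (any_engine_has_current_workload(s))
                    return;                 /* retry once workloads retire */

            vgpu_update_timeslice(s->current_vgpu, cur_time);
            s->current_vgpu = s->next_vgpu;
            s->next_vgpu = NULL;
    }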
|
D | scheduler.c
    274 struct intel_gvt_workload_scheduler *scheduler = &gvt->scheduler; in shadow_context_status_change() local
    280 spin_lock_irqsave(&scheduler->mmio_context_lock, flags); in shadow_context_status_change()
    282 scheduler->engine_owner[ring_id]) { in shadow_context_status_change()
    284 intel_gvt_switch_mmio(scheduler->engine_owner[ring_id], in shadow_context_status_change()
    286 scheduler->engine_owner[ring_id] = NULL; in shadow_context_status_change()
    288 spin_unlock_irqrestore(&scheduler->mmio_context_lock, flags); in shadow_context_status_change()
    293 workload = scheduler->current_workload[ring_id]; in shadow_context_status_change()
    299 spin_lock_irqsave(&scheduler->mmio_context_lock, flags); in shadow_context_status_change()
    300 if (workload->vgpu != scheduler->engine_owner[ring_id]) { in shadow_context_status_change()
    302 intel_gvt_switch_mmio(scheduler->engine_owner[ring_id], in shadow_context_status_change()
    [all …]
|
/kernel/linux/linux-5.10/Documentation/block/
D | switching-sched.rst
    2 Switching Scheduler
    5 Each io queue has a set of io scheduler tunables associated with it. These
    6 tunables control how the io scheduler works. You can find these entries
    16 It is possible to change the IO scheduler for a given block device on
    20 To set a specific scheduler, simply do this::
    22 echo SCHEDNAME > /sys/block/DEV/queue/scheduler
    24 where SCHEDNAME is the name of a defined IO scheduler, and DEV is the
    28 a "cat /sys/block/DEV/queue/scheduler" - the list of valid names
    29 will be displayed, with the currently selected scheduler in brackets::
    31 # cat /sys/block/sda/queue/scheduler
    [all …]
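The same switch can be done programmatically; a hedged C equivalent of the echo/cat commands above (device sda and scheduler mq-deadline are assumptions, both must exist on the system, and writing requires root):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[256];
            int fd = open("/sys/block/sda/queue/scheduler", O_RDWR);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            ssize_t n = read(fd, buf, sizeof(buf) - 1);     /* like cat */
            if (n > 0) {
                    buf[n] = '\0';
                    printf("%s", buf);      /* current scheduler in [brackets] */
            }
            lseek(fd, 0, SEEK_SET);
            if (write(fd, "mq-deadline", strlen("mq-deadline")) < 0)
                    perror("write");        /* like echo SCHEDNAME > ... */
            close(fd);
            return 0;
    }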
|
D | deadline-iosched.rst
    2 Deadline IO scheduler tunables
    5 This little file attempts to document how the deadline io scheduler works.
    12 selecting an io scheduler on a per-device basis.
    19 The goal of the deadline io scheduler is to attempt to guarantee a start
    21 tunable. When a read request first enters the io scheduler, it is assigned
    49 When we have to move requests from the io scheduler queue to the block
    60 Sometimes it happens that a request enters the io scheduler that is contiguous
    69 rbtree front sector lookup when the io scheduler merge function is called.
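Reading between the matched lines: requests are dispatched in sector order for throughput, but every request also carries a deadline (entry time plus the read_expire tunable for reads), and an expired FIFO head preempts the sort order. An illustrative sketch, not kernel code (the types and helpers here are hypothetical):

    struct request *deadline_pick_sketch(struct dd_sched *dd, unsigned long now)
    {
            struct request *oldest = fifo_first(dd);    /* head of FIFO list */

            /* deadline was stamped when the request entered the scheduler */
            if (oldest && time_after(now, oldest->deadline))
                    return oldest;          /* expired: serve it immediately */

            return next_in_sector_order(dd);    /* otherwise stay sequential */
    }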
|
/kernel/linux/linux-5.10/drivers/gpu/drm/scheduler/
D | sched_main.c
    27 * The GPU scheduler provides entities which allow userspace to push jobs
    29 * The software queues have a priority among them. The scheduler selects the entities
    30 * from the run queue using a FIFO. The scheduler provides dependency handling
    32 * backend operations to the scheduler like submitting a job to hardware run queue,
    35 * The organisation of the scheduler is the following:
    37 * 1. Each hw run queue has one scheduler
    38 * 2. Each scheduler has multiple run queues with different priorities
    40 * 3. Each scheduler run queue has a queue of entities to schedule
    68 * @rq: scheduler run queue
    70 * Initializes a scheduler runqueue.
    [all …]
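Points 1-3 above imply a simple selection loop: scan the scheduler's run queues from highest to lowest priority and take the first ready entity, FIFO within each queue. A hedged sketch of that walk (the helpers highest_priority() and rq_pick_first_ready() are hypothetical, not this file's internals):

    static struct drm_sched_entity *
    select_entity_sketch(struct drm_gpu_scheduler *sched)
    {
            int prio;

            for (prio = highest_priority(sched); prio >= 0; prio--) {
                    struct drm_sched_entity *entity =
                            rq_pick_first_ready(&sched->sched_rq[prio]);

                    if (entity)
                            return entity;  /* FIFO within one run queue */
            }
            return NULL;    /* nothing ready at any priority */
    }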
|
D | sched_entity.c
    37 * drm_sched_entity_init - Init a context entity used by scheduler when
    40 * @entity: scheduler entity to init
    88 * @entity: scheduler entity to init
    107 * @entity: scheduler entity
    126 * @entity: scheduler entity
    144 * @entity: scheduler entity
    198 * Signal the scheduler finished fence when the entity in question is killed.
    258 * @entity: scheduler entity
    303 * @entity: scheduler entity
    329 * wake up scheduler
    [all …]
|
/kernel/linux/linux-5.10/Documentation/scheduler/
D | sched-design-CFS.rst
    2 CFS Scheduler
    9 CFS stands for "Completely Fair Scheduler," and is the new "desktop" process
    10 scheduler implemented by Ingo Molnar and merged in Linux 2.6.23. It is the
    11 replacement for the previous vanilla scheduler's SCHED_OTHER interactivity
    59 previous vanilla scheduler and RSDL/SD are affected).
    79 schedules (or a scheduler tick happens) the task's CPU usage is "accounted
    93 other HZ detail. Thus the CFS scheduler has no notion of "timeslices" in the
    94 way the previous scheduler had, and has no heuristics whatsoever. There is
    99 which can be used to tune the scheduler from "desktop" (i.e., low latencies) to
    101 for desktop workloads. SCHED_BATCH is handled by the CFS scheduler module too.
    [all …]
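The key consequence of dropping timeslices is that CFS accounts a weighted virtual runtime instead: a nice-0 task's vruntime advances at wall-clock speed, heavier tasks advance more slowly, and the task with the smallest vruntime runs next. A conceptual simplification of that accounting (the kernel's calc_delta_fair() is the real thing):

    #define NICE_0_LOAD 1024

    /* ns of vruntime charged for delta_exec ns of real execution */
    static inline unsigned long long
    vruntime_delta(unsigned long long delta_exec, unsigned long weight)
    {
            /* weight == NICE_0_LOAD: vruntime tracks wall-clock exactly;
             * heavier tasks (larger weight) are charged less per real ns. */
            return delta_exec * NICE_0_LOAD / weight;
    }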
|
D | sched-nice-design.rst
    2 Scheduler Nice Design
    6 nice-levels implementation in the new Linux scheduler.
    12 scheduler, (otherwise we'd have done it long ago) because nice level
    16 In the O(1) scheduler (in 2003) we changed negative nice levels to be
    77 With the old scheduler, if you for example started a niced task with +1
    88 The new scheduler in v2.6.23 addresses all three types of complaints:
    91 enough), the scheduler was decoupled from 'time slice' and HZ concepts
    94 support: with the new scheduler nice +19 tasks get a HZ-independent
    96 scheduler.
    99 the new scheduler makes nice(1) have the same CPU utilization effect on
    [all …]
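The fix the last lines describe is multiplicative weighting: task weight scales by roughly 1.25x per nice level, so one nice step always shifts relative CPU share by about the same amount regardless of the absolute level. A small worked example (the ~1.25 ratio is the commonly cited approximation, not a value taken from this file):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
            for (int nice = -2; nice <= 2; nice++) {
                    double w_lo = 1024.0 / pow(1.25, nice);       /* weight(nice)   */
                    double w_hi = 1024.0 / pow(1.25, nice + 1);   /* weight(nice+1) */

                    printf("nice %+d vs %+d -> %.1f%% vs %.1f%% of one CPU\n",
                           nice, nice + 1,
                           100.0 * w_lo / (w_lo + w_hi),
                           100.0 * w_hi / (w_lo + w_hi));
            }
            return 0;   /* every pair prints ~55.6% vs ~44.4% */
    }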
|
D | sched-energy.rst
    8 Energy Aware Scheduling (or EAS) gives the scheduler the ability to predict
    23 The actual EM used by EAS is _not_ maintained by the scheduler, but by a
    50 scheduler. This alternative considers two objectives: energy-efficiency and
    53 The idea behind introducing an EM is to allow the scheduler to evaluate the
    56 time, the EM must be as simple as possible to minimize the scheduler latency
    60 for the scheduler to decide where a task should run (during wake-up), the EM
    71 EAS (as well as the rest of the scheduler) uses the notion of 'capacity' to
    87 The scheduler manages references to the EM objects in the topology code when the
    89 scheduler maintains a singly linked list of all performance domains intersecting
    115 Please note that the scheduler will create two duplicate list nodes for
    [all …]
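The estimate the EM enables can be stated compactly: for a performance domain, pick the lowest performance state whose capacity covers the predicted peak utilization, then scale that state's cost by how busy the domain is. A hedged sketch of that arithmetic (field names are illustrative, not the kernel's em_perf_domain layout):

    struct perf_state { unsigned long freq, cost; };    /* ascending freq */

    static unsigned long pd_energy_sketch(const struct perf_state *ps, int nr,
                                          unsigned long sum_util,
                                          unsigned long max_util,
                                          unsigned long scale_cpu)
    {
            int i;

            /* lowest state whose capacity (scale_cpu * freq/fmax) fits max_util */
            for (i = 0; i < nr - 1; i++)
                    if (ps[i].freq * scale_cpu >= max_util * ps[nr - 1].freq)
                            break;

            return ps[i].cost * sum_util / scale_cpu;
    }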
|
/kernel/linux/linux-5.10/block/
D | Kconfig.iosched
    7 tristate "MQ deadline I/O scheduler"
    10 MQ version of the deadline IO scheduler.
    13 tristate "Kyber I/O scheduler"
    16 The Kyber I/O scheduler is a low-overhead scheduler suitable for
    22 tristate "BFQ I/O scheduler"
    24 BFQ I/O scheduler for BLK-MQ. BFQ distributes the bandwidth of
|
/kernel/linux/linux-5.10/include/drm/
D | gpu_scheduler.h
    57 * Jobs from this entity can be scheduled on any scheduler
    81 * ring, and the scheduler will alternate between entities based on
    110 * @sched: the scheduler to which this rq belongs to.
    130 * @scheduled: this fence is what will be signaled by the scheduler
    136 * @finished: this fence is what will be signaled by the scheduler
    154 * @sched: the scheduler instance to which the job having this struct
    174 * @sched: the scheduler instance on which this job is scheduled.
    178 * @id: a unique id assigned to each job scheduled on the scheduler.
    180 * limit of the scheduler then the job is marked guilty and will not
    187 * should call drm_sched_entity_push_job() once it wants the scheduler
    [all …]
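The drm_sched_entity_push_job() fragment outlines the driver-side flow: initialize a job against an entity, then push it and let the scheduler signal the job's finished fence on completion. A hedged sketch under that reading (my_ctx and my_job are hypothetical driver types, and the call signatures should be checked against this header rather than trusted from this sketch):

    /* my_job embeds a struct drm_sched_job as its first member, "base" */
    static int my_submit_sketch(struct my_ctx *ctx, struct my_job *job)
    {
            int ret;

            ret = drm_sched_job_init(&job->base, &ctx->entity, ctx);
            if (ret)
                    return ret;     /* entity has no usable scheduler */

            /* from here on the job is owned by the scheduler */
            drm_sched_entity_push_job(&job->base, &ctx->entity);
            return 0;
    }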
|
/kernel/linux/linux-5.10/drivers/net/wireless/intel/iwlegacy/
D | prph.h
    236 /* 3945 Tx scheduler registers */
    247 * Tx Scheduler
    249 * The Tx Scheduler selects the next frame to be transmitted, choosing TFDs
    275 * 1) Scheduler-Ack, in which the scheduler automatically supports a
    281 * In scheduler-ack mode, the scheduler keeps track of the Tx status of
    292 * 2) FIFO (a.k.a. non-Scheduler-ACK), in which each TFD is processed in order.
    300 * Driver controls scheduler operation via 3 means:
    301 * 1) Scheduler registers
    302 * 2) Shared scheduler data base in internal 4956 SRAM
    313 * the scheduler (especially for queue #4/#9, the command queue, otherwise
    [all …]
|
/kernel/linux/linux-5.10/arch/arm64/include/asm/
D | topology.h
    21 * Replace task scheduler's default counter-based
    28 /* Replace task scheduler's default frequency-invariant accounting */
    33 /* Replace task scheduler's default cpu-invariant accounting */
    39 /* Replace task scheduler's default thermal pressure API */
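Each comment above sits over an override: the arch supplies a macro that redirects the scheduler's default (weak) hook to a topology helper. Representative of the pattern (these specific helper names are an assumption, not verified against the file):

    /* Replace task scheduler's default frequency-invariant accounting */
    #define arch_scale_freq_capacity        topology_get_freq_scale

    /* Replace task scheduler's default cpu-invariant accounting */
    #define arch_scale_cpu_capacity         topology_get_cpu_scale

    /* Replace task scheduler's default thermal pressure API */
    #define arch_scale_thermal_pressure     topology_get_thermal_pressure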
|
/kernel/linux/linux-5.10/net/sched/
D | Kconfig
    16 If you say N here, you will get the standard packet scheduler, which
    58 CBQ is a commonly used scheduler, so if you're unsure, you should
    92 Say Y here if you want to use the ATM pseudo-scheduler. This
    106 scheduler.
    114 Say Y here if you want to use an n-band queue packet scheduler
    199 tristate "Time Aware Priority (taprio) Scheduler"
    244 tristate "Deficit Round Robin scheduler (DRR)"
    255 tristate "Multi-queue priority scheduler (MQPRIO)"
    257 Say Y here if you want to use the Multi-queue Priority scheduler.
    258 This scheduler allows QOS to be offloaded on NICs that have support
    [all …]
|
/kernel/linux/linux-5.10/drivers/net/wireless/intel/iwlwifi/
D | iwl-prph.h
    154 * Tx Scheduler
    156 * The Tx Scheduler selects the next frame to be transmitted, choosing TFDs
    183 * 1) Scheduler-Ack, in which the scheduler automatically supports a
    189 * In scheduler-ack mode, the scheduler keeps track of the Tx status of
    200 * 2) FIFO (a.k.a. non-Scheduler-ACK), in which each TFD is processed in order.
    208 * Driver controls scheduler operation via 3 means:
    209 * 1) Scheduler registers
    210 * 2) Shared scheduler data base in internal SRAM
    221 * the scheduler (especially for queue #4/#9, the command queue, otherwise
    227 * Max Tx window size is the max number of contiguous TFDs that the scheduler
    [all …]
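A useful mental model for the scheduler-ack mode described above is a sliding window over contiguous TFD indices: completions are recorded in a bitmap, and the window's left edge advances only past finished entries. Purely illustrative C, not driver code (window size and layout are assumptions):

    #include <stdint.h>

    #define TX_WIN 64                       /* assumed max Tx window size */

    struct tx_window {
            uint64_t done;                  /* bit i set: TFD (head + i) acked */
            unsigned int head;              /* oldest in-flight TFD index */
    };

    /* caller guarantees idx lies within [head, head + TX_WIN) */
    static void tfd_acked(struct tx_window *w, unsigned int idx)
    {
            w->done |= UINT64_C(1) << (idx - w->head);

            while (w->done & 1) {           /* slide past completed TFDs */
                    w->done >>= 1;
                    w->head++;
            }
    }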
|
/kernel/linux/linux-5.10/tools/power/cpupower/bench/
D | config.h
    11 #define SCHEDULER SCHED_OTHER macro
    14 #define PRIORITY_HIGH sched_get_priority_max(SCHEDULER)
    15 #define PRIORITY_LOW sched_get_priority_min(SCHEDULER)
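These macros plug straight into the POSIX scheduling API; a hedged usage sketch (for SCHED_OTHER both bounds are 0 on Linux, so the only valid priority is 0):

    #include <sched.h>
    #include <stdio.h>

    #define SCHEDULER       SCHED_OTHER
    #define PRIORITY_HIGH   sched_get_priority_max(SCHEDULER)
    #define PRIORITY_LOW    sched_get_priority_min(SCHEDULER)

    int main(void)
    {
            struct sched_param param = { .sched_priority = PRIORITY_HIGH };

            printf("priority range: %d..%d\n", PRIORITY_LOW, PRIORITY_HIGH);
            if (sched_setscheduler(0, SCHEDULER, &param))   /* 0 = this process */
                    perror("sched_setscheduler");
            return 0;
    }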
|
/kernel/linux/linux-5.10/tools/testing/kunit/test_data/
D | test_is_test_passed-no_tests_run.log
    33 io scheduler noop registered
    34 io scheduler deadline registered
    35 io scheduler cfq registered (default)
    36 io scheduler mq-deadline registered
    37 io scheduler kyber registered
|
/kernel/linux/linux-5.10/arch/arm/include/asm/
D | topology.h
    12 /* Replace task scheduler's default frequency-invariant accounting */
    18 /* Replace task scheduler's default cpu-invariant accounting */
    24 /* Replace task scheduler's default thermal pressure API */
|
/kernel/linux/linux-5.10/drivers/gpu/drm/v3d/
D | v3d_sched.c
    7 * The shared DRM GPU scheduler is used to coordinate submitting jobs
    9 * own scheduler entity, which will process jobs in order. The GPU
    10 * scheduler will round-robin between clients to submit the next job.
    69 * If placed in the scheduler's .dependency method, the corresponding
    269 /* block scheduler */ in v3d_gpu_reset_for_timeout()
    406 dev_err(v3d->drm.dev, "Failed to create bin scheduler: %d.", ret); in v3d_sched_init()
    416 dev_err(v3d->drm.dev, "Failed to create render scheduler: %d.", in v3d_sched_init()
    428 dev_err(v3d->drm.dev, "Failed to create TFU scheduler: %d.", in v3d_sched_init()
    441 dev_err(v3d->drm.dev, "Failed to create CSD scheduler: %d.", in v3d_sched_init()
    453 dev_err(v3d->drm.dev, "Failed to create CACHE_CLEAN scheduler: %d.", in v3d_sched_init()
|
/kernel/linux/linux-5.10/Documentation/admin-guide/pm/
D | cpufreq.rst
    157 all of the online CPUs belonging to the given policy with the CPU scheduler.
    158 The utilization update callbacks will be invoked by the CPU scheduler on
    160 scheduler tick or generally whenever the CPU utilization may change (from the
    161 scheduler's perspective). They are expected to carry out computations needed
    165 scheduler context or asynchronously, via a kernel thread or workqueue, depending
    186 callbacks are invoked by the CPU scheduler in the same way as for scaling
    188 use and change the hardware configuration accordingly in one go from scheduler
    387 This governor uses CPU utilization data available from the CPU scheduler. It
    388 generally is regarded as a part of the CPU scheduler, so it can access the
    389 scheduler's internal data structures directly.
    [all …]
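The governor the last three lines refer to is schedutil, which computes its request from that scheduler-provided utilization; the commonly documented form is next_freq = 1.25 * max_freq * util / max. A hedged restatement in integer arithmetic (a paraphrase, not the exact kernel code):

    /* util/max: scheduler-reported utilization relative to CPU capacity */
    static unsigned int next_freq_sketch(unsigned int max_freq,
                                         unsigned long util, unsigned long max)
    {
            /* the 25% margin keeps headroom above current demand */
            return (max_freq + (max_freq >> 2)) * util / max;
    }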
|
D | cpuidle.rst
    33 CPU idle time management operates on CPUs as seen by the *CPU scheduler* (that
    84 Tasks are the CPU scheduler's representation of work. Each task consists of a
    87 processor every time the task's code is run by a CPU. The CPU scheduler
    93 events to occur or similar). When a task becomes runnable, the CPU scheduler
    164 configuration of the kernel and in particular on whether or not the scheduler
    188 Idle CPUs and The Scheduler Tick
    191 The scheduler tick is a timer that triggers periodically in order to implement
    192 the time sharing strategy of the CPU scheduler. Of course, if there are
    199 may not want to give the CPU away voluntarily, however, and the scheduler tick
    203 The scheduler tick is problematic from the CPU idle time management perspective,
    [all …]
|
/kernel/linux/linux-5.10/Documentation/admin-guide/cgroup-v1/
D | cpusets.rst
    60 CPUs or Memory Nodes not in that cpuset. The scheduler will not
    106 kernel to avoid any additional impact on the critical scheduler or
    294 the system load imposed by a batch scheduler monitoring this
    299 counter, a batch scheduler can detect memory pressure with a
    304 the batch scheduler can obtain the key information, memory
    392 The kernel scheduler (kernel/sched/core.c) automatically load balances
    400 linearly with the number of CPUs being balanced. So the scheduler
    433 scheduler will avoid load balancing across the CPUs in that cpuset,
    438 enabled, then the scheduler will have one sched domain covering all
    451 scheduler might not consider the possibility of load balancing that
    [all …]
|
/kernel/linux/linux-5.10/Documentation/powerpc/
D | dscr.rst
    27 (B) Scheduler Changes:
    29 Scheduler will write the per-CPU DSCR default which is stored in the
    33 default DSCR value, scheduler will write the changed value which will
    38 gets used directly in the scheduler process context switch at all.
|
/kernel/linux/linux-5.10/drivers/net/ethernet/intel/ice/
D | ice_sched.c
    7 * ice_sched_add_root_node - Insert the Tx scheduler root node in SW DB
    9 * @info: Scheduler element information from firmware
    44 * ice_sched_find_node_by_teid - Find the Tx scheduler node in SW DB
    120 * ice_aq_query_sched_elems - query scheduler elements
    141 * ice_sched_add_node - Insert the Tx scheduler node in SW DB
    143 * @layer: Scheduler layer of the node
    144 * @info: Scheduler element information from firmware
    146 * This function inserts a scheduler node to the SW DB.
    203 * ice_aq_delete_sched_elems - delete scheduler elements
    296 * ice_free_sched_node - Free a Tx scheduler node from SW DB
    [all …]
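The SW DB these function names describe is a tree of scheduler nodes, so a TEID lookup is naturally a depth-first search from the root. A hedged sketch (the node layout is illustrative, not this driver's actual struct):

    struct sched_node {
            u32 teid;                       /* tree element ID from firmware */
            u16 num_children;
            struct sched_node **children;
    };

    static struct sched_node *find_by_teid_sketch(struct sched_node *start,
                                                  u32 teid)
    {
            u16 i;

            if (!start)
                    return NULL;
            if (start->teid == teid)
                    return start;

            for (i = 0; i < start->num_children; i++) {
                    struct sched_node *node =
                            find_by_teid_sketch(start->children[i], teid);

                    if (node)
                            return node;
            }
            return NULL;
    }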
|