
Searched full:scheduler (Results 1 – 25 of 138) sorted by relevance


/Documentation/block/
switching-sched.rst
  2: Switching Scheduler
  5: Each io queue has a set of io scheduler tunables associated with it. These
  6: tunables control how the io scheduler works. You can find these entries
  16: It is possible to change the IO scheduler for a given block device on
  20: To set a specific scheduler, simply do this::
  22: echo SCHEDNAME > /sys/block/DEV/queue/scheduler
  24: where SCHEDNAME is the name of a defined IO scheduler, and DEV is the
  28: a "cat /sys/block/DEV/queue/scheduler" - the list of valid names
  29: will be displayed, with the currently selected scheduler in brackets::
  31: # cat /sys/block/sda/queue/scheduler
[all …]
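The switching-sched.rst excerpt above describes querying and changing a device's IO scheduler through sysfs. A minimal shell sketch of that flow — the device name ``sda``, the target scheduler ``mq-deadline``, and the ``active_sched`` helper (which pulls the bracketed, currently selected entry out of the listing) are illustrative assumptions:

```shell
# Extract the active scheduler (the bracketed entry) from the sysfs
# listing, e.g. "none [mq-deadline] kyber" -> "mq-deadline".
active_sched() {
    sed -n 's/.*\[\(.*\)\].*/\1/p'
}

DEV=sda   # hypothetical device; substitute your own

# The sysfs file exists only for real block devices; guard accordingly.
if [ -e "/sys/block/$DEV/queue/scheduler" ]; then
    cat "/sys/block/$DEV/queue/scheduler" | active_sched
    # echo mq-deadline > "/sys/block/$DEV/queue/scheduler"  # switch (needs root)
fi
```

Reading the file back after the echo confirms the switch, since the brackets move to the newly selected scheduler.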
deadline-iosched.rst
  2: Deadline IO scheduler tunables
  5: This little file attempts to document how the deadline io scheduler works.
  12: selecting an io scheduler on a per-device basis.
  19: The goal of the deadline io scheduler is to attempt to guarantee a start
  21: tunable. When a read request first enters the io scheduler, it is assigned
  49: When we have to move requests from the io scheduler queue to the block
  60: Sometimes it happens that a request enters the io scheduler that is contiguous
  69: rbtree front sector lookup when the io scheduler merge function is called.
kyber-iosched.rst
  2: Kyber I/O scheduler tunables
  5: The only two tunables for the Kyber scheduler are the target latencies for
/Documentation/scheduler/
sched-ext.rst
  2: Extensible Scheduler Class
  5: sched_ext is a scheduler class whose behavior can be defined by a set of BPF
  6: programs - the BPF scheduler.
  11: * The BPF scheduler can group CPUs however it sees fit and schedule them
  14: * The BPF scheduler can be turned on and off dynamically anytime.
  16: * The system integrity is maintained no matter what the BPF scheduler does.
  21: * When the BPF scheduler triggers an error, debug information is dumped to
  23: scheduler binary. The debug dump can also be accessed through the
  25: triggers a debug dump. This doesn't terminate the BPF scheduler and can
  47: sched_ext is used only when the BPF scheduler is loaded and running.
[all …]
sched-design-CFS.rst
  4: CFS Scheduler
  11: CFS stands for "Completely Fair Scheduler," and is the "desktop" process
  12: scheduler implemented by Ingo Molnar and merged in Linux 2.6.23. When
  14: scheduler's SCHED_OTHER interactivity code. Nowadays, CFS is making room
  16: Documentation/scheduler/sched-eevdf.rst.
  63: previous vanilla scheduler and RSDL/SD are affected).
  83: schedules (or a scheduler tick happens) the task's CPU usage is "accounted
  97: other HZ detail. Thus the CFS scheduler has no notion of "timeslices" in the
  98: way the previous scheduler had, and has no heuristics whatsoever. There is
  103: which can be used to tune the scheduler from "desktop" (i.e., low latencies) to
[all …]
sched-nice-design.rst
  2: Scheduler Nice Design
  6: nice-levels implementation in the new Linux scheduler.
  12: scheduler, (otherwise we'd have done it long ago) because nice level
  16: In the O(1) scheduler (in 2003) we changed negative nice levels to be
  77: With the old scheduler, if you for example started a niced task with +1
  88: The new scheduler in v2.6.23 addresses all three types of complaints:
  91: enough), the scheduler was decoupled from 'time slice' and HZ concepts
  94: support: with the new scheduler nice +19 tasks get a HZ-independent
  96: scheduler.
  99: the new scheduler makes nice(1) have the same CPU utilization effect on
[all …]
sched-eevdf.rst
  2: EEVDF Scheduler
  8: away from the earlier Completely Fair Scheduler (CFS) in favor of a version
  11: Documentation/scheduler/sched-design-CFS.rst.
sched-energy.rst
  8: Energy Aware Scheduling (or EAS) gives the scheduler the ability to predict
  23: The actual EM used by EAS is _not_ maintained by the scheduler, but by a
  50: scheduler. This alternative considers two objectives: energy-efficiency and
  53: The idea behind introducing an EM is to allow the scheduler to evaluate the
  56: time, the EM must be as simple as possible to minimize the scheduler latency
  60: for the scheduler to decide where a task should run (during wake-up), the EM
  71: EAS (as well as the rest of the scheduler) uses the notion of 'capacity' to
  87: The scheduler manages references to the EM objects in the topology code when the
  89: scheduler maintains a singly linked list of all performance domains intersecting
  115: Please note that the scheduler will create two duplicate list nodes for
[all …]
sched-arch.rst
  2: CPU Scheduler implementation hints for architecture specific code
  15: To request the scheduler call switch_to with the runqueue unlocked,
  20: penalty to the core scheduler implementation in the CONFIG_SMP case.
text_files.rst
  1: Scheduler pelt c program
schedutil.rst
  14: With PELT we track some metrics across the various scheduler entities, from
  90: - Documentation/scheduler/sched-capacity.rst:"1. CPU Capacity + 2. Task utilization"
  123: Every time the scheduler load tracking is updated (task wakeup, task
  150: Because these callbacks are directly from the scheduler, the DVFS hardware
sched-debug.rst
  2: Scheduler debugfs
  6: scheduler specific debug files under /sys/kernel/debug/sched. Some of
/Documentation/gpu/rfc/
i915_scheduler.rst
  2: I915 GuC Submission/DRM Scheduler Section
  8: i915 with the DRM scheduler is:
  14: * Lots of rework will need to be done to integrate with DRM scheduler so
  32: * Convert the i915 to use the DRM scheduler
  33: * GuC submission backend fully integrated with DRM scheduler
  35: handled in DRM scheduler)
  36: * Resets / cancels hook in DRM scheduler
  37: * Watchdog hooks into DRM scheduler
  39: integrated with DRM scheduler (e.g. state machine gets
  41: * Execlists backend will do the minimum required to hook in the DRM scheduler
[all …]
/Documentation/arch/powerpc/
dscr.rst
  27: (B) Scheduler Changes:
  29: Scheduler will write the per-CPU DSCR default which is stored in the
  33: default DSCR value, scheduler will write the changed value which will
  38: gets used directly in the scheduler process context switch at all.
/Documentation/devicetree/bindings/usb/
da8xx-usb.txt
  34: CPPI DMA Scheduler, Queue Manager
  35: - reg-names: "controller", "scheduler", "queuemgr"
  75: reg-names = "controller", "scheduler", "queuemgr";
/Documentation/networking/
mptcp-sysctl.rst
  94: scheduler - STRING
  95: Select the scheduler of your choice.
  105: The packet scheduler ignores stale subflows.
mptcp.rst
  53: the packet scheduler.
  76: Packet Scheduler
  79: The Packet Scheduler is in charge of selecting which available *subflow(s)* to
  84: Packet schedulers are controlled by the ``net.mptcp.scheduler`` sysctl knob --
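As the mptcp.rst excerpt notes, the active packet scheduler is selected via the ``net.mptcp.scheduler`` sysctl. A hedged shell sketch of using that knob — the ``knob_path`` helper is a hypothetical illustration of the standard sysctl-name-to-procfs-path mapping:

```shell
# Map a sysctl knob name to its procfs path, e.g.
# net.mptcp.scheduler -> /proc/sys/net/mptcp/scheduler
knob_path() {
    echo "/proc/sys/$(echo "$1" | tr . /)"
}

# The knob exists only on kernels built with CONFIG_MPTCP.
if [ -e "$(knob_path net.mptcp.scheduler)" ]; then
    sysctl net.mptcp.scheduler               # show the current scheduler
    # sysctl -w net.mptcp.scheduler=default  # select a scheduler (needs root)
fi
```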
/Documentation/admin-guide/cgroup-v1/
cpusets.rst
  60: CPUs or Memory Nodes not in that cpuset. The scheduler will not
  106: kernel to avoid any additional impact on the critical scheduler or
  294: the system load imposed by a batch scheduler monitoring this
  299: counter, a batch scheduler can detect memory pressure with a
  304: the batch scheduler can obtain the key information, memory
  392: The kernel scheduler (kernel/sched/core.c) automatically load balances
  400: linearly with the number of CPUs being balanced. So the scheduler
  433: scheduler will avoid load balancing across the CPUs in that cpuset,
  438: enabled, then the scheduler will have one sched domain covering all
  451: scheduler might not consider the possibility of load balancing that
[all …]
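The cpusets.rst excerpt describes opting a cpuset out of scheduler load balancing. A sketch under the assumption that the cgroup-v1 cpuset hierarchy is mounted at /sys/fs/cgroup/cpuset; the group name ``mygroup`` and the ``expand_cpulist`` helper (which unpacks the cpulist format that ``cpuset.cpus`` accepts) are hypothetical:

```shell
# Expand a kernel cpulist such as "0-2,4" into "0 1 2 4".
expand_cpulist() {
    echo "$1" | tr ',' '\n' | while IFS=- read -r lo hi; do
        seq "$lo" "${hi:-$lo}"
    done | xargs
}

CPUSET=/sys/fs/cgroup/cpuset/mygroup   # hypothetical group name
if [ -d /sys/fs/cgroup/cpuset ]; then
    mkdir -p "$CPUSET"
    echo 2-3 > "$CPUSET/cpuset.cpus"                # confine to CPUs 2-3
    echo 0   > "$CPUSET/cpuset.mems"                # memory node 0
    echo 0   > "$CPUSET/cpuset.sched_load_balance"  # opt out of load balancing
fi
```

With ``sched_load_balance`` cleared, the scheduler stops building a sched domain spanning that cpuset's CPUs, which is the behavior the excerpt's lines 433-451 discuss.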
/Documentation/admin-guide/pm/
cpufreq.rst
  157: all of the online CPUs belonging to the given policy with the CPU scheduler.
  158: The utilization update callbacks will be invoked by the CPU scheduler on
  160: scheduler tick or generally whenever the CPU utilization may change (from the
  161: scheduler's perspective). They are expected to carry out computations needed
  165: scheduler context or asynchronously, via a kernel thread or workqueue, depending
  186: callbacks are invoked by the CPU scheduler in the same way as for scaling
  188: use and change the hardware configuration accordingly in one go from scheduler
  391: This governor uses CPU utilization data available from the CPU scheduler. It
  392: generally is regarded as a part of the CPU scheduler, so it can access the
  393: scheduler's internal data structures directly.
[all …]
cpuidle.rst
  33: CPU idle time management operates on CPUs as seen by the *CPU scheduler* (that
  84: Tasks are the CPU scheduler's representation of work. Each task consists of a
  87: processor every time the task's code is run by a CPU. The CPU scheduler
  93: events to occur or similar). When a task becomes runnable, the CPU scheduler
  164: configuration of the kernel and in particular on whether or not the scheduler
  188: Idle CPUs and The Scheduler Tick
  191: The scheduler tick is a timer that triggers periodically in order to implement
  192: the time sharing strategy of the CPU scheduler. Of course, if there are
  199: may not want to give the CPU away voluntarily, however, and the scheduler tick
  203: The scheduler tick is problematic from the CPU idle time management perspective,
[all …]
/Documentation/admin-guide/mm/
multigen_lru.rst
  100: When a new job comes in, the job scheduler needs to find out whether
  103: scheduler needs to estimate the working sets of the existing jobs.
  133: A typical use case is that a job scheduler runs this command at a
  142: comes in, the job scheduler wants to proactively reclaim cold pages on
  157: A typical use case is that a job scheduler runs this command before it
/Documentation/virt/kvm/
halt-polling.rst
  12: before giving up the cpu to the scheduler in order to let something else run.
  15: very quickly by at least saving us a trip through the scheduler, normally on
  18: interval or some other task on the runqueue is runnable the scheduler is
  21: savings of not invoking the scheduler are distinguishable.
  34: The maximum time for which to poll before invoking the scheduler, referred to
  77: whether the scheduler is invoked within that function).
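The halt-polling excerpt's "maximum time for which to poll" is exposed as the ``halt_poll_ns`` KVM module parameter. A sketch of inspecting it; the ``ns_to_us`` helper is a hypothetical convenience and the value 200000 is an arbitrary example:

```shell
# Convert nanoseconds to microseconds for readability.
ns_to_us() {
    echo $(( $1 / 1000 ))
}

PARAM=/sys/module/kvm/parameters/halt_poll_ns
# The parameter exists only when the kvm module is loaded.
if [ -e "$PARAM" ]; then
    echo "current poll cap: $(ns_to_us "$(cat "$PARAM")") us"
    # echo 200000 > "$PARAM"   # raise/lower the cap (needs root)
fi
```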
/Documentation/networking/device_drivers/ethernet/mellanox/mlx5/
tracepoints.rst
  110: - mlx5_esw_vport_qos_create: trace creation of transmit scheduler arbiter for vport::
  117: - mlx5_esw_vport_qos_config: trace configuration of transmit scheduler arbiter for vport::
  124: - mlx5_esw_vport_qos_destroy: trace deletion of transmit scheduler arbiter for vport::
  131: - mlx5_esw_group_qos_create: trace creation of transmit scheduler arbiter for rate group::
  138: - mlx5_esw_group_qos_config: trace configuration of transmit scheduler arbiter for rate group::
  145: - mlx5_esw_group_qos_destroy: trace deletion of transmit scheduler arbiter for group::
/Documentation/ABI/testing/
sysfs-cfq-target-latency
  6: when the user sets cfq to /sys/block/<device>/scheduler.
/Documentation/admin-guide/hw-vuln/
core-scheduling.rst
  19: scheduling is a scheduler feature that can mitigate some (not all) cross-HT
  36: on the same core. The core scheduler uses this information to make sure that
  121: The scheduler tries its best to find tasks that trust each other such that all
  128: by the scheduler (idle thread is scheduled to run).
  144: in the case of guests. At best, this would only leak some scheduler metadata
  154: each other. This is because the core scheduler does not have information about
