
Searched full:task (Results 1 – 25 of 302) sorted by relevance


/Documentation/admin-guide/mm/
numa_memory_policy.rst:20 both cpusets and policies are applied to a task, the restrictions of the cpuset
44 Task/Process Policy
45 this is an optional, per-task policy. When defined for a
46 specific task, this policy controls all page allocations made
47 by or on behalf of the task that aren't controlled by a more
48 specific scope. If a task does not define a task policy, then
50 task policy "fall back" to the System Default Policy.
52 The task policy applies to the entire address space of a task. Thus,
54 [clone() w/o the CLONE_VM flag] and exec*(). This allows a parent task
55 to establish the task policy for a child task exec()'d from an
[all …]
soft-dirty.rst:5 The soft-dirty is a bit on a PTE which helps to track which pages a task
8 1. Clear soft-dirty bits from the task's PTEs.
11 task in question.
23 when the soft-dirty bit is cleared. So, after this, when the task tries to
27 Note that although all the task's address space is marked as r/o after the
34 there is still a scenario when we can lose soft dirty bits -- a task
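The clear/write/check cycle the soft-dirty.rst snippet describes can be sketched as a toy Python model (the `PageTable` class and page indices are invented for illustration; on real Linux, step 1 is writing `4` to `/proc/PID/clear_refs` and the bit is read back from `/proc/PID/pagemap`):

```python
class PageTable:
    """Toy per-task page table tracking only the soft-dirty bits."""

    def __init__(self, npages):
        self.soft_dirty = [True] * npages  # new mappings start dirty

    def clear_refs(self):
        # Step 1 from the text: clear soft-dirty from all the task's
        # PTEs (and write-protect the pages so writes fault).
        self.soft_dirty = [False] * len(self.soft_dirty)

    def write(self, page):
        # The write fault on a now write-protected page sets the
        # soft-dirty bit again, marking the page as touched.
        self.soft_dirty[page] = True

pt = PageTable(4)
pt.clear_refs()
pt.write(2)
assert pt.soft_dirty == [False, False, True, False]
```

A checkpointing tool repeats this cycle: clear, wait, then copy only the pages whose bit came back.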
/Documentation/scheduler/
sched-design-CFS.rst:22 power and which can run each task at precise equal speed, in parallel, each at
26 On real hardware, we can run only a single task at once, so we have to
27 introduce the concept of "virtual runtime." The virtual runtime of a task
29 multi-tasking CPU described above. In practice, the virtual runtime of a task
37 In CFS the virtual runtime is expressed and tracked via the per-task
39 timestamp and measure the "expected CPU time" a task should have gotten.
42 p->se.vruntime value --- i.e., tasks would execute simultaneously and no task
45 CFS's task picking logic is based on this p->se.vruntime value and it is thus
46 very simple: it always tries to run the task with the smallest p->se.vruntime
47 value (i.e., the task which executed least so far). CFS always tries to split
[all …]
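The pick-next rule quoted from sched-design-CFS.rst can be sketched as a toy model (task names, weights, and the 1 ms slice are invented; the kernel keeps tasks in a tree keyed on vruntime rather than scanning a list):

```python
NICE_0_WEIGHT = 1024  # weight of a nice-0 task in the kernel

class Task:
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight
        self.vruntime = 0.0

    def run(self, delta_ns):
        # Higher-weight tasks accumulate vruntime more slowly, so
        # pick_next() chooses them proportionally more often.
        self.vruntime += delta_ns * NICE_0_WEIGHT / self.weight

def pick_next(tasks):
    # The rule from the snippet: always run the task with the
    # smallest vruntime, i.e. the one that has executed least.
    return min(tasks, key=lambda t: t.vruntime)

tasks = [Task("heavy", 2048), Task("nice-0", 1024)]
for _ in range(3):
    pick_next(tasks).run(1_000_000)  # charge 1 ms of CPU time
```

After three slices the double-weight task has run twice and the nice-0 task once, leaving their vruntimes equal, which is exactly the fairness the snippet describes.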
sched-ext.rst:18 a runnable task stalls, or on invoking the SysRq key sequence
49 If a task explicitly sets its scheduling policy to ``SCHED_EXT``, it will be
110 If ``CONFIG_SCHED_DEBUG`` is set, whether a given task is on sched_ext can
130 * Decide which CPU a task should be migrated to before being
133 * then dispatch the task directly to SCX_DSQ_LOCAL and skip the
157 * Do a direct dispatch of a task to the global DSQ. This ops.enqueue()
162 * default ops.enqueue implementation, which just dispatches the task
205 A CPU always executes a task from its local DSQ. A task is "dispatched" to a
206 DSQ. A non-local DSQ is "consumed" to transfer a task to the consuming CPU's
209 When a CPU is looking for the next task to run, if the local DSQ is not
[all …]
sched-deadline.rst:2 Deadline Task Scheduling
19 4.2 Task interface
53 "deadline", to schedule tasks. A SCHED_DEADLINE task should receive
57 every time the task wakes up, the scheduler computes a "scheduling deadline"
59 scheduled using EDF[1] on these scheduling deadlines (the task with the
61 task actually receives "runtime" time units within "deadline" if a proper
66 that each task runs for at most its runtime every period, avoiding any
68 algorithm selects the task with the earliest scheduling deadline as the one
70 with the "traditional" real-time task model (see Section 3) can effectively
76 - Each SCHED_DEADLINE task is characterized by the "runtime",
[all …]
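The EDF selection rule and the utilization-based admission idea from the sched-deadline.rst snippet can be sketched as follows (task names, deadlines, and parameters are invented; the real admission test in the kernel is more involved):

```python
def edf_pick(runnable):
    """EDF: run the task with the earliest absolute scheduling deadline."""
    return min(runnable, key=lambda t: t[1])

def admits(tasks, ncpus=1):
    # Utilization-based admission sketch: the summed runtime/period
    # demand must fit on the available CPUs, or the scheduler cannot
    # guarantee every deadline.
    return sum(runtime / period for runtime, period in tasks) <= ncpus

# Invented runnable set: (name, absolute deadline in microseconds)
runnable = [("audio", 3_000), ("video", 16_000), ("control", 1_000)]
assert edf_pick(runnable)[0] == "control"

# Two tasks each demanding 25% of one CPU are admissible:
assert admits([(2_500, 10_000), (5_000, 20_000)])
```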
sched-capacity.rst:127 2. Task utilization
133 Capacity aware scheduling requires an expression of a task's requirements with
135 while task utilization is specific to CFS, it is convenient to describe it here
138 Task utilization is a percentage meant to represent the throughput requirements
139 of a task. A simple approximation of it is the task's duty cycle, i.e.::
143 On an SMP system with fixed frequencies, 100% utilization suggests the task is a
144 busy loop. Conversely, 10% utilization hints it is a small periodic task that
170 This yields duty_cycle(p) == 50%, despite the task having the exact same
173 The task utilization signal can be made frequency invariant using the following
179 task utilization of 25%.
[all …]
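The duty-cycle approximation and the frequency-invariance scaling referenced in the sched-capacity.rst snippet work out as below (the 8 ms/16 ms numbers are illustrative; the scenario mirrors the snippet's 50%-measured, 25%-invariant example):

```python
def duty_cycle(runtime, period):
    # Simple approximation of task utilization from the text.
    return runtime / period

def freq_invariant(duty, f_cur, f_max):
    # Scale the measured duty cycle by the current/max frequency
    # ratio so the signal no longer depends on CPU speed.
    return duty * f_cur / f_max

# A task measured at a 50% duty cycle while the CPU runs at half its
# maximum frequency really has a frequency-invariant utilization of 25%.
raw = duty_cycle(8, 16)                      # 0.5
assert freq_invariant(raw, 500, 1000) == 0.25
```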
sched-util-clamp.rst:31 These two bounds will ensure a task will operate within this performance range
32 of the system. UCLAMP_MIN implies boosting a task, while UCLAMP_MAX implies
33 capping a task.
85 On the other hand, a busy task, for instance, that needs to run at maximum
106 Note that by design RT tasks don't have per-task PELT signal and must always
110 when an RT task wakes up. This cost is unchanged by using uclamp. Uclamp only
121 Util clamp is a property of every task in the system. It sets the boundaries of
125 The actual utilization signal of a task is never clamped in reality. If you
127 they are intact. Clamping happens only when needed, e.g: when a task wakes up
131 performance point for a task to run on, it must be able to influence the
[all …]
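The clamp-on-read behaviour the sched-util-clamp.rst snippet describes (the raw signal stays intact; only the value the scheduler consults is bounded) reduces to a one-line helper; the numeric values here are invented:

```python
def effective_util(util, uclamp_min, uclamp_max):
    # The raw utilization signal is never modified; only the value
    # the scheduler *reads* is clamped into [UCLAMP_MIN, UCLAMP_MAX].
    return max(uclamp_min, min(util, uclamp_max))

assert effective_util(0.05, 0.20, 0.80) == 0.20  # small task boosted
assert effective_util(0.95, 0.20, 0.80) == 0.80  # busy task capped
assert effective_util(0.50, 0.20, 0.80) == 0.50  # in range: unchanged
```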
sched-eevdf.rst:15 time to each task, creating a "lag" value that can be used to determine
16 whether a task has received its fair share of CPU time. In this way, a task
17 with a positive lag is owed CPU time, while a negative lag means the task
19 zero and calculates a virtual deadline (VD) for each, selecting the task
27 by sleeping briefly to reset their negative lag: when a task sleeps, it
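The eligibility-plus-virtual-deadline rule in the sched-eevdf.rst snippet can be sketched like this (field values are invented; the kernel derives lag and VD from vruntime accounting rather than storing them as plain numbers):

```python
def eevdf_pick(tasks):
    # A task with non-negative lag is owed CPU time and therefore
    # eligible; among eligible tasks, the earliest virtual deadline
    # (VD) wins.
    eligible = [t for t in tasks if t["lag"] >= 0]
    return min(eligible, key=lambda t: t["vd"]) if eligible else None

tasks = [
    {"name": "A", "lag": 5,  "vd": 300},
    {"name": "B", "lag": -2, "vd": 100},  # over-served: ineligible
    {"name": "C", "lag": 0,  "vd": 200},
]
assert eevdf_pick(tasks)["name"] == "C"
```

Note that B's earlier deadline does not help it: a negative lag excludes it, which is also why the snippet discusses tasks trying to reset negative lag by sleeping.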
schedutil.rst:15 individual tasks to task-group slices to CPU runqueues. As the basis for this
31 Note that blocked tasks still contribute to the aggregates (task-group slices
37 time an entity spends on the runqueue. When there is only a single task these
39 will decrease to reflect the fraction of time each task spends on the CPU
90 - Documentation/scheduler/sched-capacity.rst:"1. CPU Capacity + 2. Task utilization"
114 It is possible to set effective u_min and u_max clamps on each CFS or RT task;
123 Every time the scheduler load tracking is updated (task wakeup, task
141 XXX IO-wait: when the update is due to a task wakeup from IO-completion we
147 XXX: deadline tasks (Sporadic Task Model) allows us to calculate a hard f_min
164 - In saturated scenarios task movement will cause some transient dips,
[all …]
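The schedutil docs give a frequency-selection rule of roughly f := 1.25 · f_max · util; a sketch under assumed units (kHz, utilization normalized to `max_util`), with the cap at the hardware maximum added here:

```python
def next_freq(util, max_util, f_max_khz):
    # The 1.25 headroom factor requests a little more capacity than
    # currently measured, so a growing task's utilization can be
    # observed before the CPU saturates; never exceed the hardware
    # maximum.
    return min(f_max_khz, 1.25 * f_max_khz * util / max_util)

assert next_freq(0.4, 1.0, 2_000_000) == 1_000_000.0  # 40% -> 1 GHz
assert next_freq(1.0, 1.0, 2_000_000) == 2_000_000    # capped at max
```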
sched-debug.rst:24 memory node local to where the task is running. Every "scan delay" the task
30 hence the scan rate of every task is adaptive and depends on historical
44 rate for each task.
46 ``scan_delay_ms`` is the starting "scan delay" used for a task when it
51 rate for each task.
/Documentation/accounting/
taskstats-struct.rst:12 delivery at do_exit() of a task.
34 4) Per-task and per-thread context switch count statistics
55 /* The exit code of a task. */
58 /* The accounting flags of a task as defined in <linux/acct.h>
63 /* The value of task_nice() of a task. */
66 /* The name of the command that started this task. */
69 /* The scheduling discipline as set in task->policy field. */
78 /* The time when a task begins, in [secs] since 1970. */
81 /* The elapsed time of a task, in [usec]. */
84 /* The user CPU time of a task, in [usec]. */
[all …]
taskstats.rst:2 Per-task statistics interface
6 Taskstats is a netlink-based interface for sending per-task and
11 - efficiently provide statistics during lifetime of a task and on its exit
18 "pid", "tid" and "task" are used interchangeably and refer to the standard
19 Linux task defined by struct task_struct. per-pid stats are the same as
20 per-task stats.
24 use of tgid, there is no special treatment for the task that is thread group
25 leader - a process is deemed alive as long as it has any task belonging to it.
30 To get statistics during a task's lifetime, userspace opens a unicast netlink
32 The response contains statistics for a task (if pid is specified) or the sum of
[all …]
delay-accounting.rst:7 runnable task may wait for a free CPU to run on.
9 The per-task delay accounting functionality measures
10 the delays experienced by a task while
13 b) completion of synchronous block I/O initiated by the task
24 Such delays provide feedback for setting a task's cpu priority,
35 statistics of a task are available both during its lifetime as well as on its
56 counter (say cpu_delay_total) for a task will give the delay
57 experienced by the task waiting for the corresponding resource
60 When a task exits, records containing the per-task statistics
62 task of a thread group, the per-tgid statistics are also sent. More details
[all …]
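The counter arithmetic the delay-accounting.rst snippet alludes to is simple: dividing a cumulative delay counter (e.g. cpu_delay_total) by the number of delay instances gives the mean wait per occurrence. A sketch with invented nanosecond values:

```python
def avg_delay(delay_total_ns, count):
    # Mean wait per delay instance; sampling the counters twice and
    # differencing instead gives the delay over an interval.
    return delay_total_ns / count if count else 0.0

assert avg_delay(3_000_000, 4) == 750_000.0  # 0.75 ms mean CPU wait
assert avg_delay(0, 0) == 0.0                # no instances recorded
```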
/Documentation/locking/
rt-mutex-design.rst:105 structure holds a pointer to the task, as well as the mutex that
106 the task is blocked on. It also has rbtree node structures to
107 place the task in the waiters rbtree of a mutex as well as the
108 pi_waiters rbtree of a mutex owner task (described below).
110 waiter is sometimes used in reference to the task that is waiting
111 on a mutex. This is the same as waiter->task.
124 task and process are used interchangeably in this document, mostly to
205 Task PI Tree
213 The top of the task's PI tree is always the highest priority task that
214 is waiting on a mutex that is owned by the task. So if the task has
[all …]
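The pi_waiters relationship quoted from rt-mutex-design.rst implies the core priority-inheritance rule: the owner runs at the priority of the top waiter in its PI tree if that is higher than its own. A sketch (here higher number = higher priority, an assumption of this model; the kernel's rbtree ordering differs):

```python
def effective_prio(own_prio, pi_waiters):
    # The owner is boosted to the priority of its top pi_waiter, so
    # a high-priority task blocked on the mutex is not stuck behind
    # a low-priority owner (priority inversion).
    return max([own_prio] + pi_waiters)

assert effective_prio(10, [30, 20]) == 30  # boosted to top waiter
assert effective_prio(10, []) == 10        # no waiters: no boost
```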
locktypes.rst:29 Sleeping locks can only be acquired in preemptible task context.
92 The context (task) that acquired the lock must release it.
108 execute most such regions of code in preemptible task context, especially
248 prevents reentrancy due to task preemption.
255 remain valid even if the task is preempted.
257 - Task state is preserved across spinlock acquisition, ensuring that the
258 task-state rules apply to all kernel configurations. Non-PREEMPT_RT
259 kernels leave task state untouched. However, PREEMPT_RT must change
260 task state if the task blocks during acquisition. Therefore, it saves
261 the current task state before blocking and the corresponding lock wakeup
[all …]
/Documentation/security/
credentials.rst:35 accounting and limitation (disk quotas and task rlimits for example).
59 For instance an open file may send SIGIO to a task using the UID and EUID
60 given to it by a task that called ``fcntl(F_SETOWN)`` upon it. In this case,
70 A Linux task, for example, has the FSUID, FSGID and the supplementary
73 task.
154 granted piecemeal to a task that an ordinary task wouldn't otherwise have.
163 The effective capabilities are the ones that a task is actually allowed to
204 operations that a task may do. Currently Linux supports several LSM
208 rules (policies) that say what operations a task with one label may do to
215 interact directly with task and file credentials; rather it keeps system
[all …]
/Documentation/admin-guide/hw-vuln/
core-scheduling.rst:61 ``pid`` of the task for which the operation applies.
67 will be performed for all tasks in the task group of ``pid``.
89 specified task or share a cookie with a task. In combination this allows a
90 simple helper program to pull a cookie from a task in an existing core
95 Each task that is tagged is assigned a cookie internally in the kernel. As
102 The idle task is considered special, as it trusts everything and everything
105 During a schedule() event on any sibling of a core, the highest priority task on
107 the sibling has the task enqueued. For the rest of the siblings in the core,
108 highest priority task with the same cookie is selected if there is one runnable
109 in their individual run queues. If a task with same cookie is not available,
[all …]
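The sibling-selection rule in the core-scheduling.rst snippet can be sketched as a toy model (runqueue contents and cookie names are invented; real cookies are opaque kernel values managed via ``prctl(2)``):

```python
def sibling_pick(runqueue, core_cookie):
    # Scan the sibling's runqueue in priority order; only a task
    # sharing the core-wide cookie may run, otherwise the sibling
    # idles (the idle task is trusted by everything).
    for prio, cookie, name in sorted(runqueue, reverse=True):
        if cookie == core_cookie:
            return name
    return "idle"

rq = [(3, "red", "r1"), (2, "blue", "b1"), (1, "red", "r2")]
assert sibling_pick(rq, "blue") == "b1"
assert sibling_pick(rq, "green") == "idle"
```

Forcing the sibling idle rather than running a mismatched task is the whole point: it keeps mutually untrusted tasks off hyperthread siblings that share L1 state.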
l1d_flush.rst:34 When PR_SET_L1D_FLUSH is enabled for a task a flush of the L1D cache is
35 performed when the task is scheduled out and the incoming task belongs to a
66 **NOTE** : The opt-in of a task for L1D flushing works only when the task's
67 affinity is limited to cores running in non-SMT mode. If a task which
69 a SIGBUS to the task.
/Documentation/admin-guide/cgroup-v1/
cpusets.rst:48 the resources within a task's current cpuset. They form a nested
56 Requests by a task, using the sched_setaffinity(2) system call to
59 policy, are both filtered through that task's cpuset, filtering out any
61 schedule a task on a CPU that is not allowed in its cpus_allowed
63 node that is not allowed in the requesting task's mems_allowed vector.
68 specify and query to which cpuset a task is assigned, and list the
69 task pids assigned to a cpuset.
117 CPUs a task may be scheduled (sched_setaffinity) and on which Memory
124 - Each task in the system is attached to a cpuset, via a pointer
125 in the task structure to a reference counted cgroup structure.
[all …]
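The filtering the cpusets.rst snippet describes amounts to a set intersection: a ``sched_setaffinity(2)`` request is masked by the task's cpuset before taking effect. A sketch (CPU numbers invented; the rejection behaviour here is a simplification of the real error handling):

```python
def filtered_affinity(requested_cpus, cpuset_cpus):
    # CPUs outside the task's cpuset are filtered out of the
    # requested mask; an empty result means the request cannot
    # be honoured at all.
    allowed = requested_cpus & cpuset_cpus
    if not allowed:
        raise ValueError("request has no overlap with the cpuset")
    return allowed

assert filtered_affinity({0, 1, 4, 5}, {0, 1, 2, 3}) == {0, 1}
```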
cgroups.rst:53 A *subsystem* is a module that makes use of the task grouping
61 every task in the system is in exactly one of the cgroups in the
66 At any one time there may be multiple active hierarchies of task
71 which cgroup a task is assigned, and list the task PIDs assigned to
177 - Each task in the system has a reference-counted pointer to a
182 registered in the system. There is no direct link from a task to
188 task's actual cgroup assignments (in particular, moving between
204 - in fork and exit, to attach and detach a task from its css_set.
232 Each task under /proc has an added file named 'cgroup' displaying,
262 The attachment of each task, automatically inherited at fork by any
[all …]
/Documentation/trace/rv/
monitor_wwnr.rst:5 - Type: per-task deterministic automaton
11 This is a per-task sample monitor, with the following
29 This model is broken, the reason is that a task can be running
31 task about to sleep::
37 waking the task up. BOOM, the wakeup will happen while the task is
/Documentation/sound/designs/
compress-accel.rst:31 - signal user space when the task is finished (standard poll mechanism)
41 and a new set of task related ioctls are introduced. The standard
49 input data and second (separate) buffer is used for the output data. Each task
72 all passthrough task ops +----------+
97 Free a set of input/output buffers. If a task is active, the stop
103 Starts (queues) a task. There are two cases of the task start - right after
104 the task is created. In this case, origin_seqno must be zero.
105 The second case is for reusing of already finished task. The origin_seqno
106 must identify the task to be reused. In both cases, a new seqno value
122 Stop (dequeues) a task. If seqno is zero, operation is executed for all
[all …]
/Documentation/bpf/
bpf_iterators.rst:40 For example, users can define a BPF iterator that iterates over every task on
42 them. Another BPF task iterator may instead dump the cgroup information for each
43 task. Such flexibility is the core value of BPF iterators.
107 struct task_struct *task;
118 'task', 'fd' and 'file' field values. The 'task' and 'file' are `reference
131 struct task_struct *task = ctx->task;
135 if (task == NULL || file == NULL)
143 if (tgid == task->tgid && task->tgid != task->pid)
146 if (last_tgid != task->tgid) {
147 last_tgid = task->tgid;
[all …]
/Documentation/arch/sparc/
adi.rst:6 ADI allows a task to set version tags on any subset of its address
8 address space of a task, the processor will compare the tag in pointers
14 The following steps must be taken by a task to enable ADI fully:
17 the task's entire address space to enable/disable ADI for the task.
35 size is same as cacheline size which is 64 bytes. A task that sets ADI
40 When ADI is enabled on a set of pages by a task for the first time,
41 kernel sets the PSTATE.mcde bit for the task. Version tags for memory
63 after it has been allocated to a task and a pte has been created for
66 - When a task frees a memory page it had set version tags on, the page
67 goes back to free page pool. When this page is re-allocated to a task,
[all …]
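The tag comparison the adi.rst snippet describes can be modelled in a few lines (addresses, tag values, and the dict-based "memory" are invented; on real SPARC hardware the check is done by the MMU and a mismatch traps to the task as an exception):

```python
def adi_load(pointer_tag, memory_tags, addr):
    # On a load/store, the version tag carried in the pointer's
    # upper bits must match the version tag set on the target
    # memory; a mismatch raises an exception.
    if memory_tags[addr] != pointer_tag:
        raise MemoryError("ADI version tag mismatch")
    return memory_tags[addr]

tags = {0x1000: 0x5, 0x1040: 0x7}  # one tag per 64-byte line
assert adi_load(0x5, tags, 0x1000) == 0x5
try:
    adi_load(0x5, tags, 0x1040)    # wrong version tag
    raise AssertionError("expected a tag mismatch")
except MemoryError:
    pass
```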
/Documentation/arch/arm64/
asymmetric-32bit.rst:68 On a homogeneous system, the CPU affinity of a task is preserved across
82 2. Otherwise, the cpuset hierarchy of the task is walked until an
84 affinity of the task is then changed to match the 32-bit-capable
90 A subsequent ``execve(2)`` of a 64-bit program by the 32-bit task will
92 affinity of the task using the saved mask if it was previously valid.
97 Calls to ``sched_setaffinity(2)`` for a 32-bit task will consider only
99 affinity for the task is updated and any saved mask from a prior
105 Explicit admission of a 32-bit deadline task to the default root domain
110 ``execve(2)`` of a 32-bit program from a 64-bit deadline task will
111 return ``-ENOEXEC`` if the root domain for the task contains any
[all …]
